Lectures On Finite Element Method - PDFCOFFEE.COM (2024)

Anders Logg

Kent-Andre Mardal

Editors

Lectures on the Finite Element Method

Contents

1

The finite element method 1.1 A simple model problem . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Solving Poisson’s equation using the finite element method . . . 1.3 Solving the Poisson equation with FEM using abstract formalism 1.4 Galerkin orthogonality . . . . . . . . . . . . . . . . . . . . . . . . .

. . . .

1 1 2 5 6

2

A short look at functional analysis and Sobolev spaces 2.1 Functional analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Sobolev spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

9 9 14

3

Crash course in Sobolev Spaces 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . 3.2 Sobolev spaces, norms and inner products . . . 3.3 Spaces and sub-spaces . . . . . . . . . . . . . . . 3.4 Norms and Semi-norms . . . . . . . . . . . . . . 3.5 Examples of Functions in Different Spaces . . . 3.6 Sobolev Spaces and Polynomial Approximation 3.7 Eigenvalues and Finite Element Methods . . . . 3.8 Negative and Fractional Norms . . . . . . . . . . 3.9 Exercises . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . .

17 17 17 18 19 19 19 21 22 25

4

Finite element error estimate 4.1 Ingredients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Error estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Adaptivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

27 27 29 33

5

Finite element function spaces 5.1 The finite element definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Common elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

37 37 43

6

Discretization of a convection-diffusion problem 6.1 Introduction . . . . . . . . . . . . . . . . . . . . 6.2 Streamline diffusion/Petrov-Galerkin methods 6.3 Well posedness of the continuous problem . . 6.4 Error estimates . . . . . . . . . . . . . . . . . . 6.5 Exercises . . . . . . . . . . . . . . . . . . . . . .

47 47 50 53 55 56

3

. . . . .

. . . . . . . . .

. . . . .

. . . . . . . . .

. . . . .

. . . . . . . . .

. . . . .

. . . . . . . . .

. . . . .

. . . . . . . . .

. . . . .

. . . . . . . . .

. . . . .

. . . . . . . . .

. . . . .

. . . . . . . . .

. . . . .

. . . . . . . . .

. . . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . .

. . . . . . . . .

. . . . .

. . . . .

Contents

4 7

8

9

Stokes problem 7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.2 Finite Element formulation . . . . . . . . . . . . . . . . . . . . . . . . 7.3 Examples of elements . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4 Stabilization techniques to circumvent the Babuska-Brezzi condition 7.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

57 57 59 63 65 66

Efficient Solution Algorithms: Iterative methods and Preconditioning 8.1 The simplest iterative method: the Richardson iteration . . . . . . . 8.2 The idea of preconditioning . . . . . . . . . . . . . . . . . . . . . . . 8.3 Krylov methods and preconditioning . . . . . . . . . . . . . . . . . 8.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

69 69 74 75 82

Linear elasticity and singular problems 9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2 The operator ∇ · e and rigid motions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3 Locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

85 85 86 90

10 Finite element assembly 10.1 Local to global mapping ι T 10.2 The element matrix A T . . . 10.3 Affine mapping . . . . . . . 10.4 How do we compute A T ? .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

93 93 94 95 95

11 The finite element method for time-dependent problems 97 11.1 The FEM for u˙ = f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 11.2 The FEM for u˙ + A(u) = f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 References

103

Preface

5

Acknowledgement Miro, Solving, Ingeborg, Mikkel have all done a great deal. Insert text her.

1

The finite element method

By Anders Logg, Kent–Andre Mardal

1.1

A simple model problem

Consider, in a domain Ω ⊂ Rd , the Poisson equation

−∇ · (κ ∇u) = f u = u0 −κ ∇ u · n = g

in Ω, on ΓD ⊂ ∂Ω,

(1.1)

on ΓN ⊂ ∂Ω,

where u = u( x ) is some unknown field, κ : Ω → R(d×d) is some given coefficient matrix and f = f ( x ) is a given source function. The boundary ∂Ω of Ω is a union of two subboundaries, ∂Ω = ΓD ∪ ΓN . where ΓD is the Dirichlet boundary and ΓN is the Neumann boundary. The Dirichlet boundary condition, u = u0 , specifies a prescribed value for the unknown u on ΓD . The Neumann boundary condition, −κ ∇u · n = g, specifies a prescribed value for the (negative) normal derivative of u on ΓN . We often call the Dirichlet boundary condition an essential boundary condition, while we call Neumann boundary condition a natural boundary condition. Let us look at one of the many examples where the equations (4.55) arises. Let u = u( x ) be the temperature in a body Ω ⊂ Rd at a point x in the body, let q = q( x ) be the heat flux at x, let f be the heat source and let ω ⊂ Ω be a small test volume. Conservation of energy gives dE = dt

Z

q · n ds −

Z

∂ω

f dx = 0,

(1.2)

ω

that is, the outflow of the energy over the boundary ∂ω is equal to the energy emitted by the heat source function f . Fourier’s law relates the heat flux to the temperature in the following way: q = −κ ∇u.

(1.3)

This gives us Z

−κ ∇u · n ds = ∂ω

Z

f dx. ω

1

(1.4)

Chapter 1. The finite element method

2

Figure 1.1: Sketch of the domain Ω and the two subboundaries ΓD and ΓN .

By the Gauss theorem, Z

−κ ∇u · n ds =

Z ∂ω

∇ · (κ ∇u) dx =

ω

Z Zω

∇ · (−κ ∇u) dx

(1.5)

f dx.

(1.6)

ω

Equation (1.6) holds for all test volumes ω ⊂ Ω. Thus, if u, κ and f are regular enough, we obtain Z

(−∇ · (κ ∇u) − f ) dx = 0 ∀ ω ⊂ Ω

(1.7)

ω

−∇ · (κ ∇u) = f

in Ω.

(1.8)

The Boundary conditions of this problem becomes u = u0

−κ ∇ u · n = g

on ΓD on ΓN

(1.9)

(recall that q = −κ ∇u). This is illustrated in Figure 1.2. If we choose the special case where κ = 1, we obtain the more standard Poisson equation

−∆u = f

in Ω.

(1.10)

u = u0

on ΓD

(1.11)

on ΓN .

(1.12)

Then, the boundary conditions becomes

1.2

∂u =g ∂n

Solving Poisson’s equation using the finite element method

Solving a PDE using the finite element method is done in four steps: 1. Strong form,

Chapter 1. The finite element method

3

Figure 1.2: Sketch of the domain Ω and the two subboundaries ΓD and ΓN .

2. Weak (variational) form, 3. Finite element method, 4. Solution algorithm. Let us go through these four steps for the Poisson problem. 1.2.1

Strong form of Poisson’s equation

−∇ · (κ ∇u) = f u = u0 −κ ∇ u · n = g Recall that ∇u · n = 1.2.2

in Ω, on ΓD ⊂ ∂Ω,

(1.13)

on ΓN ⊂ ∂Ω.

∂u ∂n .

Weak form of Poisson’s equation

To obtain the weak form we integrate (sometimes integration by parts is needed) the product of the ˆ where Vˆ is called a test space: strong form of the equation multiplied by a test function v ∈ V, Z Ω

Z Ω

−∇ · (κ ∇u)v dx =

κ ∇u · ∇v dx −

Z

κ ∂Ω

∂u v ds = ∂n

Z Ω

Z Ω

f v dx

∀ v ∈ Vˆ

(1.14)

f v dx

ˆ ∀ v ∈ V.

(1.15)

Here we have done integration by parts using that Z Ω

(∇q)w dx =

Z

(q · n)w ds − ∂Ω

Z Ω

q(∇w) dx,

(1.16)

which in our case becomes Z Ω

−∇ · (κ ∇u)v dx =

Z

−κ ∂Ω

∂u v ds + ∂n

Z Ω

κ ∇u · ∇v dx.

(1.17)

Chapter 1. The finite element method

4

Letting v = 0 on the Dirichlet boundary, ΓD , the integral over the boundary becomes Z

−κ ∂Ω

∂u v ds = ∂n

Z ΓN

−κ

∂u v ds = ∂n

Z ΓN

gv ds.

(1.18)

ˆ ∀ v ∈ V.

(1.19)

We have arrived at the follwing problem: find u ∈ V such that Z Ω

κ ∇u · ∇v dx =

Z Ω

f v dx −

Z ΓN

gv ds

The test space Vˆ is defined by 1 Vˆ = H0,Γ (Ω) = {v ∈ H 1 (Ω) : v = 0 on ΓD } D

(1.20)

and the trial space V, containing the unknown function u, is defined similar to Vˆ but with a shifted Dirichlet condition: V = Hu10 ,ΓD (Ω) = {v ∈ H 1 (Ω) : v = u0 on ΓD }. (1.21)

1.2.3

The finite element method for Poisson’s equation

We discretize the variational problem (1.19) by looking for a solution in a discrete trial space and using a discrete test function. The finite element problem is: find uh ∈ Vh ⊂ V such that Z Ω

κ ∇uh · ∇v dx =

Z Ω

f v dx −

Z ΓN

gv ds

ˆ ∀ v ∈ Vˆh ⊂ V,

(1.22)

ˆ respectively. where Vh and Vˆh are discrete subspaces of V and V,

1.2.4

Solution algorithm

Our question is now: How do we solve the discrete variational problem (1.22)? We introduce a basis for V and Vh , and make an Anzats: N

uh ( x ) =

∑ U j φ j ( x ),

(1.23)

j =1

where φj : Ω → R,

j = 1, . . . , N,

(1.24)

is basis for Vh . Inserting this into equation (1.22) and letting v = φˆ i , i = 1, . . . , N, we obtain ! Z Z Z N

κ∇

∑ Uj φj

· ∇φˆ i dx =

j =1 N

∑ Uj

j =1

Z Ω

κ ∇φj · ∇φˆ i dx =

Z Ω

f φˆ i dx −

f φˆ i dx −

ΓN

Z ΓN

gφˆ i ds,

i = 1, 2, . . . , N,

(1.25) gφˆ i ds,

i = 1, 2, . . . , N.

Chapter 1. The finite element method

5

We recognize this as a system of linear equations: N

∑ Aij Uj = bi ,

i = 1, 2, . . . , N,

j =1

(1.26)

AU = b, where Aij = bi =

Z ZΩ Ω

κ ∇φj · ∇φˆ i dx, f φˆ i dx −

Z ΓN

gφˆ i ds.

1.3

Solving the Poisson equation with FEM using abstract formalism

1.3.1

The problem written in strong form

(1.27)

The strong form of the Poisson equation written as a linear system reads Au = f ,

(+ BCs ),

(1.28)

where A is a discrete differential operator. 1.3.2

The problem written in weak (variational) form

Let V be a Hilbert space with inner product h·, ·i, then

h Au, vi = h f , vi

(1.29)

Define a(u, v) = h Au, vi, L ( v ) = h f , v i,

(1.30)

where a is a bilinear form (not necessarily an inner product) and L is a linear form (a functional): a : V × Vˆ → R, L : Vˆ → R.

(1.31)

The variational problem becomes: find u ∈ V such that a(u, v) = L(v) 1.3.3

ˆ ∀ v ∈ V.

(1.32)

Finite element method

In the finite element problem, we look for a discrete solution: find uh ∈ Vh such that a(uh , v) = L(v)

∀ v ∈ Vˆh .

(1.33)

Chapter 1. The finite element method

6 1.3.4

Solution algorithm

Let {φi }iN=1 be a basis for Vh . Make an Anzats N

uh ( x ) =

∑ U j φ j ( x ).

(1.34)

j =1

Inserting this to the variational form, it follows ! N

∑ Uj φj , φˆ i

a

= L φˆ i ,

i = 1, 2, · · · , N,

j =1

(1.35)

N

∑ Uj a(φj , φˆ i ) = L

φˆ i ,

i = 1, 2, · · · , N.

j =1

As before, uh may be computed by solving a linear system N

∑ Aij Uj = bi ,

i = 1, 2, . . . , N,

(1.36)

j =1

AU = b, where Aij = a(φj , φˆ i ), bi = L φˆ i .

1.4

(1.37)

Galerkin orthogonality

We will now show Galerkin orthogonality. First, we know that a(u, v) = L(v) a(uh , v) = L(v)

∀ v ∈ V, ∀ v ∈ Vh ⊂ V.

(1.38)

Using these results and the linearity of the bilinear form, we get a(u − uh , v) = a(u, v) − a(uh , v) = L(v) − L(v) = 0

∀ v ∈ Vh ,

(1.39)

or written symbolically u − uh ⊥ a Vh .

(1.40)

This property is called Galerkin orthogonality. The error, e = u − uh , is orthogonal (in the sense of the bilinear form a) to the test space Vh . Thus, uh is the best possible approximation of u in Vh . We will continue this concept in the next chapter.

Chapter 1. The finite element method

Figure 1.3: The finite element solution uh ∈ Vh ⊂ V is the projection of u ∈ V in the sense of the bilinear form a onto the subspace Vh and is consequently the best possible approximation of u in Vh .

7

2

A short look at functional analysis and Sobolev spaces

By Anders Logg, Kent–Andre Mardal

The finite element method (FEM) is a general framework for numerical solution of PDEs. FEM is written in the language of functional analysis, therefore we need to introduce basic concepts and notations from functional analysis and Sobolev spaces. The fundamental idea is that functions are vectors in a function space which is a vector space. The properties of a vector space is briefly reviewed below. Then we may equip the spaces with norms and inner-products which allow us to quantify, e.g., magnitudes and differences between functions. A fundament mathematical difficulty is, however, that the function spaces typically will be infinite dimensional in the continuous setting, but this difficulty will not be addressed here.

2.1

Functional analysis

Definition 2.1. Vector space (over a field F ∈ R) A vector space is a set V equipped with, • addition + : V × V → V • multiplication · : F × V → V Where + and · satisfy the following conditions 1. + is commutative:

v+u = u+v

2. + is associative:

u + (v + w) = (u + v) + w

3. additive identity:

∃ 0 ∈ V such that v + 0 = 0 + v = v

4. additive inverse:

∃ − v ∈ V such that v + (−v) = (−v) + v = 0

5. · is distributive:

c · (u + v) = c · u + c · v

6. · is distributive:

(c + d) · v = c · v + d · v

7. · is associative:

c · (d · v) = (c · d) · v

8. multiplicative identity:

1·v = v

for all u, v, w ∈ V and c, d ∈ R. 9

Chapter 2. A short look at functional analysis and Sobolev spaces

10 Examples: 1. V = R 2. V = R3

3. V = R N , [ x1 , . . . , x N ] + [y1 , . . . , y N ] = [ x1 + y1 , . . . , x N + y N ] and α[ x1 , . . . , x N ] = [αx1 , . . . , αx N ] 4. V = {v : [0, 1] → R | v is continuous} 5. V = {v : [0, 1] → R | v( x ) 6 1,

∀ x ∈ [0, 1]}, NOT a vector space!

Definition 2.2. Inner product space (over a field F = R) An inner product space is a vector space with an inner product, a map,

h·, ·i : V × V → F, satisfying the following conditions: 1.

hv, wi = hw, vi ∀ v, w ∈ V

2.

hαv, wi = α hv, wi ∀ v ∈ V and ∀ α ∈ F hu + v, wi = hu, wi + hv, wi ∀ u, v, w ∈ V

3.

hv, vi > 0 with

hv, vi = 0 iff v = 0

(conjugate symmetry) ) (linearity) (positive definite)

Examples: 1. V = R N ,

hv, wi = ∑iN=1 vi wi

2. V = `2 ,

hv, wi = ∑i∞=1 vi wi R hv, wi = Ω vw dx

3. V = C ∞ (Ω),

`2 is the space of all sequences (or infinite vectors) that satisfy ∑i v2i < ∞. Definition 2.3. Orthogonality Let V be an inner product space. Two vectors u, v ∈ V are said to be orthogonal if

hv, wi = 0. Examples: 1. V = R3 ,

v = (1, 2, 3),

2. V = P 2 ([−1, 1]),

u = 1,

w = (3, 0, −1)

v = x,

w = 21 (3x2 − 1)

Definition 2.4. Normed vector space (over a field F) A normed vector space is a vector space with a norm, a map,

k · k : V → R,

(Legendre polynomials)

Chapter 2. A short look at functional analysis and Sobolev spaces

11

satisfying the following conditions:

kαvk = |α|kvk, ∀ v ∈ V and ∀ α ∈ F (Positive homogeneity) 2. ku + vk 6 kuk + kvk, ∀ u, v ∈ V (triangle inequality) 3. kvk = 0 ⇒ v = 0 (point separation) 1.

Examples: 1. V = R N ,

p 1/p kvk p = ∑iN=1 vi ,

16p 0, such that kvm − vn k < e ∀ m, n > N. Examples: 1. V = R,

k v k = | v |,

vn =

1 n

2. V = R,

k v k = | v |,

vn =

sin n n

3. V = C ([0, 1]),

kvk = kvk∞ ,

vn ( x ) = ∑in=0

xi i!

4. V = C ([−1, 1]),

  −1, vn ( x ) = nx,  1,

x ∈ [−1, − n1 ] x ∈ (− n1 , n1 ) x ∈ [ n1 , 1]

R1 This sequence is Cauchy in the L1 -norm, kvk1 = −1 |v( x )| dx, but not Cauchy in the max norm, kvk∞ = maxx∈[−1,1] |v( x )|, because C ([−1, 1]) with k · k∞ is not complete. Figure 2.1 and 2.2 show the Cauchy sequence for example 1 and 2. Definition 2.6. Completeness A (metric) space, V, is complete if all Cauchy sequences converge to a point in V. Definition 2.7. Banach space A Banach space is a complete normed vector space. Definition 2.8. Hilbert space A Hilbert space is a complete normed inner product space. 1 Can

be generalized to metric spaces, d(vm , vn ) < e

12

Chapter 2. A short look at functional analysis and Sobolev spaces

Figure 2.1: Cauchy sequence:

Figure 2.2: Cauchy sequence:

1 n

for n = 1, . . . , 100.

sin n n

for n = 1, . . . , 100.

Chapter 2. A short look at functional analysis and Sobolev spaces

13

Vector space Normed vector space Inner

Banach

product

space

space

HILBERT SPACE Figure 2.3: Venn diagram of the different spaces.

Definition 2.9. (Continuous) Dual space Let V be a normed vector space. The dual space V 0 (sometimes denoted V ? ) is the space of all continuous, linear functionals on V: V 0 = {l : V → R | kl k < ∞} where, kl k = sup |l (v)| k v k 61

So far we have looked at a lot of definitions, let us now consider some important results. Theorem 2.1. Cauchy–Schwartz inequality Let V be an inner product space. Then

|hv, wi| 6 kvk · kwk ∀ v, w ∈ V. Theorem 2.2. Banach fixed-point theorem Let V be a Banach space and let T: V→V be a continuous mapping on V, that is,

∃ M < 1 : k T (v) − T (w)k 6 Mkv − wk ∀ v, w ∈ V. ¯ Then ∃! v¯ ∈ V, such that T v¯ = v. Examples: 1. V = R,

Tv = 2v ,

2. V = R+ ,

Tv =

v¯ = 0

v+2/v 2 ,

v¯ =

2

Theorem 2.3. Riesz representation theorem Let H be a Hilbert space and let H 0 denote its dual space. Then for all l ∈ H 0 there exists a unique element lˆ ∈ H, such that ˆ vi ∀ v ∈ H l (v) = hl, Theorem 2.4. Integration by parts in n–dimensions

Chapter 2. A short look at functional analysis and Sobolev spaces

14

Let Ω ∈ Rn and let v and w be functions in H 1 (Ω). Then, Z Ω

∂v w dx = − ∂xi

Z Ω

∂w v dx + ∂xi

Z

v w ni dS, ∂Ω

where ni it the i’th normal component.

2.2

Sobolev spaces

We will now turn our attention to the ralated topic of Sobolev spaces. Definition 2.10. The L2 (Ω) space Let Ω be an open subset of Rn , with piecewise smooth boundary, then L2 (Ω) is defined by L2 ( Ω ) = { v : Ω → R |

Z Ω

v2 dx < ∞}

Examples: 1. v( x ) = 2. v( x ) =

√1 , x 1

1

x4

,

Ω = (0, 1),

v 6 ∈ L2 ( Ω )

Ω = (0, 1),

v ∈ L2 ( Ω )

Theorem 2.5. L2 with hv, wi =

R

vw dx is a Hilbert space.

Definition 2.11. Weak derivative (first order) Let v ∈ L2 (Ω). The weak derivative of v (if it exists), is a function Z Ω

Z

∂v φ dx = − ∂xi

v

∂φ dx, ∂xi

∂v ∂xi

∈ L2 (Ω) satisfying,

∀ φ ∈ C0∞ (Ω).

Definition 2.12. Weak derivative (general order) Let v ∈ L2 (Ω). The weak derivative of v (if it exists), is a function ∂α v ∈ L2 (Ω) satisfying Z Ω

∂α v φ dx = (−1)|α|

where ∂α φ =

Z Ω

∀ φ ∈ C0∞ (Ω)

v ∂α φ dx,

∂|α| . ∂ α1 x 1 ∂ α2 x 2 . . . ∂ α n x n

Lemma 2.1. A weak derivative (if it exist), is unique. Lemma 2.2. A (strong) derivative (if it exist), is a weak derivative. Definition 2.13. The Sobolev space H m The sobolev space H m is the subspace of functions v in L2 (Ω), which possess weak derivatives ∂α for |α| 6 m. The corresponding norm is s

kvk H k = and seminorm

s

|v| H k =

Z

| α |6k Ω

Z

|α|=k Ω

|∂α v|2 dx ≡

|∂α v|2 dx ≡

s

| α |6k

s

|α|=k

k∂α vk2L2 (Ω)

k∂α vk2L2 (Ω) .

Chapter 2. A short look at functional analysis and Sobolev spaces Theorem 2.6. H 1 is a Hilbert space

hv, wi = Theorem 2.7. Poincaré inequality Let v ∈ H01 (Ω). Then, where C depends only on Ω.

Z Ω

vw dx +

Z Ω

∇v · ∇w dx

k v k L2 ( Ω ) 6 C | v | H 1 ( Ω ) ,

15

3

Crash course in Sobolev Spaces

By Anders Logg, Kent–Andre Mardal

3.1

Introduction

Sobolev spaces are fundamental tools in the analysis of partial differential equations and also for finite element methods. Many books provide a detailed and comprehensive analysis of these spaces that in themselves deserve significant attention if one wishes to understand the foundation that the analysis of partial differential equations relies on. In this chapter we will however not provide a comprehensive mathematical description of these spaces, but rather try to provide insight into their use. We will here provide the definition of these spaces. Further we will show typical functions, useful for finite element methods, that are in some but not all spaces. We also show how different norms capture different characteristics.

3.2

Sobolev spaces, norms and inner products

Sobolev spaces are generalizations of L p spaces. L p spaces are function spaces defined as follows. Let u be a scalar valued function on the domain Ω, which for the moment will be assumed to be the unit interval (0, 1). Then

kuk p = (

Z 1 0

|u| p dx )1/p .

L p (Ω) consists of all functions for which kuk p < ∞. Sobolev spaces generalize L p spaces by also including the derivatives. On the unit interval, let

kuk p,k = (

Z

∂u

∑ |( ∂x )i | p dx)1/p .

(3.1)

Ω i ≤k

p

p

Then the Sobolev space Wk (Ω) consists of all functions with kuk p,k < ∞. Wk is a so-called Banach space - that is a complete normed vector space. The corresponding semi-norm, that only include the highest order derivative is Z ∂ |u| p,k = ( ∑ |( )i u| p dx )1/p . (3.2) ∂x Ω i =k The case p = 2 is special in the sense that (3.1) defines an inner product. The Banach space then forms a Hilbert space and these named with H in Hilbert’s honor. That is H k (Ω) = W 2,k (Ω). For the most part, we will employ the two spaces L2 (Ω) and H 1 (Ω), but also H 2 and H −1 will be used. The difference between the norm in L2 (Ω) and H 1 (Ω) is illustrated in the following example. 17

Chapter 3. Crash course in Sobolev Spaces

18

Figure 3.1: Left picture shows sin(πx ) on the unit interval, while the right picture shows sin(10πx ).

Example 3.1. Norms of sin(kπx ) Consider the functions uk = sin(kπx ) on the unit interval. Figure 3.1 shows the function for k = 1 and k = 10. Clearly, the L2 and L7 behave similarly in the sense that they remain the same as k increases. On the other hand, the H 1 norm of uk increases dramatically as k increases. The following code shows how the norms are computed using FEniCS. Python code 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

from dolfin import * N = 10000 mesh = UnitInterval(N) V = FunctionSpace(mesh, "Lagrange", 1) for k in [1, 100]: u_ex = Expression("sin(k*pi*x[0])", k=k) u = project(u_ex, V) L2_norm = sqrt(assemble(u**2*dx)) print "L2 norm of sin(%d pi x) %e " % (k, L2_norm) L7_norm = pow(assemble(abs(u)**7*dx), 1.0/7) print "L7 norm of sin(%d pi x) %e " % (k, L7_norm) H1_norm = sqrt(assemble(u*u*dx + inner(grad(u), grad(u))*dx )) print "H1 norm of sin(%d pi x) %e" % (k, H1_norm)

k\norm 1 10 100

L2 0.71 0.71 0.71

L7 0.84 0.84 0.84

H1 2.3 22 222

Table 3.1: The L2 , L7 , and H 1 norms of sin(kπx ) for k=1, 10, and 100.

3.3

Spaces and sub-spaces

The Sobolev space with k derivatives in L2 (Ω) was denoted by H k (Ω). The subspace of H k with k − 1 derivatives equal to zero at the boundary is denoted H0k (Ω). For example, H01 (Ω) consists of

Chapter 3. Crash course in Sobolev Spaces

19

all functions in H 1 that are zero at the boundary. Similarly, we may also defined a subspace Hg1 (Ω) which consists of all functions in H 1 (Ω) that are equal to the function g on the boundary. Mathematically, it is somewhat tricky to defined that a function in H 1 is equal to another function as it can not be done in a pointwise sense. This difficulty is resolved by the concept of a trace usually denoted by T. The concept of a trace is tricky, for example if T takes a function u in H 1 (Ω) and restrict it to ∂Ω then Tu 6∈ H 1 (∂Ω). In fact, in general we only have Tu ∈ H 1/2 (∂Ω).

3.4

Norms and Semi-norms

The norm k · k p,k defined in 3.1 is a norm which means that kuk p,k > 0 for all u 6= 0. On the other hand | · | p,k is a semi-norm, meaning that |u| p,k ≥ 0 for all u. The space H 1 (Ω) is defined by the norm

k u k1 = (

Z Ω

u2 + (∇u)2 dx )1/2

and contains all functions for which kuk1 ≤ ∞. Often we consider subspaces of H 1 satisfying the Dirichlet boundary conditions. The most common space is denoted H01 . This space contains all functions in H 1 that are zero on the boundary. The semi-norm | · |1 defined as

| u |1 = (

Z Ω

(∇u)2 dx )1/2

is a norm on the subspace H01 . In fact, as we will see later, Poincare’s lemma ensures that k · k1 and | · |1 are equivalent norms on H01 (see Exercise 3.5.

3.5

Examples of Functions in Different Spaces

The above functions sin(kπx ) are smooth functions that for any k are infinitely many times differentiable. They are therefore members of any Soblev space. On the other had, the step function in upper picture in Figure 3.2 is discontinuous in x = 0.2 and x = 0.4. Obviously, the function is in L2 (0, 1), but the function is not in H 1 (0, 1) since the derivative of the function consists of Dirac’s delta functions1 that are ∞ at x = 0.2 and −∞ in x = 0.4. The hat function in the lower picture in Figure 3.2 is a typical first order finite element function. The function is in both L2 (0, 1) and H 1 (0, 1) (see Exercise 3.3). In general, functions in H q are required to be in C q−1 , where C k is the class where the k’th derivatives exist and are continuous.

3.6

Sobolev Spaces and Polynomial Approximation

From Taylor series we know that a f ( x + h) may be approximated by f ( x ) and a polynomial in h that depends on the derivatives of f . To be precise,

| f ( x + h) − ( Pk f )( x )| ≤ O(hk+1 ). Dirac’s delta function δx is 0 everywhere except at x where it is ∞ and L1 (Ω) but not in L2 (Ω). 1 The

R

Ω δx dx

= 1. Hence, Dirac’s delta function is in

20

Chapter 3. Crash course in Sobolev Spaces

Figure 3.2: The upper picture shows a piecewise function, discontinuous at x = 0.2 and x = 0.2, while the lower picture shows a linear function that is continuous.

Chapter 3. Crash course in Sobolev Spaces

21

Here, ( Pk f )( x ) is a polynomial of degree k in h, f (n) denotes the n’th derivative of f , and the error will be of order k + 1 in h. To be precise, k

( Pk f )( x ) = f ( x ) +

n =1

f (n) ( x ) n h . n!

In general, approximation by Taylor series bears strong requirement on the smoothness of the solution which needs to be differentiable in a point-wise sense. However, in Sobolev spaces we have the very usefull approximation property

|u − Pk u|m,p ≤ Chk−m |u|k,p

for m = 0, 1, . . . , k and k ≥ 1.

This property is used extensively in analysis of finite element methods. The above approximation property is often called the Bramble-Hilbert lemma for k ≥ 2 and the case k = 1 was included by a special interpolation operator by Clement, the so-called Clement interpolant. For proof, see e.g. Braess [2007], Brenner and Scott [2008].

3.7

Eigenvalues and Finite Element Methods

We remember that for −∆ on the unit interval (0, 1), the eigenvalues and eigenvectors are (πk)2 and sin(πkx ), k = 1, . . . , ∞, respectively. It is natural to expect that the eigenvalues in the discrete setting approximate the continuous eigenvalues such that the minimal eigenvalue is ≈ π 2 , while the maximal eigenvalue is ≈ π 2 /h2 , where k = 1/h is the largest k that may be represented on a mesh with element size h. Computing the eigenvalues of the finite element stiffness matrix in FEniCS as2 , Python code 1

A = assemble_system(inner(grad(u), grad(v))*dx, Constant(0)*v*dx, bc)

reveals that the eigenvalues are differently scaled. In fact, the minimal eigenvalue is ≈ π 2 h and that the maximal eigenvalue is ≈ π 2 /h. The reason is that the finite element method introduces a mesh-dependent scaling. To estimate the continuous eigenvalues we instead compute the eigenvalues of the generalized eigenvalue problem, Ax = λMx, (3.3) where A is the above mentioned stiffness matrix and M is the mass matrix (or the finite element identity matrix) Python code 1

M = assemble_system(inner(u*v*dx, Constant(0)*v*dx, bc)

Figure 3.3 shows the eigenvalues of −∆, A, and (3.3) based on the following code: Python code 1 2 3 4 5 6 7 8 9

from dolfin import * import numpy from scipy import linalg, matrix def boundary(x, on_boundary): return on_boundary for N in [100, 1000]: mesh = UnitIntervalMesh(N) V = FunctionSpace(mesh, "Lagrange", 1) 2 We

use the assemble_system function to enforce the Dirichlet condition in symmetric fashion.

Chapter 3. Crash course in Sobolev Spaces

22

Figure 3.3: A log-log plot of the eigenvalues of A, M−1 A, and −∆.

10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35

u = TrialFunction(V) v = TestFunction(V) bc = DirichletBC(V, Constant(0), boundary) A, _ = assemble_system(inner(grad(u), grad(v))*dx, Constant(0)*v*dx, bc) M, _ = assemble_system(u*v*dx, Constant(0)*v*dx, bc) AA = matrix(A.array()) MM = matrix(M.array()) k = numpy.arange(1, N, 1) eig = pi**2*k**2 l1, v l2, v

= linalg.eigh(AA) = linalg.eigh(AA, MM)

print "l1 min, max ", min(l1), max(l1) print "l2 min, max ", min(l2), max(l2) print "eig min, max ", min(eig), max(eig) import pylab pylab.loglog(l1[2:], linewidth=5) # exclude the two smallest (they correspond to Dirichlet cond)) pylab.loglog(l2[2:], linewidth=5) # exclude the two smallest again pylab.loglog(eig, linewidth=5) pylab.legend(["eig(A)", "eig(A,M)", "cont. eig"], loc="upper left") pylab.show()

From Figure 3.3 we see that that the eigenvalues of (3.3) and −∆ are close, while the eigenvalues of A is differently scaled. We remark that we excluded the two smallest eigenvalues in the discretized problems as they correspond to the Dirichlet conditions.

3.8

Negative and Fractional Norms

As will be discussed more thoroughly later, −∆ is a symmetric positive operator and can be thought of as a infinite dimensional matrix that is symmetric and positive. It is also know from Riesz representation theorem that if u solves the problem

−∆u = f , u = 0,

in Ω, on ∂Ω

Chapter 3. Crash course in Sobolev Spaces

23

then

| u | 1 = k f k −1 .

(3.4)

This implicitly define the H −1 norm, although the definition then requires the solution of a Poisson problem. For example, in the previous example where uk = sin(kπx ), we have already estimated that πk | u k |1 = √ and therefore kuk k−1 = |(−∆)−1 uk |1 = √ 1 . 2 2kπ Let us now generalize these considerations and consider a matrix (or differential operator) A which is symmetric and positive. A has positive and real eigenvalues and defines an inner product which may be represented in terms of eigenvalues and eigenfunctions. Let λi and ui be the eigenvalues and eigenfunctions such that Aui = λi ui Then, x may be expanded in terms of the eigenfunctions ui as x = ∑i ci ui , where ci = ( x, ui ), and we obtain ( x, x ) A = ( Ax, x ) = ( A ∑ ci ui , ∑ c j u j ) = (∑ λi ci ui , ∑ c j u j ) i

j

i

j

Because A is symmetric, the egenfunctions ui are orthogonal to each other and we may choose a normalized basis such that (ui , u j ) = δij . With this normalization, we simply obtain

k x k2A = ( x, x ) A = ( Ax, x ) = ( A ∑ ci ui , ∑ c j u j ) = ∑ λi c2i i

j

i

A generalization of the A−inner product (with corresponding norm) to a Aq −inner product that allow for both negative and franctional q is then as follows

k x k2A,q = ( x, x ) A,q = ∑ λi c2i . q

(3.5)

i

πk Clearly, this definition yields that |uk |1 = √ and kuk k−1 = √ 1 , as above. 2 2kπ As mentioned in Section 3.7, care has to be taken in finite element methods if the discrete eigenvalues are to correspond with the continuous eigenvalues. We will therefore detail the computation of negative and fractional norms in the following. Let λi and ui be the eigenvalues and eigenvectors of the following generalized eigenvalue problem

Aui = λi Mui

(3.6)

and let U be the matrix with the eigenvectors as columns. The eigenvalues are normalized in the sense that U T MU = I where I is the identity matrix. We obtain U T AU = Λ

or

A = MUΛ( MU ) T ,

where Λ is a matrix with the eigenvalues λi on the diagonal. Hence also in terms of the generalized eigenvalue problem (3.6) we obtain the A−norm as

k x k2A = x T MUΛ( MU )T x and we may define fractional and negative norms in the same manner as (3.5), namely that

k x k2A,M,q = x T MUΛq ( MU )T x.

Chapter 3. Crash course in Sobolev Spaces

24

Defining the negative and fractional norms in terms of eigenvalues and eigenvectors is convenient for small scale problems, but it is an expensive procedure because eigenvalue problems are computationally demanding. It may, however, be tractable on subdomains, surfaces, or interfaces of larger problems. We also remark that there are other ways of defining fractional and negative norms. For example, one often used technique is via the Fourier series, c.f. e.g. Rauch [1997]. These different definitions do in general not coincide, in particular because they typically have different requirement on the domain or boundary conditions. One should also be careful when employing the above definition with integer q > 1, in particular because boundary conditions requirements will deviate from standard conditions in the Sobolev spaces for q > 1. Example 3.2. Computing the H 1 , L2 , and H −1 norms Let as before Ω = (0, 1) and uk = sin(πkx ). Table 8.1 shows the H 1 , L2 , and H −1 norms as computed with (3.5) with q = 1, 0, and −1, respectively. Comparing the computed norms with the norms L2 and H 1 norms computed in Example 3.1, we see that the above definition (3.5) reproduces the H 1 and L2 norms with q = 1 and q = 0, respectively. We also remark that while the H 1 norm increases as k increases, the H −1 norm demonstrates a corresponding decrease. Below we show the code for computing these norms. k \norm 1 10 100

H1, q = 1 2.2 22 222

L2 , q = 0 0.71 0.71 0.71

H −1 , q = − 1 0.22 0.022 0.0022

Table 3.2: The L2 , L7 , and H 1 norms of sin(kπx ) for k=1, 10, and 100.

Python code 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29

from dolfin import * from numpy import matrix, diagflat, sqrt from scipy import linalg, random def boundary(x, on_boundary): return on_boundary mesh = UnitIntervalMesh(200) V = FunctionSpace(mesh, "Lagrange", 1) u = TrialFunction(V) v = TestFunction(V) bc = DirichletBC(V, Constant(0), boundary) A, M, AA MM

_ = assemble_system(inner(grad(u), grad(v))*dx, Constant(0)*v*dx, bc) _ = assemble_system(u*v*dx, Constant(0)*v*dx, bc) = matrix(A.array()) = matrix(M.array())

l, v = linalg.eigh(AA, MM) v = matrix(v) l = matrix(diagflat(l)) for k in [1, 10, 100]: u_ex = Expression("sin(k*pi*x[0])", k=k) u = interpolate(u_ex, V) x = matrix(u.vector().array()) H1_norm = pi*k*sqrt(2)/2 print "H1 norm of sin(%d pi x) %e (exact) " % (k, H1_norm) H1_norm = sqrt(assemble(inner(grad(u), grad(u))*dx))

Chapter 3. Crash course in Sobolev Spaces 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56

25

print "H1 norm of sin(%d pi x) %e (|grad(u)|^2) H1_norm = sqrt(x*AA*x.T) print "H1 norm of sin(%d pi x) %e (x A x’ ) W = MM.dot(v) H1_norm = sqrt(x*W*l*W.T*x.T) print "H1 norm of sin(%d pi x) %e (eig)

" % (k, H1_norm) " % (k, H1_norm)

" % (k, H1_norm)

print "" L2_norm = sqrt(2)/2 print "L2 norm of sin(%d pi x) %e (exact) L2_norm = sqrt(assemble(u**2*dx)) print "L2 norm of sin(%d pi x) %e |u|^2 L2_norm = sqrt(x*MM*x.T) print "L1 norm of sin(%d pi x) %e (x M x’ ) W = MM.dot(v) L2_norm = sqrt(x*W*l**0*W.T*x.T) print "L2 norm of sin(%d pi x) %e (eig)

" % (k, L2_norm) " % (k, L2_norm) " % (k, L2_norm)

" % (k, L2_norm)

print "" Hm1_norm = sqrt(2)/2/k/pi print "H^-1 norm of sin(%d pi x) %e (exact) " % (k, Hm1_norm) Hm1_norm = sqrt(x*W*l**-1*W.T*x.T) print "H^-1 norm of sin(%d pi x) %e (eig) " % (k, Hm1_norm) Hm1_norm = sqrt(x*MM*linalg.inv(AA)*MM*x.T) print "H^-1 norm of sin(%d pi x) %e (x inv(A) x’) " % (k, Hm1_norm)

Remark 3.8.1. Norms for |q| > 1. The norm (3.5) is well defined for any |q| > 1, but will not correspond to the corresponding Sobolev spaces. Remark 3.8.2. The standard definition of a dual norm Let (·, ·) A be an inner product over the Hilbert space V. The norm of the dual space is then defined by

k f k A∗ = sup

( f , v) . (v, v) A

k f k−1 = sup

( f , v) . (v, v)1

v ∈V

For example, the H −1 norm is defined as

v∈ H 1

3.9

Exercises

Exercise 3.1. Compute the H 1 and L2 norms of a random function with values in (0, 1) on meshes representing the unit interval of with 10, 100, and 1000 cells. Exercise 3.2. Compute the H 1 and L2 norms of sin(kπx ) on the unit interval analytically and compare with the values presented in Table 3.2. Exercise 3.3. Compute the H 1 and L2 norms of the hat function in Picture 3.2. Exercise 3.4. Consider the following finite element function u defined as ( u=

1 1 h x − h (0.5 − h ), 1 − h x + 1h (0.5 − h),

0,

x = (0.5 − h, 0.5) x = (0.5, 0.5 + h) elsewhere

Chapter 3. Crash course in Sobolev Spaces

26

That is, it corresponds to the hat function in Figure 3.2, where u(0.5) = 1 and the hat function is zero every where in (0, 0.5 − h) and (0.5 + h, 1). Compute the H 1 and L2 norms of this function analytically, and the L2 , H 1 and H −1 norms numerically for h = 10, 100 and 1000. Exercise 3.5. Let Ω = (0, 1) then for all functions in H 1 (Ω) Poincaré’s inequality states that

| u | L2 ≤ C |

∂u | 2 ∂x L

Use this inequality to show that the H 1 semi-norm defines a norm equivalent with the standard H 1 norm on H01 (Ω).

4

Finite element error estimate

By Anders Logg, Kent–Andre Mardal

4.1

Ingredients

We have used the FEM to compute an approximate solution, uh , of a PDE. Fundamental question: How large is the error e = u − uh ? To be able to estimate the error, we need some ingredients: 1. Galerkin orthogonality 2. Interpolation estimates 3. Coercivity (more generally: inf–sup) We will also state the Fundamental theorem of numerical analysis Theorem 4.1. Consistency and stability ⇔ convergence. 4.1.1

Galerkin orthogonality

Let us look at the "abstract" weak formulation of a PDE,

∀ v ∈ V.

a(u, v) = L(v)

(4.1)

Now we let uh ∈ Vh , where Vh is a finite dimensional function space,

∀ v ∈ Vh ⊂ V.

a(uh , v) = L(v)

(4.2)

By subtracting (4.2) from (4.1), we get the Galerkin orthogonality: a(u − uh , v) = 0 4.1.2

∀ v ∈ Vh ⊂ V.

(4.3)

Interpolation estimates

First, let us note that

ku − uh k > inf ku − vk, v∈Vh

(4.4)

for some norm. We need to be able to estimate infv∈Vh ku − vk or at least get a sharp upper bound. We will do this by estimating ku − vk for a particular (a good) choice of v! 27

Chapter 4. Finite element error estimate

28

Let πh u be a piecewise constant approximation of u( x ) (1D). Then for x ∈ ( xi−1 , xi ], from the theory of Taylor expansion, we have  x¯i z }| {  x i −1 + x i  Z x 0 + u (y) dy. u( x ) = u    2 x¯i 

|

{z

}

≡πh u

which leads to

|u − πh u| = |

Z x x¯i

u0 (y) dy | .

Let us consider the L2 –norm. Then,

ku − πh uk2L2 =

Z b a

(u − πh u)2 dx = ∑

Z x i x i −1

i

=∑

(u − πh u)2 dx

Z x Z x i

i

x i −1

x¯i

u0 (y) dy

2 dx

We multiply the integrand by one and use Cauchy–Schwartz inequality.

ku − πh uk = ∑ 2

Z x Z x i x¯i

x i −1

i

1 · u (y) dy

6∑

Z x i

=∑

Z x i

=∑

Z x i

6∑

hi 2

Z x Z x i i

=∑

h2i 2

Z x i

x i −1

i

i

1 6 2

|

Z x x¯i

x i −1

i

Z b a

x x¯i

x i −1

i

i

Z

|x −

x i −1

x i −1

2

1/2 Z x 1/2 !2 0 2 1 dy · (u (y)) dy dx 2

x¯i

12 dy| · |

Z x x¯i

x i −1 + x i |· 2

x i −1

dx

(u0 (y))2 dy| dx Z x x¯i

(u0 (y))2 dy dx

(u0 (y))2 dy dx

(u0 (y))2 dy

(h u0 (y))2 dy =

1 khu0 k2L2 , 2

where hi = xi − xi−1 and h = maxi hi . Thus, we have found an interpolation estimate 1 ku − πh uk L2 6 √ khu0 k L2 . 2

(4.5)

k D p (u − πh u)k L2 6 C ( p, q)khq+1− p D q+1 uk L2 ,

(4.6)

In general, one can prove that

Chapter 4. Finite element error estimate

29

where πh u is an approximation (interpolant) of degree q. C ( p, q) is a constant depending only on p and q. 4.1.3

Coercivity

Definition 4.1. Coercive A bilinear form a : H × H → R is called coercive if there exists a constant α > 0 such that a(v, v) > αkvk2V

∀ v ∈ V.

k · kV is the norm we will use to estimate the error. We now have all the ingredients we need to estimate the error!

4.2

Error estimates

There are two kinds of error estimate and they are both essential! 1. a priori:

e = e(u)

2. a posteriori: e = e(uh ) 4.2.1

A priori error estimate in energy norm

Assumep that a(·, ·) is a symmetric and coercive bilinear form. Then a(·, ·) is an inner product and kvk E = a(v, v) is a norm which we will call the energy norm. Let us look at the error in the energy norm. Let v ∈ Vh , then

kek2E = a(e, e) = a(e, u − uh ) = a(e, u − v + v − uh ) = a(e, u − v) + a(e, v − uh ) | {z }

(4.7) (4.8) (4.9)

∈Vh

= a(e, u − v) + 0 (from Galerkin Orthogonality) q q 6 a(e, e) a(u − v, u − v)

(4.10)

= kek E ku − vk E .

(4.12)

(4.11)

We have used Cauchy–Schwartz inequality ones. Now we divide both sides by kek E and obtain

ku − uh k E 6 ku − vk E

∀ v ∈ Vh .

(4.13)

Thus, the FEM solution is the optimal solution in the energy Norm! We combine this with the interpolation estimate (4.5), by setting v = πh u:

ku − uh k E 6 ku − πh uk E 6 C ( p, q)khq+1− p D q+1 uk. For example in the Poisson problem with piecewise linear functions (q = 1), we have

kvk E =

rZ Ω

|∇v|2 dx.

(4.14) (4.15)

Chapter 4. Finite element error estimate

30 The a prioriestimate (4.15) becomes

kek E 6 C khD2 uk.

(4.16)

A priori error estimate in the V–norm that does not assume symmetry. From coersivity we get 1 a(e, e) α 1 = a(e, u − v + v − uh ) α 1 = a(e, u − v) (from Galerkin Orthogonality) α C 6 k e kV k u − v kV . α

kek2V 6

(4.17) (4.18) (4.19) (4.20)

Here we assumed boundedness of a. By dividing both sides by kekV , we get an inequality known as Cea’s lemma. C kekV 6 ku − vkV ∀ v ∈ Vh (4.21) α As before, we can use an interpolation estimate to obtain

k e kV 6

4.2.2

C · C (q, p) q+1− p q+1 kh D uk . α

(4.22)

A posteriori error estimate for the Poisson problem in the energy norm

We will now derive an a posteriorierror estimate for the Poisson problem. To do this we need the following interpolation estimates:

ke − πh ek T 6 C h T k∇ek T , p ke − πh ek∂T 6 C h T k∇ekωT ,

(4.23) (4.24)

where ωT is the patch of of elements surrounding T. Note that the constant C will change throughout the derivation. We will also need Cauchy’s inequality, ab 6 δ a2 +

b2 , 4δ

a, b, δ > 0.

Recall that the energy–norm for the Poisson problem is

kvk E =

rZ Ω

|∇v|2 dx.

(4.25)

Chapter 4. Finite element error estimate

31

+

Figure 4.1: Illustration of a "jump" at two neighboring facets.

Let us begin the derivation,

kek2E = a(e, e) = a(e, e) − a (e, πh e) | {z }

(4.26)

= a(e, e − πh e)

(4.28)

(4.27)

=0

= =

Z Ω

∇e · ∇(e − πh e) dx

Z

Z

Z

Z

Z

T

T ∈Th

=

T

T ∈Th

=

T ∈Th

=

T ∈Th

∇e · ∇(e − πh e) dx −∆e(e − πh e) dx +

(4.30) Z

∂n e(e − πh e)dS

(−∆u +∆uh )(e − πh e) dx + T | {z }

Z

( f + ∆uh )(e − πh e) dx + ∑ {z } T| S

Z

≡R

T

(4.31)

∂T

= f

T ∈Th

=

(4.29)

R(e − πh e) dx −

1 2

Z

∂n e(e − πh e)dS

(4.32)

(∂n+ e + ∂n− e)(e − πh e)dS | {z }

(4.33)

∂T

∂S

−[∂n uh ]

[∂n uh ] (e − πh e)dS.

(4.34)

∂T

Let us explain a bit before we continue. In equation (4.27) we added the term a(e, πh e), which from Galerkin orthogonality is zero (since πh e ∈ Vh ). We used integration by part to get equation (4.31). In the first term one the right-hand side of equation (4.33), we insert the residual, R ≡ f + ∆uh . For the second term, we look at surface integral over two neighboring facets (S), for all S, see figure 4.1. There normal components, n, will be pointing in opposite direction of each other and we get, ∂n+ e + ∂n− e = n+ · ∇+ e + n− · ∇− e = n+ · (∇+ e − ∇− e) = [∂n e] = − [∂n uh ] .

(4.35)

Chapter 4. Finite element error estimate

32

[∂n uh ] is called a jump. Note that until now, we have only used equalities. Let us look at equation (4.34) in two terms. A≡

Z

R(e − πh e) dx

(4.36)

6 k RkT ke − πh ekT 6 k Rk T C h T k∇ek T

(4.37)

6

T

C h2T

1 k Rk2T + k∇ek2T 2 2

(4.38) (4.39)

and 1 B≡ [∂n uh ] (e − πh e)dS 2 ∂T 1 6 k [∂n uh ] k∂T k(e − πh e)k∂T 2 √ C hT 6 k [∂n uh ] k∂T k∇ekωT 2 C hT 6 k [∂n uh ] k2∂T + ek∇ek2ωT e Z

(4.40) (4.41) (4.42) (4.43)

In equation (4.37) and (4.41), we used Cauchy–Schwarz inequality. For equation (4.38) and (4.42), we used the interpolation estimates (4.23) and (4.24) respectively. Finally we used Cauchy’s inequality with δ = 12 for equation (4.39) and δ = 4e for equation (4.43). Let us sum up what we have so far:

kek2E =

A−B

(4.44)

A+B

(4.45)

C h2T C hT 1 k∇ek2T + ek∇ek2ωT + k Rk2T + k [∂n uh ] k2∂T . 2 2 e

(4.46)

T ∈Th

6

T ∈Th

6

T ∈Th

(4.47) Now we note that

k∇ek2T = kek2E

and

T ∈Th

T ∈Th

k∇ek2ωT 6 N kek2E ,

(4.48)

where N is the maximum number of surrounding elements. We use this and get

kek2E

6

C h2T 1 C hT + eN kek2E + ∑ k Rk2T + k [∂n uh ] k2∂T 2 2 e T ∈T

(4.49)

h

C h2T 1 C hT − eN kek2E 6 ∑ k Rk2T + k [∂n uh ] k2∂T . 2 2 e T ∈T h

(4.50)

Chapter 4. Finite element error estimate

33

Finally we chose e such that ( 12 − eN ) > 0 and we get the a posteriorierror estimate:

kek E 6 C

!1

2

h2T k Rk2T

+ h T k [∂n uh ] k2∂T

≡E

(4.51)

T

4.3

Adaptivity

In many applications we need the error to be less then a given tolerance (TOL). The error will typically be large at some parts of the domain and small at other parts of the domain. We do not want to refine1 all the elements in T , since this will require a lot more computational power and memory. Instead we want to only refine the elements where the error is big. Let us first rewrite the a posteriorierror estimate (4.51) in a more general form,

kek E 6 C

!1 2

γ2T

≡ E.

(4.52)

T

We consider two alternatives, 1. Given TOL > 0, choose T such that the computational norm is minimized and kekV 6 TOL. 2. Given TOL > 0, choose T such that |T | is minimized and E 6 TOL. Both methods are difficult to solve. Here is an algorithm for adaptivity. • Choose T • Compute uh on T • Compute E for uh • While E > TOL: i Refine all cells where γT is large ii Compute uh on T iii Compute E for uh Exercise 4.1. Let {φi }im=0 be the standard nodal basis functions for continuous piecewise linear approximation on Ω = (0, 1) with constant mesh size h = 1/m. (a) Take m = 10. Draw a picture of the basis functions φ0 , φ5 and φ10 . 0 . (b) Draw a similar picture of the derivatives φ00 , φ50 and φ10

Exercise 4.2. Consider the equation  00   −u + u = f in (0, 1), u(0) = 0,   u(1) = 0. 1 By

refining we mean that the elements T are made smaller

(4.53)

Chapter 4. Finite element error estimate

34

(a) Write down a finite element method for this equation using standard continuous piecewise linear polynomials. Show that the degrees of freedom U for the solution u = ∑im=−11 Ui φi may be obtained by solving the linear system ( A + M )U = b. The matrix A is often called the stiffness matrix and M is called the mass matrix. (b) Compute the 9 × 9 matrices A and M for m = 10. Demonstrate that if f ∈ Vh , then the mass matrix M may be used to compute the right-hand side vector b (the load vector) for the finite element discretization of (4.53). Exercise 4.3. Consider the following partial differential equation:  00   −u = f in (0, 1), u0 (0) = 0,   0 u (1) = 0.

(4.54)

(a) Explain why there is something wrong with this equation (why it is not well-posed). Consider both uniqueness and existence of solutions. (b) If you would implement a (standard) finite element method for this equation, what would happen? How would you notice that something was wrong? Exercise 4.4. Consider the following partial differential equation: ( −∇ · ( a∇u) = f in Ω, u = 0 on ∂Ω,

(4.55)

where a = a( x ) is a positive definite n × n matrix at each x ∈ Ω. Prove that the stiffness matrix A (for a suitable finite element space on Ω) is also positive definite, and explain why A is only positive semidefinite for homogeneous Neumann conditions. Implement a simple Python program that computes the stiffness and mass matrices on Ω = (0, 1) for any given m ≥ 2, where m is the number of intervals partitioning (0, 1). Use A and M to solve equation (4.53) for f ( x ) = sin(5πx ). Plot the solution and compare with the analytical solution. Demonstrate that the approximate solution converges to the exact solution when the mesh is refined. What is the convergence rate in L2 ? What is the convergence rate in H 1 ? Hint: Use numpy.array for storing matrices and vectors, numpy.linalg.solve R to solve the linear R system and pylab.plot to plot the solution. Also note that you may approximate bi = Ω φi f dx by f ( xi ) Ω φi dx. Implement a simple Python program that computes the stiffness matrix A on a uniform triangular mesh of the unit square Ω = (0, 1) × (0, 1). Use A to solve Poisson’s equation −∆u = f for f = 2π 2 sin(πx ) sin(πy) and homogeneous Dirichlet conditions. Plot the solution and compare with the analytical solution. Demonstrate that the approximate solution converges to the exact solution when mesh is refined. What is the convergence rate in L2 ? What is the convergence rate in H 1 ? Exercise 4.5. Estimate the H k Sobolev norm of u = sin(kπx ) as a function of k . Exercise 4.6. Solve the problem −∆u = f with homogenous boundary conditions on the unit interval for the case where the analytical solution is u = sin(kπx ) and f is given as −∆u. As we learned in this chapter,

ku − uh k1 ≤ Ch p kuk p+1 .

Chapter 4. Finite element error estimate

35

Estimate C in numerical estimates for k = 1, 10, 100 on meshes with 100, 1000, and 10000 elements and validate the error estimate. Remark: Use errornorms in FEniCS and represent the analytical solution in a higher order space in order to avoid super convergence. Exercise 4.7. Consider the error of the problem in Exercise 4.6 in L2 , L∞ , and L1 norms. Is the order of the approximation the same? Hint: use the least square method to estimate Cx and α x in

k u − u h k x ≤ Cx h α x , where x denotes the norm and Cx depends on u in contrast to Exercise 4.6. Hence, it is advisable to determine Cx and α x for a given k and then change k. Exercise 4.8. Consider the error of the problem in Exercise 4.6 and 4.7 in L2 and H 1 norms, but determine the rate of convergence numerically with respect to the polynomial order of the finite element method. That is, use the least square method to estimate C p and α p in

k u − u h k1 ≤ C p h α p . Here, C p depends on u in contrast to Exercise 4.6. Hence, it is advisable to determine C p and α p for a given k and then change k. Exercise 4.9. Consider the same problem as in 4.6 in 3D (or 2D if your computer does not allow a proper investigation in 3D). Assume that you tolerate a H 1 error of 1.0e − 2. What polynomial order of the Lagrange finite element gives you the answer in minimal amount of computational time? Re-do the experiments with tolerance 1.0e − 4 and 1.0e − 6

5

Finite element function spaces

By Anders Logg, Kent–Andre Mardal

Finite element function spaces (vh ) are constructed by patching together local function spaces, V = V ( T ), defines on finite subsets of the domain Ω. Example: Piecewise linear in 1–D Figure 5.1 shows a function uh ∈ Vh . This is a linear combination of basis function for first order Lagrange elements in 1–D. Figure 5.2 show the (global) basis functions of this function space and 0 figure 5.3 show the local basis function on an element T and T . Example: Piecewise linear in 2–D Figure 5.4 shows a linear combination of piecewise linear basis functions forming a function uh , on a triangle. The different color indicate where the different baisis functions contribute. Figure 5.5 shows a (global) basis funcitons and figure 5.6 show the local basis function on an element T and T 0 .

5.1

The finite element definition

General idea: Define a function space on each local subdomain and patch together the local function space, to create a global function space with the desired continuity. An definition of the finite element was given by Ciarlet in 1975. This serves as our formal definition. Definition 5.1. Finite element (Ciarlet 1975) A finite element is a triple ( T, V , L), where i The domain T is a bounded, closed subset of Rd (for d = 1, 2, 3, . . . ) with nonempty interior and piecewise smooth boundary; ii The space V = V ( T ) is a finite dimensional function space on T of dimension n; iii The set of degrees of freedom (nodes) L = {`1 , `2 , . . . , `n } is a basis for the dual space V 0 ; that is, the space of bounded linear functionals on V . Example: (T) Figure 5.7 shows different kinds of domains, T, for different dimensions, d = 1, 2, 3. Example: (V ) • V = Pq ( T ) = {polynomials on T of degree 6 q} • V = [Pq ( T )]d 37

38

Chapter 5. Finite element function spaces

Figure 5.1: Function that is composted of a linear combinatin of basis functions

Figure 5.2: Basis functions (global)

Chapter 5. Finite element function spaces

Figure 5.3: Local basis functions

Figure 5.4: Function on a triangle that is composted of a linear combinatin of basis functions. The left figure shows a side view while the right figure shows a view from above.

Figure 5.5: Basis functions (global) 2–D

39

40

Chapter 5. Finite element function spaces

Figure 5.6: Local basis functions 2–D

Figure 5.7: Illustration of different domains, T.

Chapter 5. Finite element function spaces

41

• V = subspace of Pq ( T ) Example: (L) • L(v) = v( x¯ ) R • L(v) = T v( x ) dx R • L(v) = S v · n dS Is the standard piecewise linear element, P1 , a finite element? i T is a interval, triangle or tetrahedron: ok ii V = {v : v( x ) = a + bx } ≡ P1 ( T ) dim V = n = d + 1 : ok iii L = {l1 , . . . , ln }, where li (v) = v( xi ) for i = 1, . . . , n. Is this a basis? To be able to show that L is a basis, we need the following lemma. Lemma 5.1. Unisolvence 0 L is a basis for the dual space V , if and only if Lv = 0 implies v = 0. This can be expressed as: 0

L is basis for V ⇔ (Lv = 0 ⇒ v = 0) 0

Proof. Let {φi }in=1 be a basis for V , take ` ∈ V and take v = ∑nj=1 β j φj ∈ V. First we look at the left-hand side; 0

L is basis for V ⇔ ∃! α ∈ Rn : ` =

n

∑ αj `j

j =1

⇔ ∃! α ∈ Rn : `φi = |{z} = bi

n

` j φi ∑ α j |{z}

j =1

for i = 1, . . . , n

= Aij

n

⇔ ∃! α ∈ R : Aα = b ⇔ A is invertible Now we look at the right-hand side;

(Lv = 0 ⇒ v = 0) ⇔ `i ∑ β j φj = 0 for i = 1, . . . , n ⇒ β = 0 j

`i φ j ∑ β j |{z} j

for i = 1, . . . , n ⇒ β = 0

= A ji

T

⇔ A β=0 ⇒β=0 ⇔ A T is invertible ⇔ A is invertible

To sum up:

L is basis for V ⇔ A is invertible ⇔ (Lv = 0 ⇒ v = 0).

Chapter 5. Finite element function spaces

42

We can now check if P1 is a finite element. Take v on a triangle, set v = 0 at each corner. This leads to v = 0 for linear functions. P1 is a finite element. Definition 5.2. Nodal basis The nodal basis {φi }in=1 for a finite element (T, V , L) is the unique basis satisfying

`i (φ J ) = δij . A nodal basis has the desired property that if, uh = ∑nj=1 u j φj , then `i (uh ) = ui . Example: We look at P1 elements on triangle with corners at x1 , x2 and x3 , x1 = (0, 0), x2 = (1, 0), x3 = (0, 1),

`1 v = v( x1 ) φ1 ( x ) = 1 − x1 − x2 `2 v = v( x2 ) φ2 ( x ) = x1 `3 v = v( x3 ) φ3 ( x ) = x2 .

from this we see that φ1 , φ1 and φ1 are a nodal basis. Computing the nodal basis: Let {ψi }in=1 be any basis for P and let {ψi }in=1 be its nodal basis. Then, n

∑ α jk ψk = φi

i =1 n

`i ( ∑ α jk ψk ) = δij i =1

`i (ψk )α jk = δij | {z } Aij

Aα T = I

A is the generalized Vandermonde matrix. Solving for α gives the nodal basis!

5.1.1

Conforming

We will introduce some important function spaces: H 1 (Ω) = {v ∈ L2 (Ω) : ∇v ∈ L2 (Ω)}

(5.1)

H (div; Ω) = {v ∈ L2 (Ω) : ∇ · v ∈ L2 (Ω)}

(5.2)

H (curl; Ω) = {v ∈ L2 (Ω) : ∇ × v ∈ L2 (Ω)}

(5.3)

Chapter 5. Finite element function spaces

43

Note: H 1 (Ω) ⊂ H (div; Ω) ≈ {v : normal component ∈ C0 } H 1 (Ω) ⊂ H (curl; Ω) ≈ {v : tangential component ∈ C0 } If a finite element function space is a subspace of function space V, we call it V-conforming. Example, the Lagrange elements are H 1 -conforming, CGq (T ) ⊂ H 1 (Ω).

5.2

Common elements

Let us have a look at some common elements. First we will look at the most common group of elements, the continues Lagrange elements. These are also know as, continues Galerkin elements or Pq elements. Definition 5.3 (Lagrange element). The Lagrange element (CGq ) is defined for q = 1, 2, . . . by T ∈ {interval, triangle, tetrahedron},

(5.4)

V = P q ( T ),

(5.5)

` i ( v ) = v ( x i ),

i = 1, . . . , n(q),

(5.6)

n(q)

where { xi }i=1 is an enumeration of points in T defined by  0 6 i 6 q,  i/q, x= (i/q, j/q), 0 6 i + j 6 q,  (i/q, j/q, k/q), 0 6 i + j + k 6 q,

T interval, T triangle, T tetrahedron.

(5.7)

The dimension of the Lagrange finite element thus corresponds to the dimension of the complete polynomials of degree q on T and is   q + 1, 1 n(q) = (q + 1)(q + 2),  12 6 ( q + 1)( q + 2)( q + 3),

T interval, T triangle, T tetrahedron.

(5.8)

Figure 5.8 show the Lagrange elements for different dimensions and how the nodal points are placed. Now we will look at some H (div)-conforming elements. First up is the Raviart–Thomas RTq elements. Definition 5.4 (Raviart–Thomas element). The Raviart–Thomas element (RTq ) is defined for q = 1, 2, . . . by T ∈ {triangle, tetrahedron},

(5.9)

V = [Pq−1 ( T )]d + x Pq−1 ( T ), ( R v · n p ds, for a set of basis functions p ∈ Pq−1 ( f ) for each facet f, Rf L= for a set of basis functions p ∈ [Pq−2 ( T )]d for q > 2. T v · p dx,

(5.10) (5.11)

The dimension of RTq is n(q) =

q ( q + 2), 1 2 q ( q + 1)( q + 3),

T triangle, T tetrahedron.

(5.12)

Chapter 5. Finite element function spaces

44

q=1

q=2

q=3

n=2

n=3

n=4

d=1

d=2

n=3

n=6

n=10

d=3

n=4

n=10

n=20

Figure 5.8: The Lagrange (CGq ) elementes. q is the order of the elements, d is the dimention and n is the number of degrees of freedom.

Figure 5.9: Illustration of the degrees of freedom for the first, second and third degree Raviart–Thomas elements on triangles and tetrahedra. The degrees of freedom are moments of the normal component against Pq−1 on facets (edges and faces, respectively) and, for the higher degree elements, interior moments against [Pq−2 ]d .

Chapter 5. Finite element function spaces

45

Figure 5.10: Illustration of the first, second and third degree Brezzi– Douglas–Marini elements on triangles and tetrahedra. The degrees of freedom are moments of the normal component against Pq on facets (edges and faces, respectively) and, for the higher degree elements, interior moments against NED1q−1 .

Figure 5.9 shows some Raviart–Thomas elements. Next element is the Brezzi–Douglas–Marini BDMq elements. These are also H (div)-conforming elements. Definition 5.5 (Brezzi–Douglas–Marini element). The Brezzi–Douglas–Marini element (BDMq ) is defined for q = 1, 2, . . . by T ∈ {triangle, tetrahedron}, d

V = [Pq ( T )] , ( R v · np ds, Rf L= T v · p dx,

(5.13) (5.14)

for a set of basis functions p ∈ Pq ( f ) for each facet f, for a set of basis functions p ∈ NED1q−1 ( T ) for q > 2.

(5.15)

where NED1 refers to the Nédélec H (curl) elements of the first kind. The dimension of BDMq is n(q) =

(q + 1)(q + 2), T triangle, 1 ( q + 1 )( q + 2 )( q + 3 ) , T tetrahedron. 2

(5.16)

Figure 5.10 shows the Brezzi–Douglas–Marini elements. The last elements we will look at, are the Nédélec NED2q elements of second kind. These are H (curl)-conforming elements. Definition 5.6 (Nédélec element of the second kind). The Nédélec element of the second kind (NED2q ) is defined for q = 1, 2, . . . in two dimensions by T = triangle,

(5.17)

V = [Pq ( T )]2 , R Re v · t p ds, L= T v · p dx,

(5.18) for a set of basis functions p ∈ Pq (e) for each edge e, for a set of basis functions p ∈ RTq−1 ( T ), for q > 2.

(5.19)

Chapter 5. Finite element function spaces

46

Figure 5.11: Illustration of first and second degree Nédélec H (curl) elements of the second kind on triangles and first degree on tetrahedron. Note that these elements may be viewed as rotated Brezzi–Douglas–Marini elements.

where t is the edge tangent, and in three dimensions by T = tetrahedron,

(5.20)

3

V = [Pq ( T )] ,  R  Re v · t p dl, v · p ds, L=  Rf T v · p dx,

(5.21) for a set of basis functions p ∈ Pq (e) for each edge e, for a set of basis functions p ∈ RTq−1 ( f ) for each face f , for q > 2 for a set of basis functions p ∈ RTq−2 ( T ), for q > 3.

(5.22)

The dimension of NED2 q is n(q) =

(q + 1)(q + 2), T triangle, 1 2 ( q + 1)( q + 2)( q + 3), T tetrahedron.

Figure 5.11 shows the Nédélec element for second kind.

(5.23)

6

Discretization of a convection-diffusion problem

By Anders Logg, Kent–Andre Mardal

6.1

Introduction

This chapter concerns convection-diffusion equations of the form:

−µ∆u + v · ∇u = u =

f

in Ω

g

on ∂Ω

Here v is typically a velocity, µ is the diffusivity, and u is the unknown variable of interest. We assume the Dirichlet condition u = g on the boundary, while f is a source term. The problem is a singular perturbation problem. That is, the problem is well-posed for µ > 0 but becomes over–determined as µ tends to zero. For µ = 0 the Dirichlet conditions should only be set on the inflow domain Γ; that is, where n · v < 0 for the outward unit normal n. For many practical situations µ > 0, but small in the sense that µ |v|. For such problems, the solution will often be similar to the solution of the reduced problem with µ = 0 except close to the non-inflow boundary ∂Ω\Γ. Here, there will typically be a boundary layer exp (kvk∞ x/µ). Furthermore, discretizations often shows unphysical oscillations starting at this boundary layer. The next example shows a 1D convection diffusion problem resulting in non-physical oscillations due to the use of a standard Galerkin approximation. Example 6.1. Standard Galerkin approximation Consider the following 1D problem convection diffusion problem:

−u x − µu xx = 0, u(0) = 0, u(1) = 1.

(6.1) (6.2)

The analytical solution is: u( x ) =

e− x/µ − 1 . e−1/µ − 1

Hence, for µ → 0 , both e− x/µ and e−1/µ will be small and u( x ) ≈ 1 unless x ≈ 0. However, close to the outflow boundary at x = 0, there will be a boundary layer where u has exponential growth. We solve the problem with a standard Galerkin method using linear first order Lagrange elements. To be specific, the variational problem is: Find u ∈ H(10,1) such that Z 1 0

−u x v + µu x v x dx = 0, 47

∀ v ∈ H(10,0) .

Chapter 6. Discretization of a convection-diffusion problem

48

Figure 6.1: Solution of the convection diffusion problem obtained with 10 and 100 elements. The left figure obtained on a mesh with 10 elements shows wild oscillations, while the mesh with 100 elements demonstrate a nicely converged solution.

Here, H(10,1) contains functions u ∈ H 1 with u = 0 at x = 0 and u = 1 and x = 1, while H(10,0) contains functions that are zero both at x = 0 and x = 1. We consider a µ = 0.01, a relatively large µ, to enable us to see the differences on a relatively coarse mesh. Both the numerical and analytical solutions are shown in Figure 6.1. Clearly, the numerical solution is polluted by non-physical oscillations on the coarse mesh with 10 elements, while a good approximation is obtained for 100 elements. Finally, we show the complete code for this example: Python code 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

from dolfin import * for N in [10, 100]: mesh = UnitInterval(N) V = FunctionSpace(mesh, "CG", 1) u = TrialFunction(V) v = TestFunction(V) mu_value = 1.0e-2 mu = Constant(mu_value) f = Constant(0) h = mesh.hmin() a = (-u.dx(0)*v + mu*u.dx(0)*v.dx(0))*dx L = f*v*dx

Chapter 6. Discretization of a convection-diffusion problem 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33

49

u_analytical = Expression("(exp(-x[0]/e) - 1)/ (exp(-1/%e) - 1)" % (mu_value, mu_value)) def boundary(x): return x[0] < DOLFIN_EPS or x[0] > 1.0 - DOLFIN_EPS bc = DirichletBC(V, u_analytical, boundary) U = Function(V) solve(a == L, U, bc) U_analytical = project(u_analytical, V) import pylab pylab.plot(U.vector().array()) pylab.plot(U_analytical.vector().array()) pylab.legend(["Numerical Solution", "Analytical Solution"]) pylab.show()

To understand Example 6.1 we first remark that the discretization corresponds to the following central finite difference scheme:

µ v [ u − u i −1 ] [ui+1 − 2ui + ui−1 ] − 2 2h i+1 h u0 = 0, u N = 1

= 0,

i = 1, . . . , N − 1

Clearly, if µ = 0 then the scheme reduces to v [ u − u i −1 ] 2h i+1 u0 = 0, u N = 1

= 0,

i = 1, . . . , N − 1

Here, it is clear that ui+1 is coupled to ui−1 , but not to ui . Hence, this scheme allow for an alternating sequence of ui+1 = ui−1 = . . ., while ui = ui−2 = . . . resulting in oscillations. One cure for these oscillations is upwinding. That is, instead of using a central difference scheme, we employ the following difference scheme: du 1 ( x ) = [ u i +1 − u i ] dx i h du 1 ( x i ) = [ u i − u i −1 ] dx h

if v < 0, if v > 0.

Using this scheme, oscillations will disappear. The approximation will however only be first order. There is a relationship between upwinding and artificial diffusion. If we discretize u x with a central difference and add diffusion as e = h/2∆ we get u i +1 − u i −1 2h h −ui+1 + 2ui − ui−1 + 2 h2 u − u i −1 = i h

central scheme, first order derivative central scheme, second order derivate upwind scheme

Hence, upwinding is equivalent to adding artificial diffusion with e = h/2; that is, in both cases we

Chapter 6. Discretization of a convection-diffusion problem

50 actually solve the problem

−(µ + e)u xx + vu x = f . using a central difference scheme. Finite difference upwinding is difficult to express using finite elements methods, but it is closely to adding some kind of diffusion to the scheme. The next example shows the solution of the problem in Example 6.1 with artificial diffusion added. Example 6.2. Stabilization using artificial diffusion Consider again the following 1D problem convection diffusion problem:

−u x − µu xx = 0, u(0) = 0, u(1) = 1.

(6.3) (6.4)

We solve the problem with a standard Galerkin method using linear first order Lagrange elements as before, but we add artificial diffusion. To be specific, the variational problem is: Find u ∈ H(10,1) such that Z 1 0

−u x v + (µ + βh)u x v x = 0,

∀ v ∈ H(10,0) ,

where β = 0.5 corresponds to the finite difference scheme with artificial diffusion mentioned above. Below is the code for the changed variational form: Python code 1 2 3 4 5

beta_value = 0.5 beta = Constant(beta_value) f = Constant(0) h = mesh.hmin() a = (-u.dx(0)*v + mu*u.dx(0)*v.dx(0) + beta*h*u.dx(0)*v.dx(0))*dx

Figure 6.2 shows the solution for 10 and 100 elements when using artificial diffusion stabilization. Clearly, the solution for the coarse grid has improved dramatically since the oscillations have vanished and the solution appear smooth. It is, however, interesting to note that the solution for the fine mesh is actually less accurate than the solution in Fig 6.2 for the corresponding fine mesh. The reason is that the scheme is now first order, while the scheme in Example 6.1 is second order.

6.2

Streamline diffusion/Petrov-Galerkin methods

In the previous section we saw that artificial diffusion may be added to convection diffusion dominated problems to avoid oscillations. The diffusion was, however, added in a rather ad-hoc manner. Here, we will see how diffusion may be added in a consistent way; that is, without changing the solution as h → 0. This leads us to streamline diffusion using the Petrov-Galerkin method. Our problem reads: Find u such that

−µ∆u + v · ∇u = u =

f

in Ω,

g

on ∂Ω.

The weak formulation reads: Find u ∈ Hg1 such that a(u, w) = b(w)

∀ w ∈ H01 ,

Chapter 6. Discretization of a convection-diffusion problem

51

Figure 6.2: Solution of the convection diffusion problem obtained with 10 and 100 elements using artificial diffusion to stabilize.

where a(u, w)

=

b(w)

=

Z ZΩ Ω

µ∇u · ∇w dx +

Z Ω

v · ∇uw dx,

f w dx.

Here, Hg1 is the subspace of H 1 where the trace equals g on the boundary ∂Ω. The standard Galerkin discretization is: Find uh ∈ Vh,g such that a(uh , vh ) = ( f , vh ) ∀ vh ∈ Vh,0 .

(6.5)

Here, Vh,g and Vh,0 are the subspaces with traces that equals g and 0 on the boundary, respectively. Adding artificial diffusion to the standard Galerkin discretization, as was done in Example 6.2, can be done as: Find uh ∈ Vh,g such that h a(uh , vh ) + (∇uh , ∇vh ) = ( f , vh ) ∀ vh ∈ Vh,0 . 2 Let τ (u, vh ) = a(uh , vh ) − ( f , vh ).

Chapter 6. Discretization of a convection-diffusion problem

52

Then the truncation error is first order in h; that is, τ (u, vh ) ∼ O(h). v∈Vh ,v6=0 k v kV

τ (u) =

sup

Hence, the scheme is consistent in the sense that lim τ (u) → 0.

h →0

However, it is not strongly consistent in the sense that τ (u) = 0 for every discretization, which is what is obtained with the Galerkin method due to Galerkin-orthogonality: τ (u, vh ) = a(uh , vh ) − ( f , vh ) = a(uh − h, vh ) = 0

∀ vh ∈ Vh .

The Streamline diffusion/Petrov-Galerkin method introduces a strongly consistent diffusion by employing alternative test functions. Let us therefore assume that we have a space of test functions Wh . Abstractly, the Petrov-Galerkin method appears very similar to the Galerkin method, that is: Find uh ∈ Vh,g such that a(uh , vh ) = ( f , vh ) ∀ vh ∈ Wh,0 . Again, Vh,g and Wh,0 are the subspaces with traces that equals g and 0 on the boundary, respectively. Notice that the only difference from the standard Galerkin formulation is that test and trial functions differ. On matrix form, the standard Galerkin formulation reads: Aij = a( Ni , Nj ) =

Z Ω

Z

µ∇ Ni · ∇ Nj dx +

v · ∇ Ni Nj dx,

(6.6)

while for the Petrov Galerkin method, we use the test functions L j : Aij = a( Ni , L j ) =

Z Ω

µ∇ Ni · ∇ L j dx +

Z Ω

v · ∇ Ni L j dx

A clever choice of L j will enable us to add diffusion in a consistent way. To make sure that the matrix is still quadratic, we should however make sure that the dimension of Vh and Wh are equal. Let L j be defined as L j = Nj + βh v · ∇ Nj . Writing out the matrix Aij in (6.6) now gives Aij

= a( Ni , Nj + βh v · ∇ Nj ) = =

Z ZΩ

|Ω

µ∇ Ni · ∇( Nj + βh v · ∇ Nj ) dx + µ∇ Ni · ∇ Nj dx + {z

+ βh |

Z Ω

Z Ω

v · ∇ Ni · ( Nj + βh v · ∇ Nj ) dx

v · ∇ Ni Nj dx }

standard Galerkin

Z

µ∇ Ni · ∇(v · ∇ Nj ) dx + βh Ω {z } |

=0 third order term, for linear elements

Z Ω

(v · ∇ Ni )(v · ∇ Nj ) dx {z }

Artificial diffusion in v direction

Notice that also the righthand side changes b( L j ) =

Z Ω

f L j dx =

Z Ω

f ( Nj + βh v · ∇ Nj ) dx

Chapter 6. Discretization of a convection-diffusion problem

53

Thus, both the matrix and the righthand side are changed such that artificial diffusion is added in a consistent way. We summarize this derivation by stating the SUPG problem. Find uh,sd ∈ Hg1 such that asd (u, w) = bsd (w)

∀ w ∈ H01 ,

(6.7)

where asd (u, w)

= +

bsd (w)

6.3

=

Z Ω

µ∇u · ∇w dx + Z

βh Z Ω

Z Ω

v · ∇uw dx

(v · ∇u)(v · ∇w) dx + βh µ ∑ e

f w dx + βh

Z Ω

Z Ωe

−∆u(v · ∇w) dx,

f v · ∇w dx.

Well posedness of the continuous problem

Before we discuss error estimates of the discrete problem, we briefly describe the properties of the continuous problem. Theorem 6.1. Lax-Milgram theorem Let V be a Hilbert space, a(·, ·) be a bilinear form, L(·) a linear form, and let the following three conditions be satisfied: 1. a(u, u) ≥ αkuk2V ,

∀ u ∈ V,

2. a(u, v) ≤ C kukV kvkV , 3. L(v) ≤ D kvkV ,

∀ u, v ∈ V,

∀v ∈ V .

Then the problem: Find u ∈ V such that a(u, v) = L(v)

∀ v ∈ V.

is well-posed in the sense that there exists a unique solution with the following stability condition

k u kV ≤

C k L kV ∗ . α

Condition (1) is often refereed to as coersivity or positivity, while (2) is called continuity or boundedness. Condition 3 simply states that the right-hand side should be in the dual space of V. In the following we will use Lax-Milgram’s theorem to show that the convection-diffusion problem is well-posed. The Lax-Milgram’s theorem is well-suited since it does not require symmetry of the bilinear form. We will only consider the homogeneous Dirichlet conditions in the current argument1 . From Poincare’s lemma we know that kuk0 ≤ CΩ |u|1 . Using Poincare, it is straightforward to show that the semi-norm Z

|u|1 = ( (∇u)2 dx )1/2 1 Has the argument for reducing non-homogeneous Dirichlet conditions to homogeneous Dirichlet conditions been demonstrated elsewhere?

Chapter 6. Discretization of a convection-diffusion problem

54 and the standard H 1 norm

Z

kuk = ( (∇u)2 + u2 dx )1/2 are equivalent. Hence, on H01 the | · |1 is a norm equivalent the H 1 -norm. Furthermore, this norm will be easier to use for our purposes. For the convection-diffusion problem, we will consider two cases 1) incompressible flow, where ∇ · v = 0 and 2) compressible flow, where ∇ · v 6= 0. Let us for the begin with the incompressible case. Further, let b(u, w)

=

cv (u, w)

=

a(u, w)

Z ZΩ Ω

µ∇u∇w dx v · ∇u w dx

= a(u, w) + b(u, w)

Furthermore, assuming for the moment that u ∈ Hg1 , w ∈ H01 , we have cv (u, w)

=

Z Ω

= −

v · ∇u w dx

Z Ω

v · ∇w u dx −

Z

∇ · v u w dx + |Ω {z }

=0 (incompressibility)

Z

uwv·n | Γ {z }

=0 (Dirichlet conditions)

= −cv (w, u). and therefore cv (·, ·) is skew-symmetric. Letting w = u we obtain that cv (u, u) = −cv (u, u), which means that cv (u, u) = 0. Therefore, the first condition in Lax-Milgram’s theorem (1) is satisfied: a(u, u) = b(u, u) ≥ µ|u|21 . The second condition, the boundedness of a (2), follows by applying Cauchy-Schwartz inequality if we assume bounded flow velocities kvk∞ . a(u, v)

=

Z Ω

µ∇u∇w dx +

Z Ω

v∇uw dx

≤ µ | u |1 | w |1 + k v k ∞ | u |1 k w k0 ≤ (µ + kvk∞ CΩ )|u|1 |v|1 . The third condition simply means that the right-hand side needs to be in the dual space of Hg1 . Hence, we obtain the following bounds by Lax-Milgram’s theorem:

| u |1 ≤

µ + CΩ kvk∞ k f k −1 . µ

Notice that for convection-dominated problems CΩ kvk∞ µ and the stability constant will therefore be large. In the case where ∇ · v 6= 0, we generally obtain that cv (u, u) 6= 0. To ensure that a(u, u) is still positive, we must then put some restrictions on the flow velocities. That is, we need

|cv (u, u)| ≤ a(u, u).

Chapter 6. Discretization of a convection-diffusion problem

55

If CΩ kvk∞ ≤ Dµ with D < 1 we obtain a(u, u)

=

Z Ω

µ∇u∇u dx +

Z Ω

v∇uu dx

≥ µ|||u||||||v||| − kvk∞ |||u|||kuk0 ≥ (µ − kvk∞ CΩ )|||u||||||u||| ≥ (µ(1 − D ))|||u|||2 . Further, the second condition of Lax-Milgram’s theorem still applies. However, that CΩ kvk∞ ≤ Dµ is clearly very restrictive compared to the incompressible case. We remark that the Lax-Milgram conditions in the presence of the SUPG clearly will not be satisified in the continuous case because of the third order term −∆u(v · ∇w). With this term, the second condition of Lax-Milgram is not satisified with C ≤ ∞. Finally, R in2 order to make the term cv (u, u) skew-symmetric, it was required that the boundary integral Γ u w · n was zero. This was a consequence of the Dirichlet conditions. In general, this R is neither needed nor possible at Neumann boundaries. As long as Γ u2 w · n ≥ 0, the above argumentation is valid. From a physical point of view this means that there is outflow at the Neumann boundary, i.e., that w · n ≥ 0.

6.4

Error estimates

Finally, we provide some error estimates for the Galerkin method and the SUPG method applied to the convection-diffusion equation. Central in the derivation of both results are the following interpolation result. Theorem 6.2. Approximation by interpolation There exists an interpolation operator Ih : H t+1 → Vh where Vh is a piecewise polynomial field of order t with the property that for any u ∈ H t (Ω)

ku − Ih ukm ≤ Bht+1−m kukt+1 . Proof. The bounds on the interpolation error is provided by the Bramble-Hilbert lemma for t ≥ 1 and Clement’s result (the case t = 1), cf. e.g. Braess [2007], Brenner and Scott [2008]. For the Galerkin method the general and elegant result of Cea’s lemma provide us with error estimates. Theorem 6.3. Cea’s lemma Suppose the conditions for Lax-Milgram’s theorem is satisfied and that we solve the linear problem (6.5) on a finite element space of order t. Then,

ku − uh kV = ku − uh k1 6 C1 Here C1 = theorem.

CB α ,

CB t h k u k t +1 . α

where B comes from the approximation property and α and C are the constants of Lax-Milgram’s

Proof. The proof is straightforward and follows from the Galerkin orthogonality: a(u − uh , v) = 0,

∀ v ∈ Vh

Chapter 6. Discretization of a convection-diffusion problem

56 Since Vh ⊂ V: α k u − u h kV

≤ a(u − uh , u − uh ) = a(u − uh , u − v) − a(u − uh , v − uh ) ≤ C k u − u h kV k u − v kV .

Since v − uh ∈ Vh . Furthermore, v is arbitrary and we may therefore choose v = Ih u and obtain:

|||u − uh ||| 6

C |||u − Ih u||| ≤ Cht−1 kukt , α

where t − 1 is the order of the polynomials of the finite elements. We remark, as mentioned above, that Cα is large for convection dominated problems and that this is what causes the poor approximation on the coarse grid, shown in Example 6.1. To obtain improved error estimates for the SUPG method, we introduce an alternative norm: 1/2 kuksd = hkv · ∇uk2 + µ|∇u|2

(6.8)

Theorem 6.4. Suppose the conditions for Lax-Milgram’s theorem is satisfied in the Hilbert space defined by the SUPG norm (6.8) and that we solve the SUPG problem (6.7) on a finite element space of order 1. Then,

ku − uh ksd 6 Ch3/2 kuk2 Proof. The proof can be found in e.g. Elman et al. [2014], Quarteroni and Valli [2008].

6.5

Exercises

Exercise 6.1. Show that the matrix obtained from a central difference scheme applied to the operator Lu = u x is skew-symmetric. Furthermore, show that the matrix obtained by linear continuous Lagrange elements are also skew-symmetric. Remark: The matrix is only skew-symmetric in the interior of the domain, not at the boundary. Exercise 6.2. Estimate numerically the constant in Cea’s lemma for various α and h for the Example 6.1. Exercise 6.3. Implement the problem u = sin(πx ), and f = −αu xx − u x and estimate numerically the constant in Cea’s lemma for various α. Compare with the corresponding constant estimated from Example 6.1. Exercise 6.4. Implement the problem u = sin(πx ), and f = −αu xx − u x using SUPG and estimate the constants in the error estimate obtained by both the ||| · ||| and the k · kv norms. Compare with the corresponding constant estimated from Example 6.1. Exercise 6.5. Investigate whether the coersivity condition holds when a homogeneous Neumann condition is assumed on the outflow. You may assume that v · n > 0. Exercise 6.6. Consider the eigenvalues of the operators, L1 , L2 , and L3 , where L1 u = u x , L2 u = −αu xx , α = 1.0e−5 , and L3 = L1 + L2 , with homogeneous Dirchlet conditions. For which of the operators are the eigenvalues positive and real? Repeat the exercise with L1 = xu x . Exercise 6.7. Compute the Soblev norms k · km of the function sin(kπx ) on the unit interval. Assume that the Soblev norm is kukm = (−∆m u, u)1/2 . What happens with negative m? You may use either Fourier transformation or compute (eigenvalues of) powers of the stiffness matrix. Exercise 6.8. Perform numerical experiments to determine the order of approximation with respect to various Soblev norms and polynomial orders for the function sin(kπx ) on the unit interval.

7

Stokes problem

By Anders Logg, Kent–Andre Mardal

7.1

Introduction

The Stokes problem describes the flow of a slowly moving viscous incompressible Newtonian fluid. Let the fluid domain be denoted Ω. We assume that Ω is a bounded domain in Rn with a smooth boundary. Furthermore, let u : Ω → Rn be the fluid velocity and p : Ω → R be the fluid pressure. The strong form of the Stokes problem can then be written as

−∆u + ∇ p ∇·u u ∂u − pn ∂n

= f , in Ω, = 0, in Ω, = g, on ∂Ω D ,

(7.1)

= h, on ∂Ω N .

(7.4)

(7.2) (7.3)

Here, f is the body force, ∂Ω D is the Dirichlet boundary, while ∂Ω N is the Neumann boundary. Furthermore, g is the prescribed fluid velocity on the Dirichlet boundary, and h is the surface force or stress on the Neumann boundary. These boundary condition leads to a well-posed problem provided that neither the Dirichlet nor Neumann boundaries are empty. In case of only Dirichlet conditions the pressure is only determined up to a constant, while only Neumann conditions leads to the velocity only being determined up to a constant. These equations are simplifications of the Navier–Stokes equations for very slowly moving flow. In contrast to elliptic equations, many discretizations of this problem will lead to instabilities. These instabilities are particularly visible as non-physical oscillations in the pressure. The following example illustrate such oscillations. Example 7.1. Poiseuille flow One of the most common examples of flow problems that can be solved analytically is Poiseuille flow. It describes flow in a straight channel (or cylinder in 3D). The analytical solution is u = (y (1 − y), 0) and p = 1 − x. Since the solution is know, this flow problem is particularly useful for verifying that the code or numerical method. We therefore begin by discretizing the problem in the simplest way possible; that is, linear continuous/Lagrange elements for both velocity and pressure. The results is shown Figure 7.1. Clearly, the velocity is approximated satisfactory, but the pressure oscillate widely and is nowhere near the actual solution. Python code 1 2 3

from dolfin import * def u_boundary(x):

57

58

Chapter 7. Stokes problem

Figure 7.1: Poiseuille flow solution obtained with linear continuous elements for both velocity and pressure. The left figure shows the (wellrepresented) velocity while the right shows the pressure (with the wild oscillations).

Chapter 7. Stokes problem 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44

59

return x[0] < DOLFIN_EPS or x[1] > 1.0 - DOLFIN_EPS or x[1] < DOLFIN_EPS def p_boundary(x): return x[0] > 1.0 - DOLFIN_EPS mesh = UnitSquare(40,40) V = VectorFunctionSpace(mesh, "Lagrange", 1) Q = FunctionSpace(mesh, "Lagrange", 1) #Q = FunctionSpace(mesh, "DG", 0) W = MixedFunctionSpace([V, Q]) u, p = TrialFunctions(W) v, q = TestFunctions(W) f = Constant([0,0]) u_analytical = Expression(["x[1]*(1-x[1])", "0.0"]) p_analytical = Expression("-2+2*x[0]") bc_u = DirichletBC(W.sub(0), u_analytical, u_boundary) bc = [bc_u] a = inner(grad(u), grad(v))*dx + div(u)*q*dx + div(v)*p*dx L = inner(f, v)*dx UP = Function(W) A, b = assemble_system(a, L, bc) solve(A, UP.vector(), b, "lu") U, P = UP.split() plot(U, title="Numerical velocity") plot(P, title="Numerical pressure") U_analytical = project(u_analytical, V) P_analytical = project(p_analytical, Q) plot(U_analytical, title="Analytical velocity") plot(P_analytical, title="Analytical pressure") interactive()

However, when using the second order continuous elements for the velocity and first order continuous elements for the pressure, we obtain the perfect solution shown in Figure 7.2. The previous example demonstrates that discretizations of the Stokes problem may lead to, in particular, strange instabilities in the pressure. In this chapter we will describe why this happens and several strategies to circumvent this behaviour.

7.2

Finite Element formulation

1 Let us first start with a weak formulation of Stokes problem: Find u ∈ HD,g and p ∈ L2 .

a(u, v) + b( p, v) b(q, u)

=

f ( v ),

= 0,

1 v ∈ HD,0

q ∈ L2 ,

60

Chapter 7. Stokes problem

Figure 7.2: Poiseuille flow solution obtained with quadratic continuous elements for the velocity and linear continuous elements for the pressure. The left figure shows the velocity while the right shows the pressure. Both the velocity and the pressure are correct.

Chapter 7. Stokes problem

61

where a(u, v)

=

b( p, v)

=

f (v)

=

Z Z Z

∇u : ∇v dx, p ∇ · v dx, f v dx +

Z ΩN

h v ds.

1 Here HD,g contains functions in H 1 with trace g on ∂Ω D . To obtain symmetry we have substituted pˆ = − p for the pressure and is referint to pˆ as p.

As before the standard finite element formulation follows directly from the weak formulation: Find uh ∈ Vg,h and ph ∈ Qh such that

= f (vh ), ∀ vh ∈ V0,h , b(qh , uh ) = 0, ∀ qh ∈ Qh .

a(uh , vh ) + b( ph , vh )

(7.5) (7.6)

Letting uh = ∑in=1 ui Ni , ph = ∑im=1 pi Li , vh = Nj , and qh = L j we obtain a linear system on the form

A B

BT 0

u p

=

f 0

(7.7)

Here Aij = a( Ni , Nj ) Bij = b( Li , Nj )

= =

Z Z

∇ Ni ∇ Nj dx,

(7.8)

∇ Li Nj dx.

(7.9)

Hence, A is n × n, while B is m × n, where n is the number of degrees of freedom for the velocity field, while m is the number of degrees of freedom for the pressure. 1

Is the system (7.7) invertible? For the moment, we assume that the submatrix A is invertible. This is typically the case for Stokes problem. We may then perform blockwise Gauss elimination: That is, we multiply the first equation with A−1 to obtain u = A −1 f − A −1 B T p Then, we then insert u in the second equation to get 0 = Bu = BA−1 f − BA−1 B T p i.e we have removed v and obtained an equation only involving p: BA−1 B T p = BA−1 f This equation is often called the pressure Schur complement. The question is then reduced to whether BA−1 B T is invertible. Consider the follwing two situations:

1 KAM:

Check transposed!

Chapter 7. Stokes problem

62 n

n BT

B

BT

A

m m

A

B

v.s

Clearly, the right most figure is not invertible since n m and the 0 in the lower right corner dominates. For the left figure on might expect that the matrix is non-singular since n m, but it will depend on A and B. We have already assumed that A is invertible, and we therefore ignore A−1 in BA−1 B T . The question is then whether BB T is invertible. m×m

=

B BT

As illustrated above, BB T will be a relatively small matrix compared to B T and A as long as n m. Therefore, BB T may therefore be non-singular. To ensure that BB T is invertible, it is necessary that kernel( B) = 0, where Bis m × n An equvialent statement is that max (v, B T p) > 0 v

∀ p.

(7.10)

Alternatively, max v

(v, B T p) ≥ βk pk ∀ p. kvk

(7.11)

Here, β > 0. We remark that (7.10) and (7.11) are equivalent for a finite dimensional matrix. However, in the infinite dimentional setting of PDEs (7.10) and (7.11) are different. Inequality (7.10) allow (v, B T p) to approach zero, while (7.11) requires a lower bound. For the Stokes problem, the corresponding condition is crucial: sup 1 v∈ HD,g

( p, ∇ · u) > βk pk0 > 0, k u k1

∀ p ∈ L2

(7.12)

Similarly, to obtain order optimal convergence rates, that is

ku − uh k1 + k p − ph k0 6 Chk kukk+1 + Dh`+1 k pk`+1 where k and ` are the ploynomial degree of the velocity and the pressure, respectively, the celebrated Babuska-Brezzi condition has to be satisfied: sup v∈Vh,g

( p, ∇ · v) > βk pk0 > 0, k v k1

∀ p ∈ Qh

(7.13)

Chapter 7. Stokes problem

63

We remark that the discrete condition (7.13) does not follow from (7.12). In fact, it is has been a major challenge in numerical analysis to determine which finite element pairs Vh and Qh that meet this condition. Remark 7.2.1. For saddle point problems on the form (7.5)-(7.6) four conditions have to be satisfied in order to have a well-posed problem: Boundedness of a: a(uh , vh ) ≤ C1 kuh kVh kvh kVh , ∀ uh , vh ∈ Vh , (7.14) and boundedness of b: b(uh , qh ) ≤ C2 kuh kVh kqh kQh ,

∀ uh ∈ Vh , qh ∈ Qh ,

(7.15)

Coersivity of a: a(uh , uh ) ≥ C3 kuh k2Vh ,

∀ u h ∈ Zh ,

(7.16)

where Zh = {uh ∈ Vh | b(uh , qh ) = 0, ∀qh ∈ Qh } and "coersivity" of b: sup uh ∈Vh

b(uh , qh ) ≥ C4 kqh kQh , kuh kVh

∀ qh ∈ Qh .

(7.17)

For the Stokes problem, (7.14)-(7.16) are easily verified, while (7.17) often is remarkably difficult unless the elements are designed to meet this condition. We remark also that condition (7.16) strictly speaking only needs to be valid on a subspace of Vh but this is not important for the Stokes problem.

7.3

Examples of elements

7.3.1

The Taylor-Hoood element

The Taylor-Hood elements are quadratic for the velocity and linear for pressure v : Ni p : Li

= aiv + biv x + civ y + div xy + eiv x2 + f iv y2 p p p = a i + bi x + c i y

and are continuous across elements

Figure 7.3: Taylor-Hood: quadratic element for v and linear element for p

For the Taylor-Hood element we have the following error estimate:

ku − uh k1 + k p − ph k0 6 Ch2 (kuk3 + k pk2 )

Chapter 7. Stokes problem

64

The generalization of the Taylor–Hood element to higher order, that is Pk − Pk−1 , satisfies the Brezzi conditions. 7.3.2

The Crouzeix–Raviart element

This element is linear in velocity and constant in pressure v : Ni p : Li

= aiv + biv x + civ y p = ai

The v element is continuous only in the mid-point of each side (see ??), and the p element is discontinuous. This ensures that it satisfies the Babuska-Brezzi condition.

Figure 7.4: Crouzeix–Raviart: mid-point linear element for v and constant element for p

For the Crouzeix–Raviart element we have the following error estimate:

ku − uh k1 + k p − ph k0 6 Ch(kuk2 + k pk1 ) 7.3.3

The P1-P0 element

If we on the other hand choose to locate the nodal points on the corners in the v element as shown in ?? (called a P1 − P0 element) the inf-sup condition is not satisfied and we get oscillations in the pressure term.

Figure 7.5: P1 − P0 : linear element for v and constant element for p

Chapter 7. Stokes problem 7.3.4

65

The P2-P0 element

P2 − P0 element is a popular element that satisfies the Brezzi conditions. And advantage with this approach is that mass conservation is maintained elementwise. In fact, Pk − Pk−2 , satisfies the Brezzi conditionsfor k ≥ 2. Here, the pressure element Pk−2 may in fact consist of both continuous and discontinuous polynomials. The discontinuous polynomials then has the advantage of providing mass conservation, albeit at the expence of many degrees of freedom compared with the continuous variant. 7.3.5

The Mini element

The mini element is linear in both velocity and pressure, but the velocity element contains a cubic bubble. Notice that elements that are linear in both v and p will not satisfy the inf-sup condition. Thus we add the extra bubble in v to give an extra degree of freedom as depicted in ??.

Figure 7.6: Mini: linear element with bubble for v and linear element for p

For the Mini element we have the following error estimate:

ku − uh k1 + k p − ph k0 6 C0 hkuk2 + C1 h2 k pk2

7.4 2

Stabilization techniques to circumvent the Babuska-Brezzi condition

Stabilization techniques typically replace the system: Au + B T p

= f Bu = 0

with an alternative system Au + B T p

= f Bu − eDp = ed,

where e is properly chosen. To see that we obtain a nonsingular system we again multiply the first equation with A−1 and 2 KAM:

Check sign!

Chapter 7. Stokes problem

66 then factorize:

= Bu = −1 T ( BA B + eD ) p = u

A −1 f − A −1 B T p BA−1 f − BA−1 B T p = ed + eDp BA−1 f − ed

If D is nonsingular then ( BA−1 B T + eD ) will be is nonsingular since both D and BA−1 B T are positive (only D is positive definite however). Factorizing for p we end up with a Velocity-Schur complement. Solving for p in the second equation and inserting the expression for p into the first equation we have

= (−eD )−1 (ed − Bu) ⇓ Au + B T (−eD )−1 (ed − Bu) = f 1 ( A + B T D −1 B ) u = f + D −1 d e p

( A + 1e B T D −1 B) is nonsingular since A is nonsingular and B T D −1 B is positive. At least, three techniques have been proposed for stabilization. These are: 1. ∇ · v = e∆p. Pressure stabilization. Motivated through mathematical intuition (from the convection-diffusion equation). 2. ∇ · v = −ep. Penalty method. Typically, one uses the Velocity-Schur complement ∂p

3. ∇· = −e ∂t . Artificial compressibility. A practical method as one adds the possibility for time stepping. In other words, these techniques sets D to be 1. D = A 2. D = M 3. D =

1 ∆t M

where A is the stiffness matrix (discrete laplace operator) and M is the mass matrix.

7.5

Exercises

Exercise 7.1. Show that the conditions (7.14)-(7.16) are satisfied for Vh = H01 and Qh = L2 . Exercise 7.2. Show that the conditions (7.14)-(7.16) are satisfied for Taylor–Hood and Mini discretizations. (Note that Crouzeix–Raviart is non-conforming so it is more difficult to prove these conditions for this case.) Exercise 7.3. Condition (7.17) is difficult to prove. However, if we assume that Vh = L2 and Qh = H01 , you should be able to prove it. (Hint: This is closely related to Poincare’s inequality.) Exercise 7.4. Test other finite elements for the Poiseuille flow problem. Consider P1 − P0 , P2 − P2 , P2 − P0 , as well as the Mini and Crouzeix–Raviart element. Exercise 7.5. Implement stabilization for the Poiseuille flow problem and use first order linear elements for both velocity and pressure.

Chapter 7. Stokes problem

67

Exercise 7.6. In the previous problem the solution was a second order polynomial in the velocity and first order in the pressure. We may therefore obtain the exact solution and it is therefore difficult to check order of convergence for higher order methods with this solution. In this exercise you should therefore implement the problem u = (sin(πy), cos(πx ), p = sin(2πx ), and f = −∆u − ∇ p. Test whether the approximation is of the expected order for P4 − P3 , P4 − P2 , P3 − P2 , and P3 − P1 . Exercise 7.7. Implement the Stokes problem with analytical solution u = (sin(πy), cos(πx ), p = sin(2πx ), and f = −∆u − ∇ p on the unit square. Consider the case where you have Dirichlet conditions on the sides ’x=0’, ’x=1’ and ’y=1’ while Neumann is used on the last side (this way we avoid the singular system associated with either pure Dirichlet or pure Neumann problems). Then determine the order of the approximation of wall shear stress on the side ’x=0’. The wall shear stress on the side ’x=0’ is ∇u · t where t = (0, 1) is the tangent along ’x=0’.

8

Efficient Solution Algorithms: Iterative methods and Preconditioning

By Anders Logg, Kent–Andre Mardal

To compute the solution of a partial differential equation, we often need to solve a system of linear of equations with a large number of uknowns. The accuracy of the solution increase with the number of unknowns used. Nowadays, unknowns in the order of millions to billions are routinely solved for without the use of (state-of-the-art) high-performance computing. Such computations are fasilitated by the enormous improvements in numerical algorithms and scientific software the last decades. It should be quite clear that naive Gaussian elimination can not be employed. For a naive Gaussian eliminations implementaton, the number of required floating point operations (FLOPS) scales as the cube of the number of uknowns. Hence, solving a problem with 106 unknowns would then require 1018 FLOPS which on a modern computer with e.g. 3 GHz still would take about 10 years. As we will see later, such problems may in common cases be solved in just a few seconds. There are two ingrediences in such efficient algorithms: iterative methods and preconditioning. Lets therefore consider the numerical solution of large linear systems, Au = b, where the linear system comes from discretization of PDEs. That is, A is a N × N matrix, and N is between 106 and 109 in typical simulations. Furthermore, the matrix is normally extremely sparse and contains only O( N ) nonzeros (see Exercise 8.1). It is important to notice that even though A is sparse A−1 will in general be full. This is a main reason to consider iterative methods.

8.1

The simplest iterative method: the Richardson iteration

The Richardson iteration1 is

un = un−1 − τ ( Aun−1 − b),

(8.1)

where τ is a relaxation parameter that must be determined. Clearly, the method is consistent in the sense that if un−1 = u, then un = u and the iterative method has converged to the exact solution. It is also clear that each iteration requires the evaluation of A on a vector, in addition to vector addition and scalar multiplication. Hence, one iteration requires the amount of O( N ) FLOPS and only O( N ) of memory. This is a dramatic improvement when compared Gaussian elimination at least if if the number of iterations are few. The key to obtain few iterations is preconditioning, but lets first consider 1 Richardson developed his method prior to computers. In his 1910 paper, where the focus is to predict stresses in a masonry dam, he describes how he uses humans as computational resources. He writes "So far I have paid piece rates for the operation [Laplacian] of about n/18 pence per coordinate point, n being the number of digits. As for the rate of working, one of the quickest boys average 2000 operations per week, for numbers of three digits, those done wrong being discounted."

69

Chapter 8. Iterative methods and Preconditioning

70 the Richardson’s method without.

The standard approach to analyze iterative methods is to look at what happens with the error. Let the error at the n’th iteration be en = un − u. As this is a linear system of equations, we may subtract u from both sides of (8.1) and obtain an equation for the iterative error: en = en−1 − τAen−1 . We may therefore quantify the error in terms of the L2 -norm as

ken k = ken−1 − τ Aen−1 k 6 k I − τ Akken−1 k. Clearly, if k I − τAk < 1 then the iteration will be convergent. Assuming for the moment that A is symmetric and positive definite, then the norm of A in general defined as k Ax k k Ak = max x kxk equals the largest eigenvalue of A, λmax . Furthermore, if we assume that the eigenvalues are ordered with respect to increasing value, such that λ0 and λ N are the smallest and largest eigenvalue, then the norm of I − τA, k( I − τA) x k k I − τAk = max x kxk is attained either for the smallest or largest eigenvalue as either (1 − τλ0 ) or −(1 − τλ N ). The optimal relaxation parameter τopt can be stated in terms of the eigenvalues, λi , of A. Minimum is attained when (1 − τopt λ0 ) = −(1 − τopt λ N ) which makes τopt = λ0 +2λ . N

Let the convergence factor ρ be defined as ρ = k I − τ Ak The convergence factor with an optimal relation is then ρ = k I − τAk = max |1 − τλi | = 1 − τλ0 = 1 − λi

Here, κ =

λN λ0

λ − λ0 κ−1 2λ0 = N = . λ0 + λ N λ N + λ0 κ+1

is the condition number.

We estimate the error reduction per iteation in terms of the convergence factor as,

ken k = k( I − τA)en−1 k ≤ ρken−1 k. which leads to

ken k 6 (

κ−1 n 0 ) k e k. κ+1

For iterative methods, we never iterate until the true solution exactly. Instead a convergence criteria needs to be choosen such that the error obtained by the iterative method is less than or at least comparable to the approximation error of the original system. Determining an appropriate convergence criteria is problem dependent and quite often challenging. Nevertheless, let us assume that we need to reduce the error by a factor of e, that is, we need

Chapter 8. Iterative methods and Preconditioning ken k k e0 k

71

< e. From the iteration, we have k e n k 6 ρ k e n −1 k 6 ρ n k e 0 k .

(8.2)

An estimate for the number of iterations is then obtained by assuming equality in the equation ken k (8.2) and ke0 k = e. Then the number of iterations needed to achieve the desired error is: n=

log e log e = . −1 log ρ log( K K +1 )

(8.3)

If n is independent of the resolution of the discretization, the computational cost of the algorithm is O( N ) in FLOPS and memory and the algorithm is order-optimal. The current analysis of the simplest iterative method there is, the Richardson iteration, shows that the efficiency of the method is determined by the condition number of the matrix. In the literature you will find a jungle of methods of which the following are the most famous: the Conjugate Gradient method, the Minimal Residual method, the BiCGStab method, and the GMRES method. It is remarkable that in general the convergence of these methods is determined by the condition number with one exception; the Conjugate Gradient method which often can be estimated in terms of the square root of the condition number. One main advantage is however that these methods do not require the determination of a τ to obtain convergence. Example 8.1. Eigenvalues of an elliptic problem in 1D and 2D. Let us consider an elliptic problem: u − ∆u ∂u ∂n

=

f,

= 0,

in Ω,

(8.4)

on ∂Ω.

(8.5)

Notice that the lower order term u in front of −∆u makes removes the singularity associated with Neumann conditions and that in the continuous case the smallest eigenvalue is 1 (associated with the eigenfunction that is a constant throughout Ω). The following code computes the eigenvalues using linear Lagrangian elements and Python code 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19

from dolfin import * from numpy import linalg for D in [1, 2]: for N in [4, 8, 16, 32]: if D == 1: mesh = UnitIntervalMesh(N) elif D == 2: mesh = UnitSquareMesh(N, N) V = FunctionSpace(mesh, "Lagrange", 1) u = TrialFunction(V) v = TestFunction(V) a = u*v*dx + inner(grad(u), grad(v))*dx A = assemble(a) e = linalg.eigvals(A.array()) e.sort() c = e[-1] / e[0] print "D=\%d, N=\%3d, min eigenvalue=\%5.3f, max eigenvalue=\%5.3f, cond. number=\%5.3f " \% (D, N, e[0], e[-1], c)

Chapter 8. Iterative methods and Preconditioning

72 yields the following output: Output 1 2 3 4 5 6 7 8

D=1, D=1, D=1, D=1, D=2, D=2, D=2, D=2,

N= N= N= N= N= N= N= N=

4, 8, 16, 32, 4, 8, 16, 32,

min min min min min min min min

eigenvalue=0.199, eigenvalue=0.111, eigenvalue=0.059, eigenvalue=0.030, eigenvalue=0.040, eigenvalue=0.012, eigenvalue=0.003, eigenvalue=0.001,

max max max max max max max max

eigenvalue=14.562, eigenvalue=31.078, eigenvalue=63.476, eigenvalue=127.721, eigenvalue=7.090, eigenvalue=7.735, eigenvalue=7.929, eigenvalue=7.982,

cond. cond. cond. cond. cond. cond. cond. cond.

number=73.041 number=279.992 number=1079.408 number=4215.105 number=178.444 number=627.873 number=2292.822 number=8693.355

The output shows that the condition number grows as h−2 in both 1D and 2D although the behaviour of the eigenvalues clearly are dimension dependent (see Exercise 8.2). The smallest eigenvalue decrease in both 1D and 2D as h → 0 but at different rates. To obtain eigenvalues corresponding the true eigenvalue we would need to solve a generalized eigenvalue problem as discussed in Chapter 3. Example 8.2. The Richardson iteration applied to a 1D Poisson equation. The Richardson iteration on the Poisson equation in 1D, discretized with finite difference method (FDM). Lu =

−u00 = f for x ∈ (0, 1) u (0) = u (1) = 0

(8.6)

Eigenvalues and eigenfunctions of Lu are λk = (kπ )2 and vk = sin(kπx ) for k ∈ N. When discretizing with FDM we get a Au = b system, where A is a tridiagonal matrix (A = tridiagonal(−1, 2, −1)) when the Dirichlet conditions have been eliminated. The discrete and continuous eigenvectors are the same, but the eigenvalues are a little bit different: λk = h42 sin2 ( kπh 2 ), where h is the step lenght ∆x. We find the smallest and largest discrete eigenvalues 4 λmin ( A) = π 2 , λmax ( A) = 2 . h Let τ =

2 λmax +λmin

then from the analysis above,

ken k 6 (

1−K n 0 ) k e k. 1+K

The below code perform the Richardson iteration for various resolution on the 1D Poisson problem and stops kr k when the convergence criteria krk k ≤ 10−6 is obtained. 0

Python code 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

from numpy import * def create_stiffness_matrix(N): h = 1.0/(N-1) A = zeros([N,N]) for i in range(N): A[i,i] = 2.0/(h**2) if i > 0: A[i,i-1] = -1.0/(h**2) if i < N-1: A[i,i+1] = -1.0/(h**2) A = matrix(A) return A Ns = [10, 20, 40, 80, 160, 320] for N in Ns:

Chapter 8. Iterative methods and Preconditioning 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37

A = create_stiffness_matrix(N) x = arange(0, 1, 1.0/(N)) f = matrix(sin(3.14*x)).transpose() u0 = matrix(random.random(N)).transpose() u_prev = u0

73

# creating matrix # right hand side # initial guess

eigenvalues = sort(linalg.eigvals(A)) # compute eigenvalues and tau lambda_max, lambda_min = eigenvalues[-1], eigenvalues[0] print "lambda_max ", lambda_max, " lambda_min ", lambda_min tau = 2/(lambda_max + lambda_min) norm_of_residual = 1.0 no_iterations= 0 while norm_of_residual > 1.0e-6: r = A*u_prev - f u = u_prev - tau*r u_prev = u norm_of_residual = r.transpose()*r no_iterations+=1

# make sure the iteration starts

# compute the residual # the Richardson iteration # check for norm of residual # count no iterations

print "N ", N, " number of iterations ", no_iterations

N 10 20 40 80 160 320

λmin 6.6 8.1 8.9 9.4 9.6 9.7

λmax 317 1435 6075 25*103 101*103 407*103

no. iterations 277 1088 4580 20 103 84 103 354 103

Estimated FLOPS 11 103 87 103 732 103 6.4 106 53 106 453 106

Table 8.1: The number of iterations of the Richardson iteration for solving a 1D Poisson problem. The FLOPS is estimated as the number of iterations times four times the number of unknowns, N, as the matrix is tridiagonal and there is both a matrix vector product (3N) and a vector addtion involved in (8.1).

We remark that in this example we have initialized the iteration with a random vector because such a vector contains errors at all frequencies. This is recommended practice when trying to estabilish a worst case scenario. Testing the iterative method against a known analytical solution with a zero start vector will often only require smooth error to be removed during the iterations and will therefore underestimate the complications of a real-world problem. 8.1.1

The stopping criteria

In the Example 8.2 we considered the Richardson iteration applied to a Poisson problem in 1D. We saw that in order to stop the iteration we had to choose a stopping criteria. Ideally we would like to stop when the error was small enough. The problem is that the error is uknown. In fact, since en = un − u we would be able to compute the exact solution if the error was known at the n’th iteration. What is computable is the residual at the n’th iteration, defined by r n = Aun − f . It is straightforward to show that Aen = r n .

Chapter 8. Iterative methods and Preconditioning

74

But computing en from this relation would require the inversion of A (which we try to avoid at all cost since it in general is a O( N 3 ) procedure). For this reason, the convergence criteria is typically expressed in terms of some norm of the residual. We may bound the n’th error as

ken k ≤ k A−1 kkr n k. However, estimating k A−1 k is in general challenging or computationally demanding and therefore usually avoided. To summarize, choosing an appropriate stopping criteria is in general challenging and in practice the choice has to be tailored to concrete application at hand by trial and error.

8.2

The idea of preconditioning

The basic idea of preconditioning is to replace Au = b with BAu = Bb. Both systems have the same solution (if B is nonsingular). However, B should be chosen as a cheap approximation of A−1 or at least in such a way that BA has a smaller condition number than A. Furthermore Bu should cost O( N ) operations to evaluate. Obviously, the preconditioner B = A−1 would make the condition number of BA be one and the Richardson iteration would converge in one iteration. However, B = A−1 is a very computationally demanding preconditioner. We would rather seek preconditioners that are O( N ) in both memory consumption and evaluation. The generalized Richardson iteration becomes

un = un−1 − τB( Aun−1 − b). The error in the n-th iteration is

(8.7)

en = en−1 − τBAen−1

and the iteration is convergent if k I − τBAk < 1. 8.2.1

Spectral equivalence and order optimal algorithms

Previously we stated that a good preconditioner is supposed to be similar to A−1 . The precise (and most practical) property that is required of a preconditioner is: • B should be spectrally equivalent with A−1 . • The evaluation of B on a vector, Bv, should be O( N ). • The storage of B should be O( N ). Definition 8.1. Two linear operators or matrices A−1 and B, that are symmetric and positive definite are spectral equivalent if: c1 ( A−1 v, v) 6 ( Bv, v) 6 c2 ( A−1 v, v)

∀v

If A−1 and B are spectral equivalent, then the condition number of the matrix BA is κ ( BA) 6

(8.8) c2 c1 .

Chapter 8. Iterative methods and Preconditioning

75

If the preconditioner B is spectrally equivalent with A−1 then the preconditioned Richardson iteration yields and order optimal algorithm. To see this, we note that en = ( I − τBA)en−1 . We can estimate the behavior of en by using the A-norm, ρ A = k I − τBAk A . Then we get

k e n k A 6 ρ A k e n −1 k A . Hence, if the condition number is independent of the discretization then the number of iterations as estimated earlier in (8.3) will be bounded independently of the discretization. In general, if A is a discretization of −∆ on a quasi-uniform mesh then both multigrid methods and domain decomposition methods will yield preconditioners that are spectrally equivalent with the inverse and close to O( N ) in evaluation and storage. The gain in using a proper preconditioner may provide speed-up of several orders of magnitude, see Example 8.3.

8.3

Krylov methods and preconditioning

For iterative methods, any method involving linear iterations may be written as a Richardson iteration with a preconditioner. However, iterative methods like Conjugate Gradient method, GMRES, Minimal Residual method, and BiCGStab, are different. These are nonlinear iterations where for instance the relaxation parameter τ changes during the iterations and are in fact often choosen optimally with respect to the current approximation. Avoiding the need to determine a fixed relaxation parameter prior to the iterations is of course a huge practical benefit. Still, the convergence in practice can usually be roughly estimated by the convergence analysis above for the Richardson iteration. We will not go in detail on these methods. We only remark that also with these methods it is essential with a good preconditioning technique in order for efficient computations. Furthermore, some of them have special requirements and in some cases it is well-known what to use. General Advice for usage of different methods: We classify the methods according to the matrices they are used to solve. • If a matrix is Symmetric Positive Definite(SPD), i.e., A = A T and x T Ax ≥ 0 ∀ x the the Conjugate Gradient method (CG) is the method of choice. CG needs an SPD preconditioner, see also Exercise 8.6. • If a matrix is Symmetric but indefinite, i.e. A = A T but both positive and negative eigenvalues then the Minimal Residual method (MR) is the best choice. MR requires an SPD preconditioner, see also Exercise 8.9. • If the matrix is positive, i.e., x T Ax ≥ 0 ∀ x which is often the case for convection-diffusion problems or flow problems then GMRES with either ILU or AMG are often good, but you might need to experiment, see also Exercise 8.7. • For matrices that are both nonsymmetric and indefinite there is a jungle of general purpose methods but they may be categories in two different families. In our experience the BiCGStab and GMRES methods are the two most prominent algorithms in these families. GMRES is relatively roboust but may stagnate. BiCGStab may break down. GMRES has a parameter ’the number of search vectors’ that may be tuned. Most linear algebra libraries for high performance computing like for instance PETSc, Trilinos, Hypre have these algorithms implemented. They are also implemented in various libraries in Python and Matlab. There is usually no need to implement these algorithms yourself.

Chapter 8. Iterative methods and Preconditioning

76

90

lu cg cg/ilu cg/amg

80 70

Time(sec)

60 50 40 30 20 10 00

200000

400000 600000 800000 Degrees of freedom

1000000

1200000

Figure 8.1: CPU time (in seconds) for solving a linear system of equation with N degrees of freedom (x-axis) for different solvers

Example 8.3 (CPU times of different algorithms). In this example we will solve the problem u − ∆u ∂u ∂n

f,

in

= 0,

on

∂Ω

=

where Ω is the unit square with first order Lagrange elements. The problem is solved with four different methods: • a LU solver, • Conjugate Gradient method, • Conjugate Gradient method with an ILU preconditioner, and • Conjugate Gradient method with an AMG preconditioner, for N = 322 , 642 , 1282 , 2562 , 5122 , 10242 , where N is the number of degrees of freedom. Figure 8.1 shows that there is a dramatic difference between the algorithms. In fact the Conjugate gradient (CG) with an AMG preconditioner is over 20 times faster then the slowest method, which is the CG solver without preconditioner. One might wonder why the LU solver is doing so well in this example when it costs O( N 2 ) – O( N 3 ) . However, if we increase the number of degrees of freedom, then the method would slow down compared to the other methods. The problem is then that it would require too much memory and the program would probably crash. Python code 1 2 3 4 5 6 7 8 9 10 11

from dolfin import * import time lu_time = []; cgamg_time = [] cg_time = []; cgilu_time = [] Ns = [] parameters["krylov_solver"]["relative_tolerance"] = 1.0e-8 parameters["krylov_solver"]["absolute_tolerance"] = 1.0e-8 parameters["krylov_solver"]["monitor_convergence"] = False parameters["krylov_solver"]["report"] = False parameters["krylov_solver"]["maximum_iterations"] = 50000


def solving_time(A, b, solver):
    U = Function(V)
    t0 = time.time()
    if len(solver) == 2:
        solve(A, U.vector(), b, solver[0], solver[1])
    else:
        solve(A, U.vector(), b, solver[0])
    t1 = time.time()
    return t1 - t0

for N in [32, 64, 128, 256, 512, 1024]:
    Ns.append(N)
    mesh = UnitSquare(N, N)
    print " N ", N, " dofs ", mesh.num_vertices()
    V = FunctionSpace(mesh, "Lagrange", 1)
    u = TrialFunction(V)
    v = TestFunction(V)
    f = Expression("sin(x[0]*12) - x[1]")
    a = u*v*dx + inner(grad(u), grad(v))*dx
    L = f*v*dx
    A = assemble(a)
    b = assemble(L)

    t2 = solving_time(A, b, ["lu"])
    print "Time for lu ", t2
    lu_time.append(t2)

    t2 = solving_time(A, b, ["cg"])
    print "Time for cg ", t2
    cg_time.append(t2)

    t2 = solving_time(A, b, ["cg", "ilu"])
    print "Time for cg/ilu ", t2
    cgilu_time.append(t2)

    t2 = solving_time(A, b, ["cg", "amg"])
    print "Time for cg/amg ", t2
    cgamg_time.append(t2)

import pylab
pylab.plot(Ns, lu_time)
pylab.plot(Ns, cg_time)
pylab.plot(Ns, cgilu_time)
pylab.plot(Ns, cgamg_time)
pylab.xlabel('Unknowns')
pylab.ylabel('Time(sec)')
pylab.legend(["lu", "cg", "cg/ilu", "cg/amg"])
pylab.show()

pylab.loglog(Ns, lu_time)
pylab.loglog(Ns, cg_time)
pylab.loglog(Ns, cgilu_time)
pylab.loglog(Ns, cgamg_time)
pylab.legend(["lu", "cg", "cg/ilu", "cg/amg"])
pylab.savefig('tmp_cpu.pdf')
pylab.show()

e \ N    64                  128                 256                 512                 1024
1.0e-1   1.3e-02 (1.1e-02)   1.4e-02 (3.5e-02)   8.8e-03 (1.4e-01)   3.4e-03 (5.9e-01)   1.1e-02 (2.5e+00)
1.0e-2   1.2e-03 (1.0e-02)   2.0e-03 (3.7e-02)   1.3e-03 (1.5e-01)   3.5e-03 (5.8e-01)   3.7e-04 (2.7e+00)
1.0e-3   3.6e-04 (1.1e-02)   3.1e-04 (3.9e-02)   2.6e-04 (1.6e-01)   2.7e-04 (6.3e-01)   3.7e-04 (2.7e+00)
1.0e-4   3.4e-04 (1.2e-02)   8.5e-05 (4.5e-02)   2.4e-05 (1.8e-01)   3.4e-05 (6.7e-01)   1.4e-05 (2.9e+00)
1.0e-5   3.4e-04 (1.2e-02)   8.4e-05 (4.7e-02)   2.1e-05 (1.9e-01)   5.4e-06 (7.6e-01)   2.8e-06 (3.1e+00)
1.0e-6   3.4e-04 (1.3e-02)   8.4e-05 (5.0e-02)   2.1e-05 (2.1e-01)   5.3e-06 (8.1e-01)   1.3e-06 (3.3e+00)

Table 8.2: The error ‖u − u_h^n‖ and the corresponding CPU time in parentheses when solving a Poisson problem with homogeneous Dirichlet conditions.


When we employ iterative methods, we need to specify the convergence criterion. This is often not an easy task. We have the continuous solution u, the discrete solution u_h, and the approximate discrete solution u_h^n found by an iterative method at iteration n. Obviously, we may estimate the error as

‖u − u_h^n‖ ≤ ‖u − u_h‖ + ‖u_h − u_h^n‖,

and it makes sense to balance the two terms ‖u − u_h‖ and ‖u_h − u_h^n‖. Still, both terms may be hard to estimate in challenging applications. In practice, an appropriate convergence criterion is usually found by trial and error, by choosing a stopping criterion based on the residual. Let us therefore consider a concrete example and study ‖u − u_h^n‖ as a function of the mesh resolution and a varying convergence criterion. Table 8.2 shows the error and the corresponding CPU timings when solving a Poisson problem at various resolutions and convergence criteria. Here, the convergence criterion is chosen as reducing the relative residual, i.e., ‖r_k‖/‖r_0‖, by the factor ε. This convergence criterion is very common, in particular for stationary problems. There are several things to note here. For the coarse resolution, N = 64, the error stagnates somewhere between 1.0e-3 and 1.0e-4, and this stagnation marks where an appropriate stopping criterion lies. It is however worth noticing that solving to the tolerance 1.0e-6 is actually only about 30% more computationally demanding than to 1.0e-3. This is due to the fact that we have a very efficient method that reduces the error by about a factor 10 per iteration. If we consider the fine resolution, N = 1024, we see that the stagnation happens later, and we may not even have reached the stagnation point at ε = 1.0e-6. We also notice that decreasing ε in this case only leads to a moderate growth in CPU time. If we look closer at the table, we find that the stagnation point follows a staircase pattern. The code used to generate the table is as follows:

Python code

from dolfin import *

def boundary(x, on_boundary):
    return on_boundary

parameters["krylov_solver"]["relative_tolerance"] = 1.0e-18
parameters["krylov_solver"]["absolute_tolerance"] = 1.0e-18
parameters["krylov_solver"]["monitor_convergence"] = True
parameters["krylov_solver"]["report"] = True
#parameters["krylov_solver"]["maximum_iterations"] = 50000

epss = [1.0e-1, 1.0e-2, 1.0e-3, 1.0e-4, 1.0e-5, 1.0e-6]
data = {}
Ns = [64, 128, 256, 512, 1024]

#Ns = [8, 16, 32, 64]
for N in Ns:
    for eps in epss:
        parameters["krylov_solver"]["relative_tolerance"] = eps
        mesh = UnitSquareMesh(N, N)
        V = FunctionSpace(mesh, "P", 1)
        u = TrialFunction(V)
        v = TestFunction(V)
        u_ex = Expression("sin(3.14*x[0])*sin(3.14*x[1])", degree=3)
        f = Expression("2*3.14*3.14*sin(3.14*x[0])*sin(3.14*x[1])", degree=3)
        a = inner(grad(u), grad(v))*dx
        L = f*v*dx
        U = Function(V)
        A = assemble(a)
        b = assemble(L)
        bc = DirichletBC(V, u_ex, boundary)
        bc.apply(A)
        bc.apply(b)
        t0 = time()
        solve(A, U.vector(), b, "gmres", "amg")
        t1 = time()
        cpu_time = t1 - t0
        error_L2 = errornorm(u_ex, U, 'L2', degree_rise=3)
        data[(N, eps)] = (error_L2, cpu_time)

for eps in epss:
    for N in Ns:
        D1, D2 = data[(N, eps)]
        print " %3.1e (%3.1e) " % (D1, D2),
    print ""

Example 8.4 (Eigenvalues of the preconditioned system). It is often interesting to assess the condition number of the preconditioned system BA. If the preconditioner is a matrix and the size of the system is moderate, we may be able to estimate the condition number of BA using NumPy, Matlab or Octave. However, when our preconditioner is an algorithm representing a linear operator, such as in the case of multigrid, this is not possible. However, as described in Saad [2003], eigenvalues may be estimated as a by-product of the Conjugate Gradient method. Without going into the algorithmic details of the implementation, we mention that this is implemented in the FEniCS module cbc.block, see Mardal and Haga [2012]. The following code shows the usage.

Python code

from dolfin import *
from block.iterative import ConjGrad
from block.algebraic.petsc import ML
from numpy import random

def boundary(x, on_boundary):
    return on_boundary

class Source(Expression):
    def eval(self, values, x):
        dx = x[0] - 0.5; dy = x[1] - 0.5
        values[0] = 500.0*exp(-(dx*dx + dy*dy)/0.02)

Ns = [8, 16, 32, 64, 128, 256, 512, 1024]
for N in Ns:
    mesh = UnitSquareMesh(N, N)
    V = FunctionSpace(mesh, "CG", 1)

    # Define variational problem
    v = TestFunction(V)
    u = TrialFunction(V)
    f = Source(degree=3)
    a = dot(grad(v), grad(u))*dx
    L = v*f*dx
    bc = DirichletBC(V, Constant(0), boundary)

    # Assemble matrix and vector, create preconditioner and start vector
    A, b = assemble_system(a, L, bc)
    B = ML(A)
    x = b.copy()
    x[:] = random.random(x.size(0))

    # Solve problem and print out eigenvalue estimates
    Ainv = ConjGrad(A, precond=B, initial_guess=x, tolerance=1e-8, show=2)
    x = Ainv*b
    e = Ainv.eigenvalue_estimates()
    print "N=%d iter=%d K=%.3g" % (N, Ainv.iterations, e[-1]/e[0])

In this example we see that the condition number increases logarithmically from 1.1 to 2.1 as N increases from 8 to 1024. The AMG preconditioner has better performance and does not show logarithmic growth. For indefinite symmetric systems, the CGN method provides the means for estimating the condition number, cf. the cbc.block documentation.
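To illustrate the first approach mentioned above, here is a minimal sketch of estimating κ(BA) directly with NumPy when A and B are explicitly available matrices of moderate size. The 1D Poisson matrix and the Jacobi preconditioner are assumptions chosen only for the sake of the example.

Python code

import numpy as np

# Assemble a 1D Poisson matrix (assumed test problem) and a Jacobi preconditioner
n = 100
h = 1.0/(n + 1)
A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))/h**2
B = np.diag(1.0/np.diag(A))

# BA is similar to a symmetric matrix, so its eigenvalues are real
lam = np.sort(np.linalg.eigvals(B @ A).real)
print("condition number of BA:", lam[-1]/lam[0])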

8.3.1 Insight from Functional Analysis

In the previous Chapters 6 and 7 we have discussed the well-posedness of the convection-diffusion equation and the Stokes problem. In both cases, the problems were well-posed, meaning that the differential operators as well as their inverses were continuous. However, when we discretize the problems we get matrices whose condition numbers grow to infinity as the element size goes to zero. This seems to contradict the well-posedness of our discrete problems and may potentially destroy both the accuracy and efficiency of our numerical algorithms. Functional analysis explains this apparent contradiction and explains how the problem is circumvented by preconditioning. Let us now consider the seeming contradiction in more precise mathematical detail for the Poisson problem with homogeneous Dirichlet conditions: Find u such that

−∆u = f,   in Ω,      (8.9)
u = 0,    on ∂Ω.     (8.10)

We know from Lax-Milgram's theorem that the weak formulation of this problem: Find u ∈ H₀¹ such that

a(u, v) = b(v),   ∀v ∈ H₀¹,

where

a(u, v) = ∫_Ω ∇u · ∇v dx,      (8.11)
b(v) = ∫_Ω f v dx,            (8.12)

is well-posed because

a(u, u) ≥ α|u|₁²,   ∀u ∈ H₀¹,      (8.13)
a(u, v) ≤ C|u|₁|v|₁,   ∀u, v ∈ H₀¹.   (8.14)

Here |·|₁ denotes the H¹ semi-norm, which is known to be a norm on H₀¹ due to Poincaré's inequality. The well-posedness is in this case stated as

|u|₁ ≤ (1/α) ‖f‖₋₁.      (8.15)

In other words, −∆ takes a function u in H₀¹ and returns a function f = −∆u which is in H⁻¹, and we have that ‖f‖₋₁ = ‖−∆u‖₋₁ ≤ C‖u‖₁. Also, (−∆)⁻¹ takes a function f in H⁻¹ and returns a function u = (−∆)⁻¹f which is in H₀¹, and we have that ‖u‖₁ = ‖(−∆)⁻¹f‖₁ ≤ (1/α)‖f‖₋₁. In fact, in this case α = C = 1. This play with words and symbols may be formalized by using operator norms that generalize matrix norms. Let B ∈ ℝ^{n×m}; then

‖B‖_{L(ℝᵐ,ℝⁿ)} = max_{x ∈ ℝᵐ} ‖Bx‖_{ℝⁿ} / ‖x‖_{ℝᵐ}.

Here L(ℝᵐ, ℝⁿ) denotes the space of all n × m matrices, i.e., of linear maps from ℝᵐ to ℝⁿ. Analogously, we may summarize the mapping properties of −∆ and (−∆)⁻¹ in terms of the conditions of Lax-Milgram's theorem as

‖−∆‖_{L(H₀¹, H⁻¹)} ≤ C   and   ‖(−∆)⁻¹‖_{L(H⁻¹, H₀¹)} ≤ 1/α,      (8.16)

where L(X, Y) denotes the space of bounded linear operators mapping X to Y. In other words, −∆ is a bounded linear map from H₀¹ to H⁻¹ and (−∆)⁻¹ is a bounded linear map from H⁻¹ to H₀¹. It is a crucial observation in functional analysis that, in contrast to a matrix, which is a bounded linear map from ℝᵐ to ℝⁿ, an operator may map from one space to a different space. From Chapter 3 we know that the eigenvalues and eigenfunctions of −∆ with homogeneous Dirichlet conditions on the unit interval in 1D are λ_k = (πk)² and e_k = sin(πkx), respectively. Hence the eigenvalues of −∆ obviously tend to ∞ as k grows to ∞, and similarly the eigenvalues of (−∆)⁻¹ accumulate at zero as k → ∞. Hence the spectrum of −∆ is unbounded and the spectrum of (−∆)⁻¹ has an accumulation point at zero. Still, the operator −∆ and its inverse are bounded from a functional analysis point of view, in the sense of (8.16).
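The following is a small NumPy sketch illustrating the discrete counterpart of this observation. It assumes a standard second-order discretization of −u'' on the unit interval with homogeneous Dirichlet conditions; the smallest eigenvalue stays near π² while the largest grows like 4/h², so the condition number grows like h⁻² under mesh refinement.

Python code

import numpy as np

# Eigenvalues of an assumed 1D discretization of -u'' on (0,1),
# homogeneous Dirichlet conditions, for a sequence of mesh sizes
for n in [10, 20, 40, 80]:
    h = 1.0/(n + 1)
    A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))/h**2
    lam = np.linalg.eigvalsh(A)
    # smallest eigenvalue ~ pi^2, largest ~ 4/h^2, condition number ~ h^(-2)
    print(n, lam[0], lam[-1], lam[-1]/lam[0])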

Let us for the moment assume that we have access to an operator B with mapping properties that are inverse to those of A = −∆, i.e.,

‖B‖_{L(H⁻¹, H₀¹)} < ∞   and   ‖B⁻¹‖_{L(H₀¹, H⁻¹)} < ∞.      (8.17)

Then it follows directly that

‖BA‖_{L(H₀¹, H₀¹)} < ∞   and   ‖(BA)⁻¹‖_{L(H₀¹, H₀¹)} < ∞,      (8.18)

and the condition number

κ(BA) = max_i λ_i(BA) / min_i λ_i(BA) = ‖BA‖_{L(H₀¹,H₀¹)} ‖(BA)⁻¹‖_{L(H₀¹,H₀¹)}

would be bounded. In the discrete case, the mapping property (8.17) translates to the requirement that B should be spectrally equivalent with the inverse of A when B and A are both positive. While the above discussion is mostly just a re-iteration of the concept of spectral equivalence in the discrete case when the PDEs are elliptic, the insight from functional analysis can be powerful for systems of PDEs. Let us consider the Stokes problem from Chapter 7. The problem reads:

A ( u )   ( −∆  −∇ ) ( u )
  ( p ) = ( ∇·   0 ) ( p ).

As discussed in Chapter 7,

A : H₀¹ × L² → H⁻¹ × L²

was a bounded linear mapping with a bounded inverse. Therefore, a preconditioner can be constructed as

B = ( (−∆)⁻¹  0 )
    (    0    I ).

Clearly,

B : H⁻¹ × L² → H₀¹ × L²

and B is therefore a suitable preconditioner. However, we also notice that A and B⁻¹ are quite different: A is indefinite, with both positive and negative eigenvalues, while B is clearly positive. Hence, the operators are not spectrally equivalent. Exercise 8.9 looks deeper into this construction of preconditioners for the Stokes problem. A more comprehensive description of this technique can be found in Mardal and Winther [2011].
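As a concrete, matrix-level illustration of such a block-diagonal preconditioner, the following is a minimal SciPy sketch for a toy saddle-point system with the structure [[K, Bᵀ], [B, 0]]. The particular choices of K and B are assumptions made only so that the system is symmetric and invertible; they do not come from an actual Stokes discretization, where the pressure block of the preconditioner would instead be a mass matrix, so mesh-independent iteration counts should not be expected here.

Python code

import numpy as np
from scipy.sparse import diags, bmat, identity, kron
from scipy.sparse.linalg import minres, LinearOperator, splu

# Toy saddle-point system [[K, B^T], [B, 0]] with assumed stand-in blocks
m = 50; n = 4*m
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()*(n + 1)**2
B = kron(identity(m), np.ones((1, 4))).tocsc()   # full-rank constraint matrix
A = bmat([[K, B.T], [B, None]]).tocsc()

# Block-diagonal preconditioner diag(K, I), applied as a solve with K
Ksolve = splu(K)
def apply_prec(r):
    return np.concatenate([Ksolve.solve(r[:n]), r[n:]])

M = LinearOperator(A.shape, matvec=apply_prec)
x, info = minres(A, np.ones(n + m), M=M)
print(info)   # 0 indicates convergence

Note that MINRES requires a symmetric positive definite preconditioner, matching the requirement stated for the Minimal Residual method earlier in this chapter, even though A itself is indefinite.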

8.4 Exercises

Exercise 8.1. Estimate the ratio of non-zeros per unknown of the stiffness matrix on the unit square with Lagrangian elements of order 1, 2, 3 and 4. Hint: the number of non-zeros can be obtained from the function 'nnz' of a matrix object.

Exercise 8.2. Compute the smallest and largest eigenvalues of the mass matrix and the stiffness matrix in 1D, 2D and 3D. Assume that the condition number is of the form κ ≈ Ch^α, where C and α may depend on the number of space dimensions. Finally, compute the corresponding condition numbers. Does the condition number have the same dependence on the number of space dimensions?

Exercise 8.3. Repeat Exercise 8.2, but with Lagrange elements of order 1, 2 and 3. How does the order of the polynomials affect the eigenvalues and condition numbers?

Exercise 8.4. Compute the eigenvalues of the discretized Stokes problem using Taylor-Hood elements. Note that the problem is indefinite and that there are both positive and negative eigenvalues. An appropriate condition number is

κ = max_i |λ_i| / min_i |λ_i|,

where λ_i are the eigenvalues of A. Compute the corresponding condition numbers for the Mini and Crouzeix-Raviart elements. Are the condition numbers similar?

Exercise 8.5. Implement the Jacobi iteration for a 1D Poisson problem with homogeneous Dirichlet conditions. Start the iteration with a random initial vector and estimate the number of iterations required to reduce the L² norm of the residual by a factor 10⁴. For relevant code see Example 8.3.

Exercise 8.6. Test the CG method without a preconditioner, with an ILU preconditioner and with an AMG preconditioner for the Poisson problem in 1D and 2D with homogeneous Dirichlet conditions, with respect to different mesh resolutions. Do some of the iteration counts suggest spectral equivalence?

Exercise 8.7. Test CG, BiCGStab and GMRES with ILU, AMG, and Jacobi preconditioning for

−µ∆u + v · ∇u = f   in Ω,
u = 0   on ∂Ω,

where Ω is the unit square, v = c sin(7x), and c varies as 1, 10, 100, 1000, 10000, while the mesh resolution h varies as 1/8, 1/16, 1/32, 1/64. You may assume homogeneous Dirichlet conditions.

Exercise 8.8. The following code snippet shows the assembly of the matrix and preconditioner for a Stokes problem:

Python code

a = inner(grad(u), grad(v))*dx + div(v)*p*dx + q*div(u)*dx
L = inner(f, v)*dx

# Form for use in constructing preconditioner matrix
b = inner(grad(u), grad(v))*dx + p*q*dx

# Assemble system
A, bb = assemble_system(a, L, bcs)

# Assemble preconditioner system
P, btmp = assemble_system(b, L, bcs)

# Create Krylov solver and AMG preconditioner
solver = KrylovSolver("tfqmr", "amg")

# Associate operator (A) and preconditioner matrix (P)
solver.set_operators(A, P)

# Solve
U = Function(W)
solver.solve(U.vector(), bb)

Here, "tfqmr" is a variant of the Minimal residual method and "amg" is an algebraic multigrid implementation in HYPRE. Test, by varying the mesh resolution, whether the code produces an order–optimal preconditioner. HINT: You might want to change the "parameters" as done in Example 8.3: Python code 1 2 3 4 5 6 7

# Create Krylov solver and AMG preconditioner
solver = KrylovSolver("tfqmr", "amg")
solver.parameters["relative_tolerance"] = 1.0e-8
solver.parameters["absolute_tolerance"] = 1.0e-8
solver.parameters["monitor_convergence"] = True
solver.parameters["report"] = True
solver.parameters["maximum_iterations"] = 50000

Exercise 8.9. Consider the mixed formulation of linear elasticity that is appropriate when λ is large compared to µ, that is,

Python code

a = inner(grad(u), grad(v))*dx + div(v)*p*dx + q*div(u)*dx - 1/lam*p*q*dx
L = inner(f, v)*dx

Create two preconditioners:

Python code

b1 = inner(grad(u), grad(v))*dx + p*q*dx
b2 = inner(grad(u), grad(v))*dx + 1/lam*p*q*dx

Compare the efficiency of the different preconditioners when increasing the resolution and when λ → ∞. Can you explain why the first preconditioner is the best?

9 Linear elasticity and singular problems

By Anders Logg, Kent-Andre Mardal

9.1 Introduction

Let us consider an elastic body Ω₀ that is being deformed under a load to become Ω. Let χ denote the deformation of the body from the undeformed state Ω₀ to the deformed state Ω. A point X in the body has then moved

u = x − X,      (9.1)

which by definition is the displacement field. An illustration is shown in Figure 9.1.

Figure 9.1: Deforming body and displacement vector u.

Here, the domain Ω₀ ⊂ ℝ³. From continuum mechanics, the elastic deformation is modelled by the stress tensor σ, which is a symmetric 3 × 3 tensor. In equilibrium (i.e., with no acceleration terms), Newton's second law states the balance of forces as

div σ = f,   in Ω,
σ · n = g,   on ∂Ω,

where f and g are body and surface forces, respectively, and n is the outward normal vector. For small deformations of an isotropic medium, Hooke's law is a good approximation. Hooke's law states that

σ = 2µ ε(u) + λ tr(ε(u)) δ.

Here, ε(u) is the strain tensor, or the symmetric gradient:

ε(u) = ½(∇u + (∇u)ᵀ),

µ and λ are the Lamé constants, tr is the trace operator (the sum of the diagonal matrix entries), u is the displacement, and δ is the identity tensor

δ = ( 1 0 0 )
    ( 0 1 0 )
    ( 0 0 1 ).

From Newton's second law and Hooke's law we arrive directly at the equation of linear elasticity:

−2µ(∇ · ε(u)) − λ∇(∇ · u) = 0.      (9.2)

The equation of linear elasticity (9.2) is an elliptic equation, but there are crucial differences between this equation and a standard elliptic equation like −∆u = f. These differences often cause problems in a numerical setting. To explain the numerical issues we will here focus on the differences between the three operators:

1. −∆ = ∇ · ∇ = div grad,
2. ∇ · ε = ∇ · (½(∇ + (∇)ᵀ)),
3. ∇ tr ε = ∇ ∇· = grad div.

In particular, the difference between the operators in 1 and 2 is that ∇ · ε has a larger kernel than −∆. The kernel consists of the rigid motions, and this leads to the usage of one of Korn's lemmas; a small symbolic check that rigid motions lie in this kernel is shown below. This is the subject of Section 9.2. The kernels of the operators grad div and div grad are also different, but here the kernel of grad div is in fact infinite-dimensional, and this has different consequences for the numerical algorithms, which do not necessarily pick up this kernel at all. This is discussed in Section 9.3.
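As a quick sanity check of the kernel claim, here is a small SymPy sketch verifying that a general 2D rigid motion, of the form given later in (9.10), has a vanishing symmetric gradient:

Python code

import sympy as sp

x, y, a0, a1, a2 = sp.symbols("x y a0 a1 a2")
u = sp.Matrix([a0 - a2*y, a1 + a2*x])      # general rigid motion in 2D
G = u.jacobian(sp.Matrix([x, y]))
eps = (G + G.T)/2                          # the symmetric gradient (strain)
print(sp.simplify(eps))                    # prints the zero matrix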

9.2 The operator ∇ · ε and rigid motions

The challenge with the ∇ · ε operator is the handling of the singularity in the case of pure Neumann conditions. Let us therefore start with the simpler problem of the Poisson problem with Neumann conditions, i.e.,

−∆u = f,   in Ω,      (9.3)
∂u/∂n = g,   on ∂Ω.     (9.4)

It is easy to see that this problem is singular: let u be a solution of the above equation; then u + C with C ∈ ℝ is also a solution, because −∆(u + C) = −∆u = f and ∂(u + C)/∂n = ∂u/∂n = g. Hence, the solution is only determined up to a constant, which means that the kernel is one-dimensional. A proper formulation of the above problem can be obtained by using the method of Lagrange multipliers to fix the component in the one-dimensional kernel. The following weak formulation is well-posed: Find u ∈ H¹ and λ ∈ ℝ such that

a(u, v) + b(λ, v) = f(v),   ∀v ∈ H¹,      (9.5)
b(u, γ) = 0,   ∀γ ∈ ℝ.      (9.6)

Here,

a(u, v) = (∇u, ∇v),      (9.7)
b(λ, v) = (λ, v),        (9.8)
f(v) = (f, v) + ∫_∂Ω g v ds.      (9.9)

Hence, the method of Lagrange multipliers turns the original problem into a saddle-point problem similar to that in Chapter 7. However, in this case the Brezzi conditions are easily verified. We remark, however, that this formulation makes the problem indefinite rather than positive definite, and for this reason alternative techniques such as pin-pointing are often used instead. We do not advocate this approach, as it often causes numerical problems. Instead, we include a code example that demonstrates how this problem can be implemented with the method of Lagrange multipliers in FEniCS.

Python code

from dolfin import *

mesh = UnitSquareMesh(64, 64)

# Build function space with Lagrange multiplier
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
R = FiniteElement("Real", mesh.ufl_cell(), 0)
W = FunctionSpace(mesh, P1 * R)

# Define variational problem
(u, c) = TrialFunctions(W)
(v, d) = TestFunctions(W)
f = Expression("10*exp(-(pow(x[0] - 0.5, 2) + pow(x[1] - 0.5, 2)) / 0.02)", degree=2)
g = Expression("-sin(5*x[0])", degree=2)
a = (inner(grad(u), grad(v)) + c*v + u*d)*dx
L = f*v*dx + g*v*ds

# Compute solution
w = Function(W)
solve(a == L, w)
(u, c) = w.split()

# Plot solution
plot(u, interactive=True)

The kernel of the ε operator is the space of rigid motions, RM, which consists of translations and rotations. Rigid motions are of the following form in 2D and 3D:

RM_2D = ( a₀ ) + a₂ ( −y )
        ( a₁ )      (  x ),      (9.10)

RM_3D = ( a₀ )   (   0    a₃   a₄ ) ( x )
        ( a₁ ) + ( −a₃    0    a₅ ) ( y )
        ( a₂ )   ( −a₄  −a₅    0 ) ( z ).      (9.11)

Hence, the kernel in 2D is three-dimensional and may be expressed as above in terms of the degrees of freedom (a₀, a₁, a₂), whereas the kernel in 3D is six-dimensional, (a₀, …, a₅). Korn's lemmas state suitable conditions for solvability. Here, we include two of the three inequalities typically listed.

• The first lemma: for all u ∈ H¹ \ RM we have that ‖ε(u)‖ ≥ C‖u‖₁.

• The second lemma: for all u ∈ H₀¹ we have that ‖ε(u)‖ ≥ C‖u‖₁.

These lemmas should be compared with Poincaré's lemma and the equivalence of the |·|₁ and ‖·‖₁ norms. The second lemma states that when we have homogeneous Dirichlet conditions, we obtain a well-posed problem in a similar manner as for a standard elliptic problem. This case is often called fully clamped conditions. For the Neumann problem, however, coercivity is not obtained unless we remove the complete set of rigid motions from the function spaces used for trial and test functions. Removing the rigid motions is most easily done by using the method of Lagrange multipliers. Let us now consider a weak formulation of the linear elasticity problem and describe how to implement it in FEniCS. For now we consider the case where λ and µ are of comparable magnitude. In the next section we consider the case where λ ≫ µ. The weak formulation of the linear elasticity problem is: Find u ∈ H¹ and λ ∈ RM such that

a(u, v) + b(λ, v) = f(v),   ∀v ∈ H¹,      (9.12)
b(u, γ) = 0,   ∀γ ∈ RM.      (9.13)

Here,

a(u, v) = µ(ε(u), ε(v)) + λ(div u, div v),      (9.14)
b(λ, v) = (λ, v),      (9.15)
f(v) = (f, v) + ∫_∂Ω g · v ds.      (9.16)

As we know from Chapter 7, this is a saddle-point problem and we need to comply with the Brezzi conditions. Verifying these conditions is left as Exercise 9.4.

Example 9.1. Our brain and spinal cord float in a water-like fluid called the cerebrospinal fluid. While the purpose of this fluid is not fully known, it is known that the pressure in the fluid oscillates by about 5-10 mmHg during a cardiac cycle, which lasts approximately one second, cf., e.g., Eide [2016]. The Young's modulus of brain tissue has been estimated at 16 kPa, and 1 mmHg ≈ 133 Pa, cf., e.g., Støverud et al. [2016]. To compute the deformation of the brain during a cardiac cycle, we solve the linear elasticity problem with Neumann conditions set as a pressure of 1 mmHg and ... The following code shows the implementation in FEniCS. The mesh of the brain was in this case obtained from a T1 magnetic resonance image, and segmentation was performed using FreeSurfer.

Python code

from fenics import *

mesh = Mesh('mesh/res32.xdmf')   # mm

plot(mesh, interactive=True)

# Since the mesh is in mm, pressure units in pascal must be scaled by alpha
alpha = (1e6)**(-1)

# Mark boundaries
class Neumann_boundary(SubDomain):
    def inside(self, x, on_boundry):
        return on_boundry

mf = FacetFunction("size_t", mesh)
mf.set_all(0)
Neumann_boundary().mark(mf, 1)
ds = ds[mf]

# Continuum mechanics
E = 16*1e3 * alpha
nu = 0.25
mu, lambda_ = Constant(E/(2*(1 + nu))), Constant(E*nu/((1 + nu)*(1 - 2*nu)))
epsilon = lambda u: sym(grad(u))

p_outside = 133 * alpha
n = FacetNormal(mesh)
f = Constant((0, 0, 0))

V = VectorFunctionSpace(mesh, "Lagrange", 1)

# --------------- Handle Neumann-problem --------------- #
R = FunctionSpace(mesh, 'R', 0)     # space for one Lagrange multiplier
M = MixedFunctionSpace([R]*6)       # space for all multipliers
W = MixedFunctionSpace([V, M])
u, mus = TrialFunctions(W)
v, nus = TestFunctions(W)

# Establish a basis for the nullspace of RM
e0 = Constant((1, 0, 0))            # translations
e1 = Constant((0, 1, 0))
e2 = Constant((0, 0, 1))
e3 = Expression(('-x[1]', 'x[0]', '0'))    # rotations
e4 = Expression(('-x[2]', '0', 'x[0]'))
e5 = Expression(('0', '-x[2]', 'x[1]'))
basis_vectors = [e0, e1, e2, e3, e4, e5]

a = 2*mu*inner(epsilon(u), epsilon(v))*dx + lambda_*inner(div(u), div(v))*dx
L = inner(f, v)*dx + p_outside*inner(n, v)*ds(1)

# Lagrange multipliers contribution to a
for i, e in enumerate(basis_vectors):
    mu = mus[i]
    nu = nus[i]
    a += mu*inner(v, e)*dx + nu*inner(u, e)*dx
# ------------------------------------------------------- #

# Assemble the system
A = PETScMatrix()
b = PETScVector()
assemble_system(a, L, A_tensor=A, b_tensor=b)

# Solve
uh = Function(W)
solver = PETScLUSolver('mumps')     # NOTE: we use a direct solver for simplicity
solver.set_operator(A)
solver.solve(uh.vector(), b)

# Split displacement and multipliers. Plot
u, ls = uh.split(deepcopy=True)
plot(u, mode='displacement', title='Neumann_displacement', interactive=True)

file = File('deformed_brain.pvd')
file << u

1. Make an Ansatz on each interval:

U(t) = ∑_{j=0}^{q} U^{n,j} λ_j(t),      (11.27)

∫_{t_{n−1}}^{t_n} ∑_{j=0}^{q} U^{n,j} λ̇_j(t) λ_i^{q−1}(t) dt = ∫_{t_{n−1}}^{t_n} λ_i^{q−1}(t) f dt.      (11.28)

This leads to a q × q linear system to be solved. It gives an implicit Runge-Kutta method for computing U^{n,j}, j = 1, 2, …, q.

11.2 The FEM for u̇ + A(u) = f

Strong form:

u̇ + A(u) = f,   in Ω × (0, T],      (11.29)
u(·, 0) = u₀,   in Ω,

+ BC.

Weak form: Find u ∈ V such that

∫₀ᵀ ∫_Ω v u̇ dx dt + ∫₀ᵀ ∫_Ω v A(u) dx dt = ∫₀ᵀ ∫_Ω v f dx dt,   ∀v ∈ V̂.      (11.30)

Finite element method: Find u_hk ∈ V_hk such that

∫₀ᵀ ∫_Ω v u̇_hk dx dt + ∫₀ᵀ ∫_Ω v A(u_hk) dx dt = ∫₀ᵀ ∫_Ω v f dx dt,   ∀v ∈ V̂_hk.      (11.31)

Solution algorithm: Take

V_hk = span{v = v_h v_k : v_h ∈ V_h, v_k ∈ V_k},
V̂_hk = span{v = v_h v_k : v_h ∈ V̂_h, v_k ∈ V̂_k}.      (11.32)

Then

∫₀ᵀ ∫_Ω v_h v_k u̇_hk dx dt + ∫₀ᵀ ∫_Ω v_h v_k A(u_hk) dx dt = ∫₀ᵀ ∫_Ω v f dx dt,      (11.33)

∫₀ᵀ v_k ∫_Ω v_h u̇_hk dx dt + ∫₀ᵀ v_k ∫_Ω v_h A(u_hk) dx dt = ∫₀ᵀ ∫_Ω v f dx dt.      (11.34)

Take

u_hk(x, t) = ∑_{j=1}^{N} U_j(t) φ_j(x),      (11.35)

v_h = φ_i,   i = 1, 2, …, N,

and A linear. Then

∫₀ᵀ v_k ∑_{j=1}^{N} U̇_j ∫_Ω φ_i φ_j dx dt + ∫₀ᵀ v_k ∑_{j=1}^{N} U_j ∫_Ω φ_i A(φ_j) dx dt = ∫₀ᵀ ∫_Ω v f dx dt.      (11.36)

We define the mass matrix M and the "stiffness matrix" A_k by

M_ij = ∫_Ω φ_i φ_j dx,      (11.37)
A_{k,ij} = ∫_Ω φ_i A(φ_j) dx.      (11.38)

Thus, we obtain

∫₀ᵀ v_k · M U̇ dt + ∫₀ᵀ v_k · A_k(U) dt = ∫₀ᵀ v_k b dt,      (11.39)

where

U = (U₁, U₂, …, U_N)ᵀ,      (11.40)
b = ∫_Ω v_h f dx.      (11.41)

The overall solution algorithm is sketched in Figure 11.3.

Example 11.2 (Heat equation). For

u̇ − ∆u = f,      (11.42)

FEM in space gives

M U̇ + A U = b.      (11.43)

Figure 11.3: The overall solution algorithm: starting from u̇ + A(u) = f, FEM in space gives the ODE system M U̇ + A_k(U) = b, and FEM in time then yields a timestepping scheme; equivalently, one may apply FEM in space-time directly.

Continuous Galerkin with q = 1 leads to

(M + (k_n/2) A) Uⁿ = (M − (k_n/2) A) U^{n−1} + k_n bⁿ.      (11.44)

References

D. Braess. Finite elements: Theory, fast solvers, and applications in solid mechanics. Cambridge University Press, 2007.

S. C. Brenner and R. Scott. The mathematical theory of finite element methods, volume 15. Springer Science & Business Media, 2008.

P. K. Eide. The correlation between pulsatile intracranial pressure and indices of intracranial pressure-volume reserve capacity: results from ventricular infusion testing. Journal of Neurosurgery, 125(6):1493-1503, 2016.

H. C. Elman, D. Silvester, and A. Wathen. Finite elements and fast iterative solvers: with applications in incompressible fluid dynamics. Oxford University Press, 2014.

M. Kuchta, K.-A. Mardal, and M. Mortensen. On the singular Neumann problem in linear elasticity. arXiv preprint arXiv:1609.09425, 2016.

K.-A. Mardal and J. B. Haga. Block preconditioning of systems of PDEs. In Automated Solution of Differential Equations by the Finite Element Method, pages 643-655, 2012.

K.-A. Mardal and R. Winther. Preconditioning discretizations of systems of partial differential equations. Numerical Linear Algebra with Applications, 18(1):1-40, 2011.

A. Quarteroni and A. Valli. Numerical approximation of partial differential equations, volume 23. Springer Science & Business Media, 2008.

J. Rauch. Partial Differential Equations. Springer, 1997.

Y. Saad. Iterative methods for sparse linear systems. SIAM, 2003.

K. H. Støverud, M. Alnæs, H. P. Langtangen, V. Haughton, and K.-A. Mardal. Poro-elastic modeling of syringomyelia: a systematic study of the effects of pia mater, central canal, median fissure, white and gray matter on pressure wave propagation and fluid movement within the cervical spinal cord. Computer Methods in Biomechanics and Biomedical Engineering, 19(6):686-698, 2016.
