# Numerical methods analysis of fluid flow

Systems of nonlinear partial differential equations (PDEs) are needed to describe realistic multiphase, multidimensional flow in a reservoir. As a rule, these equations cannot be solved analytically; they must be solved with numerical methods. This article provides an overview of these methods.

## One-dimensional convection/dispersion equation

To illustrate the mathematics, we discuss the numerical solution of the 1D convection/dispersion (C/D) equation

as introduced in vector analysis of fluid flow. As a reminder, *v* is velocity, *D* is dispersion, and *C* is concentration. **Eq. 1** is a good example to use because it illustrates many useful numerical methods that can be compared with the analytical solution given by **Eq. 2**.

We first introduce the concept of finite differences to convert **Eq. 1** to an equation that can be solved numerically. We then present a numerical representation of **Eq. 1** and illustrate its solution. For more details, you should consult the reservoir simulation page, as well as sources in the literature.^{[1]}^{[2]}^{[3]}^{[4]}^{[5]}^{[6]}^{[7]}

## Finite differences

One way to solve a PDE is to convert the PDE to finite-difference form. The finite-difference form is obtained by replacing the derivatives in the PDE with differences that are obtained from Taylor’s series. To illustrate the procedure, let us suppose that we know the function *f*(*x*) at two discrete points *x* = *x*_{i} and *x* = *x*_{i} + Δ*x*, where Δ*x* is an increment along the *x*-axis (**Fig. 1**). We can approximate the derivative d*f*(*x*)/d*x* at *x* = *x*_{i} by solving the Taylor’s series,

for d*f*(*x*)/d*x*. The result is

where *E*_{T} is the term

If we neglect *E*_{T}, we obtain the finite-difference approximation of the first derivative.

**Eq. 6** is an approximation because it neglects *E*_{T}, which is called the truncation error. In the limit as the increment Δ*x* approaches zero, the truncation error approaches zero, and the finite difference approaches the definition of the derivative.

The finite difference in **Eq. 6** is called a forward difference. Other differences are possible. Two that we use next are the backward difference,

and the centered difference,

**Eqs. 6** through **8** are all derived from Taylor’s series.
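The three differences can be checked numerically against a function with a known derivative. The following Python sketch applies **Eqs. 6** through **8** to *f*(*x*) = sin *x* at *x* = 1, where the exact derivative is cos(1); the centered difference is accurate to order Δ*x*^{2}, while the one-sided differences are accurate only to order Δ*x*:

```python
import math

def forward_diff(f, x, dx):
    # Eq. 6: forward difference, truncation error of order dx
    return (f(x + dx) - f(x)) / dx

def backward_diff(f, x, dx):
    # Eq. 7: backward difference, truncation error of order dx
    return (f(x) - f(x - dx)) / dx

def centered_diff(f, x, dx):
    # Eq. 8: centered difference, truncation error of order dx**2
    return (f(x + dx) - f(x - dx)) / (2.0 * dx)

x, dx = 1.0, 1e-3
exact = math.cos(x)  # exact derivative of sin(x)
```

With Δ*x* = 10^{-3}, the one-sided differences agree with cos(1) to roughly four decimal places, while the centered difference agrees to roughly seven.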

## Numerical solution of the 1D C/D equation

We illustrate the application of finite differences in a fluid flow problem by considering a specific finite-difference representation of the 1D C/D equation. For a more detailed discussion of the numerical analysis of **Eq. 1**, see Chap. 4 of Peaceman.^{[1]} In our example, we choose a backward difference for the time derivative in **Eq. 1**, a centered difference for the space derivative in the convection term, and a centered-in-time/centered-in-space difference for the dispersion term. **Eq. 1** is converted from a PDE to the difference equation

The subscripts of concentration *C* denote points in space, and the superscripts denote points in time. For example, the present time, *t*^{n}, is denoted by superscript *n*, and the future time, *t*^{n+1}, is denoted by *n*+1. The time increment is Δ*t* = *t*^{n+1} - *t*^{n}. Similarly, the space increment is Δ*x* = *x*_{i+1} - *x*_{i}. The concentration at time *t*^{n+1} and spatial location *x*_{i} is denoted by *C*_{i}^{n+1}.

The future concentration distribution is found from the current concentration distribution by rearranging **Eq. 9**. We collect terms in *C*^{n+1} on the left-hand side and terms in *C*^{n} on the right-hand side, thus

**Eq. 10** is now written in the form

where the coefficients are

All values of the variables in the coefficients are known at time *t*^{n}. If we assume that the spatial subscript is in the range 1 ≤ *i* ≤ *NX*, the system of finite-difference equations becomes

**Eq. 13** can be written in matrix form as

where is the *NX* × *NX* matrix of coefficients, is the column vector of unknown concentrations at time *t*^{n+1}, and is the column vector of right-hand-side terms that depend on known concentrations at time *t*^{n}. Both column vectors have *NX* elements.

The system of equations in **Eq. 14** is called a tridiagonal system because it consists of three lines of nonzero diagonal elements centered about the main diagonal. All other elements are zero. Techniques for solving tridiagonal systems of equations with the Thomas algorithm are presented in several sources.^{[1]}^{[2]}^{[3]}^{[4]}^{[8]} A solution of the set of equations for physical parameters *v* = 1 ft/day and *D* = 0.01 ft^{2}/day and finite-difference parameters Δ*x* = 0.1 ft and Δ*t* = 0.1 day is shown in **Fig. 2**. The difference between the analytical solution and the numerical solution is caused by numerical dispersion,^{[1]}^{[9]}^{[10]} which is beyond the scope of this chapter. What interests us here is the appearance of matrices in the mathematics of fluid flow. Matrices are the subject of the next section.
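A minimal Python sketch of this procedure is shown below. It solves the tridiagonal system at each time step with the Thomas algorithm, using the parameters quoted above (*v* = 1 ft/day, *D* = 0.01 ft^{2}/day, Δ*x* = 0.1 ft, Δ*t* = 0.1 day). For simplicity, the sketch treats both the convection and dispersion terms fully implicitly (backward in time) rather than centering the dispersion term in time, and it assumes Dirichlet boundary conditions (injected concentration of 1 at the inlet, 0 at the far end) on a 5-ft grid; the time-differencing simplification, grid length, and boundary conditions are illustrative assumptions.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a, b, c are the sub-, main, and
    super-diagonals, and d is the right-hand-side vector."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):          # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Physical and finite-difference parameters from the text
v, D = 1.0, 0.01            # ft/day, ft^2/day
dx, dt, nx = 0.1, 0.1, 51   # ft, day, grid points (5-ft grid is assumed)

conv = v * dt / (2.0 * dx)  # centered-in-space convection coefficient
disp = D * dt / dx ** 2     # dispersion coefficient

C = [0.0] * nx
C[0] = 1.0                  # assumed inlet boundary condition

for step in range(20):      # advance the solution to t = 2 days
    a = [0.0] + [-conv - disp] * (nx - 2) + [0.0]
    b = [1.0] + [1.0 + 2.0 * disp] * (nx - 2) + [1.0]
    c = [0.0] + [conv - disp] * (nx - 2) + [0.0]
    d = [1.0] + C[1:-1] + [0.0]  # boundary rows fix C; interior rows carry C^n
    C = thomas(a, b, c, d)
```

After 20 time steps the concentration front sits near *x* = *vt* = 2 ft, smeared by physical and numerical dispersion, which mirrors the qualitative behavior shown in **Fig. 2**.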

## Matrices and linear algebra

An example of a matrix was introduced earlier for the 1D C/D equation. It is often easier to work with many fluid flow equations when they are expressed in terms of matrices. Our review follows the presentation in Fanchi.^{[9]} We begin our discussion with an example of a matrix that is used later in this chapter, namely the matrix associated with the rotation of a coordinate system. We then summarize some important properties of matrices and determinants and review the concepts of eigenvalues and eigenvectors from linear algebra.

### Rotation of a Cartesian coordinate system

**Fig. 3** illustrates a rotation of Cartesian coordinates from one set of orthogonal coordinates {*x*_{1}, *x*_{2}} to another set {*y*_{1}, *y*_{2}} by the angle *θ*. The equations relating the coordinate systems are

The set of equations in **Eq. 15** has the matrix form

which can be written as

where the coordinate sets are written as column vectors with two elements each, and the rotation matrix is the 2 × 2 square matrix,
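A small numerical sketch makes the rotation concrete. The Python code below builds the 2 × 2 rotation matrix (assuming the common convention *y*_{1} = *x*_{1} cos *θ* + *x*_{2} sin *θ*, *y*_{2} = -*x*_{1} sin *θ* + *x*_{2} cos *θ*) and checks that rotating a vector preserves its length:

```python
import math

def rotation_matrix(theta):
    """2x2 rotation matrix mapping {x1, x2} coordinates to {y1, y2},
    assuming y1 = x1*cos(theta) + x2*sin(theta),
             y2 = -x1*sin(theta) + x2*cos(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [-s, c]]

def matvec(A, x):
    """Product of a matrix A with a column vector x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

R = rotation_matrix(math.pi / 6)  # rotate the axes by 30 degrees
y = matvec(R, [1.0, 0.0])
# R is orthogonal: its determinant is 1 and vector lengths are preserved
length = math.hypot(y[0], y[1])
```

The determinant *R*_{11}*R*_{22} - *R*_{12}*R*_{21} = cos^{2}*θ* + sin^{2}*θ* = 1, which is characteristic of a pure rotation.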

### Properties of matrices

In general, a matrix with *m* rows and *n* columns has the order *m* × *n* and is referred to as an *m* × *n* matrix. The entry in the *i*^{th} row and *j*^{th} column of the matrix is the *ij*^{th} element of the matrix. If the number of rows equals the number of columns so that *m* = *n*, the matrix is called a square matrix. On the other hand, if *m* ≠ *n*, the matrix is a rectangular matrix.

If the matrix has a single column so that *n* = 1, it is a column vector as in **Eq. 18**. If the matrix has a single row so that *m* = 1, it is a row vector. A row vector can be created from a column vector by taking the transpose of the column vector. For example, the transpose of the column vector in **Eq. 18** may be written as

where the superscript *T* denotes the transpose of the matrix. In general, we can write an *m* × *n* matrix with a set of elements {*a*_{ij}: *i* = 1, 2, ... *m*; *j* = 1, 2, ... *n*} as

The conjugate transpose of matrix is obtained by finding the complex conjugate of each element in and then taking the transpose of the matrix . This operation can be written as

where * denotes complex conjugation. Recall that the conjugate *z** of a complex number *z* is obtained by replacing the imaginary number *i* with -*i* wherever it occurs. If all the elements of the matrix are real, its conjugate transpose is equal to its transpose.

If the matrix is a square matrix and its elements satisfy the equality *a*_{ij} = *a*_{ji}, the matrix is called a symmetric matrix. A square matrix is Hermitian, or self-adjoint, if it equals its conjugate transpose.

The set of elements {*a*_{ii}} of a square matrix is the principal diagonal of the matrix. The elements {*a*_{ij}} with *i* ≠ *j* are off-diagonal elements. The matrix is a lower triangular matrix if *a*_{ij} = 0 for *i* < *j*, and an upper triangular matrix if *a*_{ij} = 0 for *i* > *j*. The matrix is a diagonal matrix if *a*_{ij} = 0 for *i* ≠ *j*.

### Matrix operations

Suppose the matrices , , and with elements {*a*_{ij}}, {*b*_{ij}}, and {*c*_{ij}} have the same order *m* × *n*. We are using double underlines to denote matrices. Other notations are often used, such as boldface. The addition or subtraction of two matrices may be written as

The product of a matrix with a number *k* may be written as

The product of matrix with order *m* × *n* and matrix with order *n* × *p* is

where matrix has order *m* × *p*. Notice that matrix multiplication is possible only if the number of columns in equals the number of rows in . This requirement is always satisfied for square matrices.

The transpose of the product of two square matrices and is

and the adjoint of the product of two square matrices is

Notice that the product of two matrices is generally not commutative; reversing the order of multiplication usually changes the result.
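A short example demonstrates the noncommutativity. In the Python sketch below, multiplying an illustrative matrix on the right by a permutation matrix swaps its columns, while multiplying on the left swaps its rows, so the two products differ:

```python
def matmul(A, B):
    """Product of an m x n matrix A and an n x p matrix B."""
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]   # permutation matrix
AB = matmul(A, B)      # swaps the columns of A
BA = matmul(B, A)      # swaps the rows of A
```

Here *AB* = [[2, 1], [4, 3]] while *BA* = [[3, 4], [1, 2]], so the order of multiplication matters.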

The identity matrix, , is a square matrix with all off-diagonal elements equaling zero and all diagonal elements equaling one. The identity matrix preserves the identity of a square matrix in matrix multiplication, thus

By contrast, a null matrix is a matrix in which all elements are zero. In this case, the product of the null matrix with a matrix is

The matrix, , is singular if the product of matrix with a column vector that has at least one nonzero element yields the null matrix; that is, is singular if

The concepts of identity matrix and matrix singularity are needed to define the inverse matrix. Suppose we have two square matrices and that satisfy the product

Notice that the matrices and commute. The matrix is nonsingular, and the matrix is the inverse of , thus = ^{-1}, where ^{-1} denotes the inverse of . **Eq. 32** can be written in terms of the inverse as

The inverse matrix is useful for solving systems of equations. For example, suppose we have a system of equations that satisfies

where the column vector and the matrix are known, and the column vector contains a set of unknowns. **Eq. 13** is an example for the 1D C/D equation. We can solve for in **Eq. 34** by premultiplying **Eq. 34** by ^{-1}. The result is

Of course, we have to know ^{-1} to find . This leads us to a discussion of determinants.
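For a 2 × 2 matrix, the inverse can be written down directly from the determinant, which makes the solution procedure easy to illustrate. The Python sketch below inverts an illustrative 2 × 2 matrix (not one taken from the text) and uses the inverse to solve a small system of equations:

```python
def inverse_2x2(A):
    """Inverse of a 2x2 matrix via its determinant; a zero determinant
    means the matrix is singular and no inverse exists."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: no inverse exists")
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, x):
    """Product of a matrix A with a column vector x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[2.0, 1.0], [1.0, 3.0]]
u = [5.0, 10.0]
x = matvec(inverse_2x2(A), u)  # premultiply u by the inverse to solve A x = u
```

Substituting the result back in confirms the solution: 2(1) + 1(3) = 5 and 1(1) + 3(3) = 10.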

### Determinants, eigenvalues, and eigenvectors

The determinant (det) of a square matrix is denoted by det or | |. Two examples of determinants are the determinants of a 2×2 matrix and a 3×3 matrix. The determinant of a 2×2 matrix is

and the determinant of a 3×3 matrix is

Determinants are useful for determining whether an inverse matrix ^{-1} exists. Inverse matrices are needed to solve finite-difference equations representing fluid flow. A nonzero determinant indicates that the inverse matrix exists, even though we may not know the elements of the inverse matrix. Determinants are also useful for determining eigenvalues and eigenvectors.

Eigenvalues and eigenvectors are useful for understanding the behavior of physical quantities that may be represented by a matrix. An example in fluid flow is permeability, which we discuss in more detail later in this chapter. First, we need to define the concepts of eigenvalue and eigenvector.

Eigenvalues are the values of *λ* in the eigenvalue equation

where is an *n* × *n* square matrix and is a column vector with *n* rows. The eigenvalue equation may be written as

where is the *n* × *n* identity matrix. **Eq. 39** has nonzero solutions, , if the eigenvalue, *λ*, is a characteristic root of , that is, *λ* must be a solution of

**Eq. 40** is the characteristic equation, and the *n* values of *λ* are its characteristic roots. The characteristic roots are obtained by expanding the determinant in **Eq. 40** into an *n*^{th}-degree polynomial and then solving for the *n* values of *λ*. These concepts are illustrated in the next section.
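For a 2 × 2 matrix, the determinant in the characteristic equation expands to the quadratic *λ*^{2} - (trace)*λ* + (determinant) = 0, so the characteristic roots can be computed in closed form. The Python sketch below does this for an illustrative symmetric matrix:

```python
import math

def eigenvalues_2x2(A):
    """Roots of the characteristic equation det(A - lam*I) = 0 for a
    2x2 matrix, i.e. lam**2 - trace(A)*lam + det(A) = 0."""
    (a, b), (c, d) = A
    tr = a + d
    det = a * d - b * c
    disc = tr * tr - 4.0 * det       # discriminant of the quadratic
    if disc >= 0:
        r = math.sqrt(disc)
        return ((tr + r) / 2.0, (tr - r) / 2.0)
    r = math.sqrt(-disc)             # complex-conjugate pair of roots
    return (complex(tr / 2.0, r / 2.0), complex(tr / 2.0, -r / 2.0))

A = [[2.0, 1.0], [1.0, 2.0]]
lams = eigenvalues_2x2(A)  # trace 4, determinant 3: roots 3 and 1
```

For the eigenvalue 3 of this matrix, the column vector (1, 1) satisfies the eigenvalue equation, because multiplying it by the matrix gives (3, 3).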

## Nomenclature

## References

1. Peaceman, D.W. 1977. Fundamentals of Numerical Reservoir Simulation. Oxford, UK: Elsevier Publishing.
2. Aziz, K. and Settari, A. 1979. Petroleum Reservoir Simulation. Essex, UK: Elsevier Applied Science Publishers.
3. Mattax, C.C. and Dalton, R.L. 1990. Reservoir Simulation, Vol. 13. Richardson, Texas: Monograph Series, SPE.
4. Ertekin, T., Abou-Kassem, J.H., and King, G.R. 2001. Basic Applied Reservoir Simulation, Vol. 7. Richardson, Texas: Textbook Series, SPE.
5. Munka, M. and Pápay, J. 2001. 4D Numerical Modeling of Petroleum Reservoir Recovery. Budapest, Hungary: Akadémiai Kiadó.
6. Fanchi, J.R. 2006. Principles of Applied Reservoir Simulation, third edition. Burlington, Massachusetts: Gulf Professional Publishing/Elsevier.
7. Fanchi, J.R. 2000. Integrated Flow Modeling, No. 49. Amsterdam, The Netherlands: Developments in Petroleum Science, Elsevier Science B.V.
8. Chapra, S.C. and Canale, R.P. 2002. Numerical Methods for Engineers, fourth edition. Boston, Massachusetts: McGraw-Hill Book Co.
9. Fanchi, J.R. 2006. Math Refresher for Scientists and Engineers, third edition. New York: Wiley Interscience.
10. Lantz, R.B. 1971. Quantitative Evaluation of Numerical Diffusion (Truncation Error). SPE J. 11 (3): 315–320. SPE-2811-PA. http://dx.doi.org/10.2118/2811-PA
