Published in: Journal of Scientific Computing 1/2017

Open Access 06.10.2016

A Roadmap to Well Posed and Stable Problems in Computational Physics

Author: Jan Nordström


Abstract

All numerical calculations will fail to provide a reliable answer unless the continuous problem under consideration is well posed. Well-posedness depends in most cases only on the choice of boundary conditions. In this paper we highlight this fact, and exemplify it by discussing well-posedness of a prototype problem: the time-dependent compressible Navier–Stokes equations. We do not deal with discontinuous problems; only smooth solutions with smooth and compatible data are considered. In particular, we discuss how many boundary conditions are required, where to impose them and which form they should have in order to obtain a well posed problem. Once the boundary conditions are known, one issue remains: they can be imposed weakly or strongly. It is shown that the weak and strong boundary procedures produce similar continuous energy estimates. We conclude by relating the well-posedness results to energy stability of a numerical approximation on summation-by-parts (SBP) form. It is shown that the results obtained for weak boundary conditions in the well-posedness analysis lead directly to corresponding stability results for the discrete problem, if schemes on SBP form with weak boundary conditions are used. The analysis in this paper is general and can without difficulty be extended to any coupled system of partial differential equations posed as an initial boundary value problem, coupled with a numerical method on SBP form with weak boundary conditions. Our ambition in this paper is to give a general roadmap for how to construct a well posed continuous problem and a stable numerical approximation, not to give exact answers to specific problems.
Notes
A first version of this paper was given as AIAA paper No. 2015-3197 on 22–26 June 2015 at the 22nd AIAA Computational Fluid Dynamics Conference in Dallas, TX, USA.

1 Introduction

Initial boundary value problems are essential components for analysis in many areas of computational mechanics and physics. The examples that we have in mind in this paper include: the compressible and incompressible Navier–Stokes and Euler equations, the elastic wave equations, the Dirac equations, the Schrödinger equation, the heat equation, the advection–diffusion equation etc. In these examples, the equations themselves are given, and well posed for smooth Cauchy or smooth periodic problems. However, for initial boundary value problems, boundary conditions are needed, and they can be poorly chosen, leading to ill-posed problems.
To obtain a well posed initial boundary value problem, one needs to know: (i) how many boundary conditions are required, (ii) where to impose them and (iii) which form they should have. There are essentially two different methods available, namely the energy method and the Laplace transform method [1, 2]. The number of boundary conditions and where to place them can be determined using the Laplace transform method [3, 4]. However, the exact form of the boundary conditions cannot be obtained; information regarding that must come from other sources. The energy method, on the other hand, provides information on all the items (i–iii).
Throughout this paper we assume that we have unlimited access to accurate boundary data. We do not consider non-reflecting or absorbing boundary conditions [5, 6] even though we expect that the derived boundary conditions will perform reasonably well, provided that the corresponding data is known. As our prototype problem we consider the linearized time-dependent compressible Navier–Stokes equations. Due to its complicated incompletely parabolic character, it serves as a good example of how a roadmap to a well-posed and stable problem can be constructed.
It has been shown previously that a weak imposition of well-posed boundary conditions for finite difference [7, 8], finite volume [9, 10], spectral element [11, 12], discontinuous Galerkin [13, 14] and flux reconstruction schemes [15, 16] on summation-by-parts (SBP) form can lead to energy stability. We will show that the continuous analysis of well posed boundary conditions implemented with weak boundary procedures, together with schemes on SBP form, automatically leads to stability; only a minimal additional analysis of the semi-discrete problem is necessary. The analysis in this paper is general and can without difficulty be extended to any coupled system of partial differential equations posed as an initial boundary value problem, coupled with a numerical method on SBP form with weak boundary conditions.
Note that this paper is not about deriving well posed boundary conditions for the time-dependent compressible Navier–Stokes equations; that is essentially covered in [1]. Instead we aim for a complete description leading to a well posed problem and a stable approximation, i.e. a roadmap for the whole computational chain. In order not to get entangled in technical and yet unresolved theoretical difficulties, we do not deal with discontinuous problems, such as those in [12, 17, 18]; only sufficiently smooth solutions with related smooth and compatible data are considered.

2 The Governing Equations for the Prototype Problem

As our prototype problem, we consider the linearized frozen coefficient compressible Navier–Stokes equations in non-dimensional form
$$\begin{aligned} V_t+{\tilde{A}}V_x+{\tilde{B}}V_y+{\tilde{C}}V_z= {\tilde{F}}_x+{\tilde{G}}_y+{\tilde{H}}_z \end{aligned}$$
(2.1)
where
$$\begin{aligned} \begin{aligned} {\tilde{F}}&={\tilde{D}}_{11}{V}_{x}+{\tilde{D}}_{12}{V}_{y}+{\tilde{D}}_{13}{V}_{z}, \\ {\tilde{G}}&={\tilde{D}}_{21}{V}_{x}+{\tilde{D}}_{22}{V}_{y}+{\tilde{D}}_{23}{V}_{z}, \\ {\tilde{H}}&={\tilde{D}}_{31}{V}_{x}+{\tilde{D}}_{32}{V}_{y}+{\tilde{D}}_{33}{V}_{z}. \end{aligned} \end{aligned}$$
(2.2)
The subscripts t, x, y, z denote partial differentiation with respect to time and space. In (2.1), \(V=(\rho , u, v, w, T)^T\) is the perturbation from the constant state (denoted by an overbar) around which we linearize.
The dependent variables are the density \(\rho \), the velocity components u, v, w in the x, y, z directions and the temperature T. The equations are written in non-dimensional form using the free stream density \(\rho _\infty \), the free stream velocity \(U_\infty \) and the free stream temperature \(T_\infty \). The shear and second viscosity coefficients \(\mu , \lambda \) as well as the coefficient of heat conduction \(\kappa \) are non-dimensionalized with the free stream viscosity \(\mu _\infty \). The pressure in non-dimensional form becomes \(p/(\rho _\infty U^2_\infty ) = \rho T/(\gamma M^2_\infty )\). Also used later on are
$$\begin{aligned} M^2_\infty = \frac{U^{2}_\infty }{\gamma R T_\infty } ,\ \ \ Pr=\frac{\mu _\infty C_p}{\kappa _\infty } ,\ \ \ Re=\frac{\rho _\infty U_\infty L}{\mu _\infty }, \ \ \ \gamma =\frac{C_p}{C_v}, \ \ \ \varphi = \frac{\gamma \kappa }{Pr} \end{aligned}$$
(2.3)
where L is a length scale and \(M_\infty \), Pr, \(Re=1/\epsilon \) and \(\gamma \) are the Mach, Prandtl and Reynolds numbers and the ratio of specific heats, respectively. The time scale is \(L/{U}_\infty \).
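As a quick sanity check of (2.3), the non-dimensional groups can be evaluated numerically. The free stream values below (roughly sea-level air) and the non-dimensional \(\kappa \) are illustrative assumptions, not values taken from the paper.

```python
import math

def nondim_numbers(rho_inf, U_inf, T_inf, mu_inf, kappa_inf, L, Cp, Cv, R, kappa):
    """Evaluate the non-dimensional groups defined in (2.3)."""
    gamma = Cp / Cv
    M2 = U_inf**2 / (gamma * R * T_inf)   # squared free stream Mach number
    Pr = mu_inf * Cp / kappa_inf          # Prandtl number
    Re = rho_inf * U_inf * L / mu_inf     # Reynolds number (= 1/epsilon)
    phi = gamma * kappa / Pr              # varphi in (2.3)
    return M2, Pr, Re, gamma, phi

# Illustrative sea-level air values (assumptions, not from the paper)
M2, Pr, Re, gamma, phi = nondim_numbers(
    rho_inf=1.225, U_inf=100.0, T_inf=288.0, mu_inf=1.8e-5,
    kappa_inf=2.57e-2, L=1.0, Cp=1004.5, Cv=717.5, R=287.0, kappa=1.0)

assert math.isclose(gamma, 1.4)   # ratio of specific heats for air
assert 0.6 < Pr < 0.8             # Prandtl number of air is about 0.7
```

For this subsonic, high Reynolds number example, \(M^2_\infty < 1\) and \(\epsilon = 1/Re\) is small, which is the regime discussed in Remark 4.3 below.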
The first components of the viscous fluxes \({\tilde{F}},{\tilde{G}}\) and \({\tilde{H}}\) are zero, since there are no second derivatives in the continuity equation. This renders the system (2.1) incompletely parabolic [4], and suitable as a prototype problem. All matrices \({\tilde{D}}_{ij}\) are proportional to \(\epsilon \), and hence we can easily include hyperbolic problems by letting \(\epsilon =0\) in the same type of analysis.
Remark 2.1
Including the incompressible Navier–Stokes and Euler equations directly in the analysis would require that we multiply the time-derivative in (2.1) with a singular mass matrix. For clarity and simplicity, we refrain from that complication.

3 Preliminaries

For ease of reading, we start with a brief outline of the main content.

3.1 The Roadmap

The step-by-step procedure leading to a well posed problem and a stable approximation involves the following steps.
1. Symmetrization (Sect. 3.2): Unless an appropriate energy is known a priori (typically based on physical reasoning, see for example [19]), the energy method requires symmetric matrices, such that integration by parts can be performed.
2. The Continuous Energy Method (Sect. 4.1): By multiplying with the solution and integrating over the domain, the energy rate consisting of a dissipative volume term and an indefinite quadratic boundary term is derived.
3. The Number of Boundary Conditions (Sect. 4.2): The quadratic boundary term is rotated into diagonal form and divided up into positive and negative parts corresponding to the sign of the eigenvalues. The number of boundary conditions is equal to the number of negative eigenvalues in the quadratic form.
4. The Form of the Boundary Conditions (Sect. 4.3): The characteristic variables that correspond to the negative eigenvalues are specified in terms of the corresponding positive ones together with boundary data.
5. The Weak Implementation (Sect. 4.4): The boundary conditions are imposed weakly using a penalty formulation, which is determined such that the boundary term becomes negative semi-definite for zero boundary data.
6. The Discrete Approximation (Sect. 5.1): The continuous problem is discretized using SBP operators and weak boundary conditions with the same (except for obvious modifications) penalty matrices that were found in the continuous problem.
7. The Discrete Energy Method (Sect. 5.2): Finally, stability is shown by applying the discrete energy method. The final form of the penalty terms is decided, and it is shown that the discrete energy rate mimics the continuous one.

3.2 Symmetrization

The matrices related to the hyperbolic terms in (2.1) must be symmetric for the energy method to be applicable [1]. We choose the symmetrizer
$$\begin{aligned} S^{-1}=\text{ diag }\left[ \frac{{\bar{c}}^2}{\sqrt{\gamma }}, {\bar{\rho }} {\bar{c}}, {\bar{\rho }} {\bar{c}}, {\bar{\rho }} {\bar{c}}, \frac{{\bar{\rho }}}{\sqrt{\gamma (\gamma -1)M^4_\infty }}\right] , \end{aligned}$$
(3.1)
where \(\bar{c}\) is the speed of sound at the constant state.
Remark 3.1
The three-dimensional compressible Navier–Stokes equations, with 12 matrices involved [see (2.1) and (2.2)], can be symmetrized by a single matrix [20]. This remarkable fact was complemented by the observation that there exist at least two different symmetrizers, based on either the hyperbolic or the parabolic terms. The symmetrizer (3.1) is related to the parabolic terms.
After symmetrizing (2.1) by multiplying it from the left with \(S^{-1}\), we obtain
$$\begin{aligned} U_t + {\bar{A}} U_x + {\bar{B}}U_y + {\bar{C}}U_z = {\bar{F}}_x + {\bar{G}}_y + {\bar{H}}_z \end{aligned}$$
(3.2)
where
$$\begin{aligned} {\bar{A}}= & {} \left( \begin{array}{ccccc} {\bar{u}}&{}\frac{{\bar{c}}}{\sqrt{\gamma }}&{}0&{}0&{}0\\ \frac{{\bar{c}}}{\sqrt{\gamma }}&{}{\bar{u}}&{}0&{}0&{}{\bar{c}}\sqrt{\frac{\gamma -1 }{\gamma }}\\ 0&{}0&{}{\bar{u}}&{}0&{}0\\ 0&{}0&{}0&{}{\bar{u}}&{}0\\ 0&{}{\bar{c}}\sqrt{\frac{\gamma -1 }{\gamma }}&{}0&{}0&{}{\bar{u}} \end{array}\right) \ {\bar{D}}_{11}=\frac{\epsilon }{{\bar{\rho }}}\left( \begin{array}{ccccc} 0&{}0&{}0&{}0&{}0\\ 0&{}2\bar{\mu }+\bar{\lambda }&{}0&{}0&{}0\\ 0&{}0&{}\bar{\mu }&{}0&{}0\\ 0&{}0&{}0&{}\bar{\mu }&{}0\\ 0&{}0&{}0&{}0&{}\bar{\varphi } \end{array}\right) \end{aligned}$$
(3.3)
$$\begin{aligned} {\bar{B}}= & {} \left( \begin{array}{ccccc} {\bar{v}}&{}0&{}\frac{{\bar{c}}}{\sqrt{\gamma }}&{}0&{}0\\ 0&{}{\bar{v}}&{}0&{}0&{}0\\ \frac{{\bar{c}}}{\sqrt{\gamma }}&{}0&{}{\bar{v}}&{}0&{}{\bar{c}}\sqrt{\frac{\gamma -1 }{\gamma }}\\ 0&{}0&{}0&{}{\bar{v}}&{}0\\ 0&{}0&{}{\bar{c}}\sqrt{\frac{\gamma -1 }{\gamma }}&{}0&{}{\bar{v}} \end{array}\right) \ {\bar{D}}_{22}=\frac{\epsilon }{{\bar{\rho }}}\left( \begin{array}{ccccc} 0&{}0&{}0&{}0&{}0\\ 0&{}\bar{\mu }&{}0&{}0&{}0\\ 0&{}0&{}2\bar{\mu }+\bar{\lambda }&{}0&{}0\\ 0&{}0&{}0&{}\bar{\mu }&{}0\\ 0&{}0&{}0&{}0&{}\bar{\varphi } \end{array}\right) \end{aligned}$$
(3.4)
$$\begin{aligned} {\bar{C}}= & {} \left( \begin{array}{ccccc} {\bar{w}}&{}0&{}0&{}\frac{{\bar{c}}}{\sqrt{\gamma }}&{}0\\ 0&{}{\bar{w}}&{}0&{}0&{}0\\ 0&{}0&{}{\bar{w}}&{}0&{}0\\ \frac{{\bar{c}}}{\sqrt{\gamma }}&{}0&{}0&{}{\bar{w}}&{}{\bar{c}}\sqrt{\frac{\gamma -1 }{\gamma }}\\ 0&{}0&{}0&{}{\bar{c}}\sqrt{\frac{\gamma -1 }{\gamma }}&{}{\bar{w}} \end{array}\right) \ {\bar{D}}_{33}=\frac{\epsilon }{{\bar{\rho }}}\left( \begin{array}{ccccc} 0&{}0&{}0&{}0&{}0\\ 0&{}\bar{\mu }&{}0&{}0&{}0\\ 0&{}0&{}\bar{\mu }&{}0&{}0\\ 0&{}0&{}0&{}2\bar{\mu }+\bar{\lambda }&{}0\\ 0&{}0&{}0&{}0&{}\bar{\varphi } \end{array}\right) \end{aligned}$$
(3.5)
$$\begin{aligned} {\bar{D}}_{12}= & {} {\bar{D}}^T_{21}=\frac{\epsilon }{{\bar{\rho }}}\left( \begin{array}{ccccc} 0&{}0&{}0&{}0&{}0\\ 0&{}0&{}\bar{\lambda }&{}0&{}0\\ 0&{}\bar{\mu }&{}0&{}0&{}0\\ 0&{}0&{}0&{}0&{}0\\ 0&{}0&{}0&{}0&{}0 \end{array}\right) \ \ \ \ \ {\bar{D}}_{13}={\bar{D}}^T_{31}=\frac{\epsilon }{{\bar{\rho }}}\left( \begin{array}{ccccc} 0&{}0&{}0&{}0&{}0\\ 0&{}0&{}0&{}\bar{\lambda }&{}0\\ 0&{}0&{}0&{}0&{}0\\ 0&{}\bar{\mu }&{}0&{}0&{}0\\ 0&{}0&{}0&{}0&{}0 \end{array}\right) \end{aligned}$$
(3.6)
$$\begin{aligned} {\bar{D}}_{23}= & {} {\bar{D}}^T_{32}=\frac{\epsilon }{{\bar{\rho }}}\left( \begin{array}{ccccc} 0&{}0&{}0&{}0&{}0\\ 0&{}0&{}0&{}0&{}0\\ 0&{}0&{}0&{}\bar{\lambda }&{}0\\ 0&{}0&{}\bar{\mu }&{}0&{}0\\ 0&{}0&{}0&{}0&{}0 \end{array}\right) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {U}=\left( \begin{array}{c} {\bar{c}}^2\rho /\sqrt{\gamma } \\ {\bar{\rho }} {\bar{c}} u \\ {\bar{\rho }} {\bar{c}} v \\ {\bar{\rho }} {\bar{c}} w \\ {\bar{\rho }}T/\sqrt{\gamma (\gamma -1)M_\infty ^4} \end{array}\right) \end{aligned}$$
(3.7)
In (3.2)–(3.7), \(U=S^{-1}V\), \({\bar{A}}=S^{-1}{\tilde{A}}S\), \({\bar{B}}=S^{-1}{\tilde{B}}S\), \({\bar{C}}=S^{-1}{\tilde{C}}S\) and \({\bar{D}}_{ij}=S^{-1}{\tilde{D}}_{ij}S\).
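The effect of the symmetrizer can be verified numerically. The sketch below builds \({\bar{A}}\) from (3.3) for illustrative mean values \({\bar{u}}\) and \({\bar{c}}\) (assumed numbers), checks symmetry, and confirms that the spectrum is \(\{{\bar{u}}, {\bar{u}}\pm {\bar{c}}\}\), since \(({\bar{c}}/\sqrt{\gamma })^2 + {\bar{c}}^2(\gamma -1)/\gamma = {\bar{c}}^2\).

```python
import numpy as np

gamma = 1.4        # ratio of specific heats (assumed)
u, c = 0.5, 1.0    # mean velocity and sound speed (illustrative values)

a = c / np.sqrt(gamma)                  # coupling c / sqrt(gamma)
b = c * np.sqrt((gamma - 1.0) / gamma)  # coupling c * sqrt((gamma - 1) / gamma)

# The symmetrized matrix \bar{A} from (3.3)
A = np.array([[u, a, 0, 0, 0],
              [a, u, 0, 0, b],
              [0, 0, u, 0, 0],
              [0, 0, 0, u, 0],
              [0, b, 0, 0, u]])

assert np.allclose(A, A.T)  # symmetric, so integration by parts can be used

# a^2 + b^2 = c^2, so the eigenvalues are u (triple) and u +/- c
eigs = np.sort(np.linalg.eigvalsh(A))
assert np.allclose(eigs, [u - c, u, u, u, u + c])
```

The same check applies to \({\bar{B}}\) and \({\bar{C}}\), with \({\bar{v}}\) and \({\bar{w}}\) in place of \({\bar{u}}\).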

3.3 Well Posed Problems and Stability

In this section, we define the concepts needed in the rest of the paper, and most of the material can be found in [2, 21, 22]. Roughly speaking, an initial boundary value problem is well posed if a unique solution that depends continuously on the initial and boundary data exists. Consider the following general linear initial boundary value problem
$$\begin{aligned} W_t+\mathscr {P}W= & {} \mathbf {F},\quad \mathbf {x} \in {\varOmega }, \quad t\ge 0 \nonumber \\ \mathscr {L}W= & {} \mathbf {g}, \quad \mathbf {x} \in \partial {\varOmega }, \quad t\ge 0 \nonumber \\ W= & {} \mathbf {f},\quad \mathbf {x} \in {\varOmega }, \quad t= 0 \end{aligned}$$
(3.8)
where W is the solution, \(\mathscr {P}\) is the spatial differential operator and \(\mathscr {L}\) is the boundary operator. In this paper, \(\mathscr {P}\) and \(\mathscr {L}\) are linear operators, \(\mathbf {F}\) is a forcing function, and \(\mathbf {g}\) and \(\mathbf {f}\) are boundary and initial functions, respectively. \(\mathbf {F}\), \(\mathbf {g}\) and \(\mathbf {f}\) are the known data of the problem. In this paper we consider smooth and compatible data leading to sufficiently smooth solutions. The initial boundary value problem (3.8) is posed on the domain \( {\varOmega }\) with boundary \(\partial {\varOmega }\).
We introduce the scalar product and norm as
$$\begin{aligned} (U,V)_{{\varOmega }}= \displaystyle \int _{{\varOmega }} U^T H V \, dx \, dy \, dz, \quad \Vert U(\cdot ,t)\Vert ^2_{{\varOmega }}=(U,U)_{{\varOmega }}, \end{aligned}$$
(3.9)
for real valued vector functions UV and a positive definite symmetric matrix H.
Definition 3.1
Let \(\mathscr {V}\) be the space of differentiable functions satisfying the boundary conditions \(\mathscr {L}W= 0\) for \(\mathbf {x} \in \partial {\varOmega }\). The differential operator \(\mathscr {P}\) is semi-bounded if for all \(W \in \mathscr {V}\) the inequality
$$\begin{aligned} (W,\mathscr {P}W)_{{\varOmega }}\ge - \alpha \Vert W(\cdot ,t)\Vert ^2_{{\varOmega }} \end{aligned}$$
(3.10)
holds, where the constant \(\alpha \) is independent of W.
If a solution to (3.8) exists, semi-boundedness of \(\mathscr {P}\) leads directly to well-posedness. However, with too many boundary conditions, existence is not guaranteed. Consequently, a more restrictive definition is required.
Definition 3.2
The differential operator \(\mathscr {P}\) is maximally semi-bounded if it is semi-bounded in the function space \(\mathscr {V}\) but not semi-bounded in any space with fewer boundary conditions.
The energy method (which we will describe in detail in Sect. 4.1 below) and maximally semi-bounded operators lead directly to well-posed problems.
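As a minimal illustration of Definitions 3.1 and 3.2 (a textbook scalar example, not the prototype problem), consider \(W_t + W_x = 0\) on \(0\le x \le 1\), for which \(\mathscr {P}W=W_x\). With \(H=I\) in (3.9), integration by parts gives
$$\begin{aligned} (W,\mathscr {P}W)_{{\varOmega }}= \int _0^1 W W_x \, dx = \frac{1}{2}\left( W^2(1,t)-W^2(0,t)\right) . \end{aligned}$$
With the single boundary condition \(W(0,t)=0\) we obtain \((W,\mathscr {P}W)_{{\varOmega }}=W^2(1,t)/2\ge 0\), so (3.10) holds with \(\alpha =0\). Removing the condition destroys the bound, while an additional condition at \(x=1\) would prevent existence; hence \(\mathscr {P}\) is maximally semi-bounded with exactly one boundary condition.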
Definition 3.3
The initial boundary value problem (3.8) with \(\mathbf {F}=\mathbf {g}=0\) is well posed if for every \(\mathbf {f}\in C^{\infty }\) that vanishes in a neighborhood of \(\partial {\varOmega }\), a unique smooth solution exists that satisfies the estimate
$$\begin{aligned} \Vert W(\cdot ,t)\Vert ^2_{{\varOmega }}\le K_1^c e^{\alpha _c t} \Vert \mathbf {f}\Vert ^2_{{\varOmega }} \end{aligned}$$
(3.11)
where the constants \(K_1^c\) and \(\alpha _c\) are bounded independently of \(\mathbf {f}\).
For certain classes of problems with specific types of boundary conditions, the energy method in combination with maximally semi-bounded operators leads to even stronger estimates, and so-called strongly well-posed problems.
Definition 3.4
The initial boundary value problem (3.8) is strongly well posed, if it is well-posed and
$$\begin{aligned} \Vert W(\cdot ,t)\Vert ^2_{{\varOmega }}\le K_2^c(t)\left( \Vert \mathbf {f}\Vert ^2_{{\varOmega }}+ \int _0^t(\Vert \mathbf {F}(\cdot ,\tau )\Vert ^2_{{\varOmega }}+\Vert \mathbf {g}(\tau )\Vert ^2_{\partial {\varOmega }})d\tau \right) \end{aligned}$$
(3.12)
holds. The function \(K_2^c(t)\) is, for a limited time, bounded independently of \(\mathbf {f},\mathbf {F}\) and \(\mathbf {g}\).
Remark 3.2
Well-posedness of (3.8) requires that an appropriate number of boundary conditions (the number of linearly independent rows in \(\mathscr {L}\)) with the correct form of \(\mathscr {L}\) (the rows in \(\mathscr {L}\) have appropriate elements) is used. Too many boundary conditions mean that existence is not possible (the differential operator is not maximally semi-bounded), and too few that neither the estimates (3.11)–(3.12) nor uniqueness can be obtained.
Remark 3.3
Generally speaking, the linear theory for well-posedness is complete. The theory for smooth nonlinear problems can be extended by the linearization and localisation principles, see [23, 24] for details. The fully nonlinear theory, necessary for problems with discontinuities, is incomplete. Entropy estimates can be used to bound the solution, see for example [12, 17, 18], but neither uniqueness nor existence follows. In this paper we do not consider problems with discontinuities.
Closely related to well-posedness is the concept of stability. The semi-discrete version of (3.8) is
$$\begin{aligned} (W_j)_t+{\mathscr {Q}} W_j= & {} \mathbf {F}_j,\quad \mathbf {x}_j \in {\varOmega }, \quad t\ge 0 \nonumber \\ {\mathscr {M}} W_j= & {} \mathbf {g}_j, \quad \mathbf {x}_j \in \partial {\varOmega }, \quad t\ge 0 \nonumber \\ W_j= & {} \mathbf {f}_j,\quad \mathbf {x}_j \in {\varOmega }, \quad t= 0. \end{aligned}$$
(3.13)
The difference operator \(\mathscr {Q}\) approximates the differential operator \(\mathscr {P}\) and the discrete boundary operator \(\mathscr {M}\) approximates \(\mathscr {L}\). \(\mathbf {F}_j\), \(\mathbf {g}_j\) and \(\mathbf {f}_j\) are the known smooth compatible data of the problem (3.8) injected on the grid \( \mathbf {x}_j=(x_j,y_j,z_j)\). The difference approximation (3.13) is a consistent approximation of (3.8).
We now define semi-bounded discrete operators in analogy with differential operators. Let the volume element corresponding to the \(j\mathrm{th}\) node be \({\varDelta }{\varOmega }_j\). The discrete scalar product and norm are defined by
$$\begin{aligned} (U,V)_{{\varOmega }_h}= \displaystyle \sum _{j=1}^{j=N} U^T_j H_j V_j {\varDelta }{\varOmega }_j, \quad \Vert U(\cdot ,t)\Vert ^2_{{\varOmega }_h}=(U,U)_{{\varOmega }_h}, \end{aligned}$$
(3.14)
for real valued vector functions \(U_j,V_j\) and positive definite symmetric matrices \(H_j\).
Definition 3.5
Let \(\mathscr {V}_h\) be the space of grid vector functions satisfying the boundary conditions \(\mathscr {M} W= 0\) for \(\mathbf {x}_j \in \partial {\varOmega }\). The discrete operator \(\mathscr {Q}\) is semi-bounded if for all \(W \in \mathscr {V}_h\) the inequality
$$\begin{aligned} (W,\mathscr {Q}W)_{{\varOmega }_h }\ge - \alpha \Vert W(\cdot ,t)\Vert ^2_{{\varOmega }_h} \end{aligned}$$
(3.15)
holds, where the constant \(\alpha \) is independent of W and \(h=\min _{i \ne j}{\vert \mathbf {x}_j - \mathbf {x}_{i} \vert }\).
In the discrete case, unlike in the continuous one, the number of boundary conditions poses no problem for existence and uniqueness. The number of boundary conditions (including numerical ones) is simply equal to the number of linearly independent conditions in \(\mathscr {M} W_j=\mathbf {g}_j\) that are required for the semi-discrete system to have a unique solution. Different numerical boundary conditions can lead to different solutions on coarse grids. However, for sufficiently fine meshes and stable approximations, the numerical solution will converge to the unique continuous solution. Hence we need not restrict semi-boundedness to maximal semi-boundedness as was done for the continuous case above.
The discrete energy method (which we will describe in detail in Sect. 5.2 below) and semi-bounded operators lead directly to stability.
Definition 3.6
The semi-discrete approximation (3.13) with \(\mathbf {F}_j=\mathbf {g}_j=0\) is stable for every projection \(\mathbf {f}_j\) of \(\mathbf {f}\in C^{\infty }\) that vanishes in a neighborhood of \(\partial {\varOmega }\), if the solution \(W_j\) satisfies the estimate
$$\begin{aligned} \Vert W_j(t)\Vert ^2_{{\varOmega }_h}\le K_1^d e^{\alpha _d t} \Vert \mathbf {f}_j\Vert ^2_{{\varOmega }_h} \end{aligned}$$
(3.16)
where the constants \(K_1^d\) and \(\alpha _d\) are bounded independently of \(\mathbf {f}_j\) and \(h=\min _{i \ne j}{\vert \mathbf {x}_j - \mathbf {x}_{i} \vert }\).
As in the continuous case, for certain classes of problems with specific types of boundary conditions, the energy method in combination with semi-bounded operators can lead to even stronger estimates, and so-called strongly stable problems.
Definition 3.7
The semi-discrete approximation (3.13) is strongly stable, if it is stable and
$$\begin{aligned} \Vert W_j(t)\Vert ^2_{{\varOmega }_h}\le K_2^d(t)\left( \Vert \mathbf {f}_j\Vert ^2_{{\varOmega }_h}+ \int _0^t(\Vert \mathbf {F}_j(\cdot ,\tau )\Vert ^2_{{\varOmega }_h}+\Vert \mathbf {g}_j(\tau )\Vert ^2_{\partial {\varOmega }_h})d\tau \right) \end{aligned}$$
(3.17)
holds. The function \(K_2^d(t)\) is, for a limited time, bounded independently of \(\mathbf {f}_j,\mathbf {F}_j,\mathbf {g}_j\) and \(h=\min _{i \ne j}{\vert \mathbf {x}_j - \mathbf {x}_{i} \vert }\).
The definitions of well-posedness and stability above are strikingly similar. However, the bounds in the corresponding estimates need not be the same. The following definition connects the growth rates of the continuous and semi-discrete solutions.
Definition 3.8
Assume that (3.8) is well-posed with \(\alpha _c\) in (3.11) and that the semi-discrete approximation (3.13) is stable with \(\alpha _d\) in (3.16). If \(\alpha _d\le \alpha _c+\mathcal {O}(h)\) for \(h\le h_0\) we say that the approximation is strictly stable.
Remark 3.4
The norms in Definitions 3.3–3.7 can be quite general, but in this paper we use \(\Vert \phi \Vert ^2_{{\varOmega }}=\int _{{\varOmega }} \phi ^T \phi \, dxdydz \approx \phi ^T H \phi =\Vert \phi \Vert ^2_{{\varOmega }_h}\) and \(\Vert \phi \Vert ^2_{\partial {\varOmega }}=\oint _{\partial {\varOmega }} \phi ^T \phi \, ds \approx \phi ^T K \phi =\Vert \phi \Vert ^2_{\partial {\varOmega }_h}\). The matrices H and K define appropriate quadrature rules and \(\phi \) is a smooth function. More details on the definitions above are given in [2, 21].
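The discrete scalar product (3.14) and semi-boundedness (3.15) rest on operators with the SBP property. The sketch below uses the classical second-order accurate one-dimensional SBP pair (an illustrative choice, one of many) and verifies the discrete integration-by-parts rule exactly:

```python
import numpy as np

def sbp_2nd_order(N, h):
    """Classical second-order accurate SBP first-derivative pair (P, Q):
    D = P^{-1} Q approximates d/dx and Q + Q^T = diag(-1, 0, ..., 0, 1)."""
    P = h * np.eye(N + 1)
    P[0, 0] = P[-1, -1] = h / 2.0             # diagonal norm (quadrature) matrix
    Q = np.zeros((N + 1, N + 1))
    for i in range(N):
        Q[i, i + 1], Q[i + 1, i] = 0.5, -0.5  # central difference in the interior
    Q[0, 0], Q[-1, -1] = -0.5, 0.5            # boundary closures
    return P, np.linalg.solve(P, Q)

N = 20
h = 1.0 / N
P, D = sbp_2nd_order(N, h)

# D differentiates linear grid functions exactly
x = np.linspace(0.0, 1.0, N + 1)
assert np.allclose(D @ x, 1.0)

# Summation-by-parts: (u, Dv)_P + (Du, v)_P = u_N v_N - u_0 v_0 holds
# exactly for ANY grid functions, mimicking integration by parts
rng = np.random.default_rng(0)
u, v = rng.standard_normal(N + 1), rng.standard_normal(N + 1)
lhs = u @ P @ (D @ v) + (D @ u) @ P @ v
assert np.isclose(lhs, u[-1] * v[-1] - u[0] * v[0])
```

The matrix P plays the role of H in (3.14); it is this exact mimicking of integration by parts that lets the discrete energy method in Sect. 5.2 reuse the continuous analysis.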

4 The Continuous Problem

The initial boundary value problem we will consider in this paper is obtained by adding the boundary and initial conditions to (3.2)
$$\begin{aligned} \begin{array}{rclll} U_t + {\bar{A}} U_x + {\bar{B}}U_y + {\bar{C}}U_z &{} = &{} {\bar{F}}_x + {\bar{G}}_y + {\bar{H}}_z,&{} (x,y,z) \in {\varOmega }, &{} t \ge 0 \\ HU &{} = &{} g,&{} (x,y,z) \in \partial {\varOmega }, &{} t\ge 0 \\ U &{}=&{} f,&{} (x,y,z) \in {\varOmega },&{} t=0. \end{array} \end{aligned}$$
(4.1)
The solution and the matrices in (4.1) are given by (2.2)–(3.7). The data g and f are smooth compatible boundary and initial data respectively. The formulation (4.1) is used for strong imposition of boundary conditions.
When imposing the boundary conditions weakly, consider
$$\begin{aligned} \begin{array}{rclll} U_t + {\bar{A}} U_x + {\bar{B}}U_y + {\bar{C}}U_z &{} = &{} {\bar{F}}_x + {\bar{G}}_y + {\bar{H}}_z+ L({\varSigma }(HU-g)), &{} (x,y,z) \in {\varOmega }, &{} t \ge 0 \\ U &{}=&{} f,&{} (x,y,z) \in {\varOmega },&{} t=0 \end{array} \end{aligned}$$
(4.2)
which should be interpreted in a weak sense. In (4.2), L is a lifting operator [25, 26] defined by \(\int _{{\varOmega }} \phi ^T L(\psi ) \, dx \, dy \, dz=\oint _{\partial {{\varOmega }}} \phi ^T \psi ds\) for smooth vector functions \(\phi , \psi \) and \({\varSigma }\) is an appropriate penalty matrix. The lifting operator adds a boundary term that can be chosen in order to get an energy estimate.
The first task now is to determine the boundary operator H such that the problem (4.1) using strong boundary conditions is well posed.
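To see how the penalty term in (4.2) produces an energy bound, consider the scalar model problem \(u_t + a u_x = 0\), \(a>0\), with the boundary condition imposed weakly at \(x=0\). This is a one-dimensional sketch with an assumed second-order SBP discretization, not the Navier–Stokes system; the discrete analogue of the lifting term is \({\varSigma }P^{-1}e_0\).

```python
import numpy as np

N, a = 16, 1.0     # grid intervals and wave speed a > 0 (illustrative)
h = 1.0 / N

# Second-order SBP pair: D = P^{-1} Q with Q + Q^T = diag(-1, 0, ..., 0, 1)
P = h * np.eye(N + 1)
P[0, 0] = P[-1, -1] = h / 2.0
Q = 0.5 * (np.eye(N + 1, k=1) - np.eye(N + 1, k=-1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5

e0 = np.zeros(N + 1)
e0[0] = 1.0
sigma = -a         # penalty strength; any sigma <= -a/2 gives a bound here

# Weak boundary condition via a penalty (discrete lifting) term:
#   u_t = -a P^{-1} Q u + sigma P^{-1} e0 (e0^T u - g),  with data g = 0
M = np.linalg.solve(P, -a * Q + sigma * np.outer(e0, e0))

# Discrete energy method: d/dt ||u||_P^2 = u^T (P M + M^T P) u must be <= 0;
# here P M + M^T P = -a (Q + Q^T) + 2 sigma e0 e0^T = diag(-a, 0, ..., 0, -a)
S = P @ M + M.T @ P
assert np.all(np.linalg.eigvalsh(S) <= 1e-12)
```

The choice of \(\sigma \) mirrors the continuous task below: the penalty must make the boundary terms negative semi-definite for zero boundary data.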

4.1 The Energy Method

The energy method is applied to (4.1) by multiplying with \(U^T\) and integrating over the domain \({\varOmega }\). Gauss' theorem and integration by parts lead to
$$\begin{aligned} ||U||^2_t+2 DI_c = BT \end{aligned}$$
(4.3)
where
$$\begin{aligned} DI_c = \displaystyle \int _{{\varOmega }} \begin{bmatrix} U_x \\ U_y \\ U_z \end{bmatrix}^T \begin{bmatrix} {\bar{D}}_{11}&{\bar{D}}_{12}&{\bar{D}}_{13} \\ {\bar{D}}_{21}&{\bar{D}}_{22}&{\bar{D}}_{23} \\ {\bar{D}}_{31}&{\bar{D}}_{32}&{\bar{D}}_{33} \end{bmatrix} \begin{bmatrix} U_x \\ U_y \\ U_z \end{bmatrix} \, dx \, dy \, dz \end{aligned}$$
(4.4)
and
$$\begin{aligned} BT = - \oint _{\partial {{\varOmega }}} \left( U^T A U-2 U^T F\right) ds. \end{aligned}$$
(4.5)
In (4.5), \(ds=\sqrt{dx^2+dy^2+dz^2}\) is the surface element, \({\hat{n}} = (n_1,n_2,n_3)^T\) is the outward pointing unit normal on \(\partial {{\varOmega }}\), and
$$\begin{aligned} A=n_1{\bar{A}} + n_2{\bar{B}} + n_3{\bar{C}}, \quad F=n_1{\bar{F}} + n_2{\bar{G}} + n_3{\bar{H}}. \end{aligned}$$
(4.6)
Due to the incompletely parabolic character of the problem, we consider the following block structure of vectors and matrices in (4.5)
$$\begin{aligned} \begin{array}{rcl} U = \begin{bmatrix} U_1 \\ U_2 \end{bmatrix}, \quad F = \begin{bmatrix} 0 \\ F_2 \end{bmatrix}, \quad A = \begin{bmatrix} A_{11} &{} A_{12} \\ A_{12}^T &{} A_{22} \end{bmatrix}. \end{array} \end{aligned}$$
(4.7)
In (4.7), \(U_1\) is a scalar, \(U_2\) and \(F_2\) are four components long, \(A_{11}\) is a scalar, \(A_{12}\) is a \(1 \times 4\) matrix and \(A_{22}\) is a \(4 \times 4\) matrix. With these notations we can write the quadratic form in (4.5) as
$$\begin{aligned} U^T A U-2 U^T F= \begin{bmatrix} U_1 \\ U_2 \\ F_2 \end{bmatrix}^T \begin{bmatrix} A_{11}&A_{12}&0 \\ A_{12}^T&A_{22}&-I \\ 0&-I&0 \\ \end{bmatrix} \begin{bmatrix} U_1 \\ U_2 \\ F_2 \end{bmatrix}, \end{aligned}$$
(4.8)
where I is the \(4 \times 4\) identity matrix.
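The identity (4.8) is easy to verify numerically. In the sketch below the blocks are filled with random numbers of the correct shapes (illustrative data only, not the Navier–Stokes matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A11 = rng.standard_normal()          # scalar block A_11
A12 = rng.standard_normal((1, 4))    # 1 x 4 block A_12
B = rng.standard_normal((4, 4))
A22 = B + B.T                        # symmetric 4 x 4 block A_22

U1 = rng.standard_normal(1)          # scalar part of U
U2 = rng.standard_normal(4)          # remaining four components of U
F2 = rng.standard_normal(4)          # nonzero part of the viscous flux F

I4 = np.eye(4)
# Block matrix of the extended quadratic form (4.8)
M = np.block([[np.array([[A11]]), A12, np.zeros((1, 4))],
              [A12.T,             A22, -I4],
              [np.zeros((4, 1)),  -I4, np.zeros((4, 4))]])

w = np.concatenate([U1, U2, F2])
U = np.concatenate([U1, U2])
F = np.concatenate([[0.0], F2])      # first component of F is zero, see (4.7)
A = np.block([[np.array([[A11]]), A12], [A12.T, A22]])

# (4.8): the extended form in (U1, U2, F2) reproduces U^T A U - 2 U^T F
assert np.isclose(w @ M @ w, U @ A @ U - 2.0 * U @ F)
```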
It is straightforward [27] to show that the dissipation term (4.4) on the left-hand-side in (4.3) is positive semi-definite. Consequently, for maximal semi-boundedness and well-posedness it remains to bound BT on the right-hand-side with a minimal number of boundary conditions [2, 21]. One needs to know (i) how many boundary conditions are required, (ii) where on \(\partial {{\varOmega }}\) to impose them and (iii) which form they should have.

4.2 The Number and Position of the Boundary Conditions

By rotating the boundary matrix in (4.5) to block diagonal form [1] we obtain
$$\begin{aligned} BT&=-\oint _{\delta {\varOmega }} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix}^T T^T \begin{bmatrix} A_{11}&A_{12}&0 \\ A_{12}^T&A_{22}&-I \\ 0&-I&0 \\ \end{bmatrix} T \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} ds \nonumber \\&=-\oint _{\delta {\varOmega }} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix}^T \begin{bmatrix} A_{11}&0&0 \\ 0&{\tilde{A}}_{22}&0 \\ 0&0&-({\tilde{A}}_{22})^{-1} \\ \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} ds, \end{aligned}$$
(4.9)
where \({\tilde{A}}_{22} = A_{22} - A_{12}^T (A_{11})^{-1} A_{12}\),
$$\begin{aligned} T = \begin{bmatrix} I&T_{12}&T_{13}\\ 0&I&T_{23}\\ 0&0&I \\ \end{bmatrix} = \begin{bmatrix} I&-A_{11}^{-1}A_{12}&-A_{11}^{-1}A_{12} {\tilde{A}}_{22}^{-1} \\ 0&I&{\tilde{A}}_{22}^{-1} \\ 0&0&I \\ \end{bmatrix} \end{aligned}$$
(4.10)
and
$$\begin{aligned} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} = T^{-1} \begin{bmatrix} U_1 \\ U_2 \\ F_2 \end{bmatrix} = \begin{bmatrix} U_1 + (A_{11})^{-1} A_{12} U_2 \\ U_2 - ({\tilde{A}}_{22})^{-1} F_2 \\ F_2 \end{bmatrix}. \end{aligned}$$
(4.11)
Note that the rotation above requires that \(A_{11}\) is non-zero and \({\tilde{A}}_{22}\) is non-singular.
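That the congruence transformation (4.9)–(4.11) block-diagonalizes the boundary matrix can be checked numerically. The blocks below are generic random data, with \(A_{22}\) shifted so that \({\tilde{A}}_{22}\) stays non-singular, as the construction requires:

```python
import numpy as np

rng = np.random.default_rng(2)
A11 = 0.7                            # scalar block, non-zero as required
A12 = rng.standard_normal((1, 4))
B = rng.standard_normal((4, 4))
A22 = B + B.T + 6.0 * np.eye(4)      # shift keeps \tilde{A}_22 non-singular

At22 = A22 - A12.T @ A12 / A11       # Schur complement \tilde{A}_22
Ai = np.linalg.inv(At22)

I4 = np.eye(4)
Z14, Z41, Z44 = np.zeros((1, 4)), np.zeros((4, 1)), np.zeros((4, 4))
S11 = np.array([[A11]])

M = np.block([[S11,   A12, Z14],
              [A12.T, A22, -I4],
              [Z41,   -I4, Z44]])

# The rotation matrix T of (4.10)
T = np.block([[np.array([[1.0]]), -A12 / A11, -(A12 / A11) @ Ai],
              [Z41,               I4,          Ai],
              [Z41,               Z44,         I4]])

# T^T M T is block diagonal with blocks A11, \tilde{A}_22 and -\tilde{A}_22^{-1}
R = T.T @ M @ T
expected = np.block([[S11, Z14,  Z14],
                     [Z41, At22, Z44],
                     [Z41, Z44,  -Ai]])
assert np.allclose(R, expected)
```

The same two ingredients, the Schur complement and the congruence transformation, appear whenever a mixed hyperbolic-parabolic boundary term of the form (4.8) is diagonalized.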
Since \({\tilde{A}}_{22}={\tilde{A}}_{22}^T\) we can write \({\tilde{A}}_{22}=X {\varLambda }_{22} X^T\) where \({\varLambda }_{22} = diag({\varLambda }_{22}^+,{\varLambda }_{22}^-)\) and \(X=[X_+, X_-]\) contain the positive and negative eigenvalues and the corresponding eigenvectors respectively. By using this eigen-decomposition of \({\tilde{A}}_{22}\), we obtain
$$\begin{aligned} BT= -\oint _{\delta {\varOmega }} \begin{bmatrix} w_1 \\ X_+^T w_2 \\ X_-^T w_2 \\ X_+^T w_3 \\ X_-^T w_3 \end{bmatrix}^T \begin{bmatrix} A_{11}&0&0&0&0 \\ 0&{\varLambda }_{22}^+&0&0&0 \\ 0&0&{\varLambda }_{22}^-&0&0 \\ 0&0&0&-({\varLambda }_{22}^+)^{-1}&0 \\ 0&0&0&0&-({\varLambda }_{22}^-)^{-1} \end{bmatrix} \begin{bmatrix} w_1 \\ X_+^T w_2 \\ X_-^T w_2 \\ X_+^T w_3 \\ X_-^T w_3 \end{bmatrix} \, ds. \end{aligned}$$
(4.12)
We are now ready to answer the questions (i) and (ii) posed above.
For the compressible Navier–Stokes equations, it can be shown that the variables \(w_1, X_+^T w_2,X_-^T w_2,X_+^T w_3,X_-^T w_3\) are linearly independent. The number of boundary terms in the quadratic form (4.12) that can cause growth is hence equal to the number of negative entries in \(A_{11}\), \({\varLambda }_{22}^-\) and \(-({\varLambda }_{22}^+)^{-1}\), which in turn is equal to the minimal number of boundary conditions. The number of negative entries varies only with \(A_{11}\) along the boundary \(\partial {\varOmega }\), since the total number of negative entries in \({\varLambda }_{22}^-\) and \(-({\varLambda }_{22}^+)^{-1}\) is constant and equal to the number of eigenvalues of \({\tilde{A}}_{22}\).
In the general case, the situation is similar. The energy method leads to a boundary term of quadratic nature that cannot be limited by other terms in the energy rate. The minimal number of boundary conditions is given by the number of negative entries in the diagonalized boundary matrix, provided that the corresponding transformed variables are linearly independent. With a minimal number of boundary conditions used to bound the solution, a maximally semi-bounded operator and well-posedness are obtained [2, 21]. Consequently, the quadratic form must be reduced in such a way that the new transformed variables are linearly independent.
Remark 4.1
We use the energy method and a minimal number of boundary conditions to obtain maximally semi-bounded operators and well-posedness. The classical way to determine the number of boundary conditions is based on the Laplace transform method [24]. For an illustrative example of the relation between the Laplace transform and energy methods regarding the number of boundary conditions for the incompressible Navier–Stokes equations, see [28].
Remark 4.2
In the compressible Navier–Stokes equations, \(A_{11}= ({\bar{u}},{\bar{v}},{\bar{w}})\cdot {\hat{n}}= u_n\), where \(u_n\) is the outward pointing normal velocity on the boundary. Consequently, the compressible Navier–Stokes equations require five boundary conditions at an inflow boundary (\(u_n<0\)) and four at an outflow boundary (\(u_n>0\)). The fact that the number of boundary conditions is independent of the speed of the flow (whether it is subsonic or supersonic), and depends only on the flow direction relative to the outward pointing normal, is quite different from the situation for the Euler equations. See Fig. 1 for an illustration.
Remark 4.3
In the limit of vanishing viscosity, \(\epsilon \rightarrow 0\), formally \(w_3 = F_2 = \epsilon {\tilde{F}}_2 \rightarrow 0\) in (4.12), and we are left with the number of boundary conditions for the Euler equations [1]. However, \({\tilde{F}}_2\) contains gradients [see (2.2), (4.6)], and this limit is not known. An analysis of the scalar viscous advection equation indicates that in fact \(\epsilon {\tilde{F}}_2 \ne 0\) as \(\epsilon \rightarrow 0\). If this holds also for the Navier–Stokes equations, it means that the Euler equations are not the high Reynolds number approximation of the Navier–Stokes equations, as commonly perceived.

4.3 The Form of the Boundary Conditions

We proceed by splitting (4.12) into one positive and one negative part respectively
$$\begin{aligned} \begin{array}{rcl} BT = &{} - &{} \displaystyle \oint _{\delta {\varOmega }} \begin{bmatrix} \mathbf{1 }_+ (\gamma ^+) w_1 \\ X_+^T w_2 \\ X_-^T w_{3} \end{bmatrix}^T \begin{bmatrix} \gamma ^+ &{} 0 &{} 0 \\ 0 &{} {\varLambda }_{22}^+ &{} 0 \\ 0 &{} 0 &{} -({\varLambda }_{22}^-)^{-1} \end{bmatrix} \begin{bmatrix} \mathbf{1 }_+ (\gamma ^+) w_1 \\ X_+^T w_2 \\ X_-^T w_3 \end{bmatrix} \, ds \\ &{} - &{} \displaystyle \oint _{\delta {\varOmega }} \begin{bmatrix} \mathbf{1 }_- (\gamma ^-) w_1 \\ X_-^T w_2 \\ X_+^T w_3 \end{bmatrix}^T \begin{bmatrix} \gamma ^- &{} 0 &{} 0 \\ 0 &{} {\varLambda }_{22}^- &{} 0 \\ 0 &{} 0 &{} -({\varLambda }_{22}^+)^{-1} \end{bmatrix} \begin{bmatrix} \mathbf{1 }_- (\gamma ^-) w_1 \\ X_-^T w_2 \\ X_+^T w_3 \end{bmatrix} \, ds. \end{array} \end{aligned}$$
(4.13)
In (4.13), \(\mathbf{1 }_+ (x)\) and \(\mathbf{1 }_-(x)\) are indicator functions which are 1 if x is positive or negative respectively, and zero otherwise. We have also used \(\gamma ^+ = (A_{11} + |A_{11}|)/2\) and \(\gamma ^- = (A_{11} - |A_{11}|)/2\).
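As a small numerical illustration (ours, with arbitrary sample values), the splitting of \(A_{11}=u_n\) into \(\gamma ^{\pm }\) together with the indicator functions can be written as:

```python
import numpy as np

# Sample normal velocities u_n at a few boundary points (hypothetical values).
u_n = np.array([-2.0, 0.0, 3.0])

gamma_p = (u_n + np.abs(u_n)) / 2      # gamma^+ >= 0
gamma_m = (u_n - np.abs(u_n)) / 2      # gamma^- <= 0
ind_p = (gamma_p > 0).astype(float)    # indicator 1_+(gamma^+)
ind_m = (gamma_m < 0).astype(float)    # indicator 1_-(gamma^-)

assert np.allclose(gamma_p + gamma_m, u_n)            # the split is exact
assert np.all(gamma_p >= 0) and np.all(gamma_m <= 0)  # signs as claimed
```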
To simplify the notation we introduce
$$\begin{aligned} \begin{array}{rclrcl} W^+ &{} = &{} \begin{bmatrix} \mathbf{1 }_+ (\gamma ^+) w_1 \\ X_+^T w_2 \\ X_-^T w_3 \end{bmatrix}, &{} {\varLambda }^+ &{} = &{} \begin{bmatrix} \gamma ^+ &{} 0 &{} 0 \\ 0 &{} {\varLambda }_{22}^+ &{} 0 \\ 0 &{} 0 &{} -({\varLambda }_{22}^-)^{-1} \end{bmatrix}, \\ W^- &{} = &{} \begin{bmatrix} \mathbf{1 }_- (\gamma ^-) w_1 \\ X_-^T w_2 \\ X_+^T w_3 \end{bmatrix}, &{} {\varLambda }^- &{} = &{} \begin{bmatrix} \gamma ^- &{} 0 &{} 0 \\ 0 &{} {\varLambda }_{22}^- &{} 0 \\ 0 &{} 0 &{} -({\varLambda }_{22}^+)^{-1} \end{bmatrix}. \end{array} \end{aligned}$$
(4.14)
Given the notations in (4.14), we rewrite (4.3) as
$$\begin{aligned} \left\| U \right\| _t^2 + 2 DI_c = - \oint _{\delta {\varOmega }} \begin{bmatrix} W^+ \\ W^- \end{bmatrix}^T \begin{bmatrix} {\varLambda }^+&0 \\ 0&{\varLambda }^- \end{bmatrix} \begin{bmatrix} W^+ \\ W^- \end{bmatrix} \, ds. \end{aligned}$$
(4.15)
We are now ready to answer the question (iii) posed above. Together with the previous answers to (i–ii) we summarize the result in the following proposition.
Proposition 4.1
The general form of the boundary condition in (4.1) that bounds the right-hand side of (4.15) (as well as (4.12)) and leads to a maximally semi-bounded operator, well-posedness for zero boundary data and strong well-posedness for non-zero boundary data is
$$\begin{aligned} \begin{array}{rcl} W^- - R W^+ = g. \end{array} \end{aligned}$$
(4.16)
R is a matrix with the number of rows equal to the number of boundary conditions and g is given boundary data. The number of rows in R is equal to the sum of negative entries in \(A_{11}\), \({\varLambda }_{22}^-\) and \(-({\varLambda }_{22}^+)^{-1}\) in (4.12) and varies only with the sign of \(A_{11}\).
Proof
The number of negative entries in the matrix varies only with \(A_{11}\), since the total number of negative entries in \({\varLambda }_{22}^-\) and \(-({\varLambda }_{22}^+)^{-1}\) is constant and equal to the number of eigenvalues in \({\tilde{A}}_{22}\). The sign of \(A_{11}\) varies with the direction of the normal \({\hat{n}} = (n_1,n_2,n_3)^T\) along the boundary \(\delta {\varOmega }\) and the background velocity due to (4.6). The part of the proof showing that (4.16) bounds (4.15) will be given below. \(\square \)
Remark 4.4
Note the close similarity of (4.16) with the way one imposes boundary conditions for hyperbolic problems, where the ingoing characteristic variables are given by the outgoing ones together with boundary data.
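To make the analogy concrete, consider as a simple illustrative example (ours, not taken from the text above) the symmetric hyperbolic system \(U_t + A U_x = 0\) on \(x>0\), with \(A = A^T = X {\varLambda } X^T\) and solutions vanishing at infinity. The energy method gives
$$\begin{aligned} \left\| U \right\| _t^2 = (X^T U)^T {\varLambda } (X^T U) \big \vert _{x=0} = (W^+)^T {\varLambda }^+ W^+ + (W^-)^T {\varLambda }^- W^-, \end{aligned}$$
where \(W^{\pm } = X_{\pm }^T U\). Only the ingoing characteristic variables \(W^+\) (positive eigenvalues, waves entering the domain at \(x=0\)) can cause growth, and the boundary condition \(W^+ = R_0 W^- + g\), with as many rows in \(R_0\) as there are positive eigenvalues, is precisely of the form (4.16). (Which of \(W^+\) and \(W^-\) requires conditions depends on the orientation of the outward normal.)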

4.4 Weak and Strong Boundary Conditions

The boundary conditions in terms of (i–iii) are now known, and only one issue remains: they can be imposed weakly or strongly.

4.4.1 Strongly Imposed Homogeneous Boundary Conditions

The homogeneous version of the boundary condition (4.16) strongly imposed in (4.15) gives
$$\begin{aligned} \left\| U \right\| _t^2 + 2 DI_c= -\oint _{\delta {\varOmega }} (W^+)^T (R^T {\varLambda }^- R + {\varLambda }^+) (W^+) \, ds. \end{aligned}$$
(4.17)
To get a bound on the right-hand-side of (4.17), the matrix R must satisfy
$$\begin{aligned} R^T {\varLambda }^- R + {\varLambda }^+ \ge 0. \end{aligned}$$
(4.18)
Remark 4.5
Time-integration of (4.17) completes the proof of Proposition 4.1 for strongly imposed homogeneous boundary conditions and shows that the problem (4.1) is well posed (see Definition 3.3).
The general boundary operators used in (4.16) leading to an energy estimate and a well posed problem are
$$\begin{aligned} \begin{array}{rcl} HU= & {} (H^- - R H^+)U. \end{array} \end{aligned}$$
(4.19)
The operators \(H^+\) and \(H^-\) are decomposed as
$$\begin{aligned} H^+U= & {} \left( H_0^+ + H_{D0_x}^+ \frac{\partial }{\partial x} + H_{D0_y}^+ \frac{\partial }{\partial y} + H_{D0_z}^+ \frac{\partial }{\partial z}\right) U = W^+ \nonumber \\ H^-U= & {} \left( H_0^- + H_{D0_x}^- \frac{\partial }{\partial x} + H_{D0_y}^- \frac{\partial }{\partial y} + H_{D0_z}^- \frac{\partial }{\partial z}\right) U= W^- \end{aligned}$$
(4.20)
where
$$\begin{aligned} \begin{array}{rclrcl} H^+_0 &{} = &{} \begin{bmatrix} \mathbf{1 }_+ (\gamma ^+) &{} \mathbf{1 }_+ (\gamma ^+)(A_{11})^{-1} A_{12} \\ 0 &{} X_+^T \\ 0 &{} 0 \end{bmatrix}, &{} H^-_0 &{} = &{} \begin{bmatrix} \mathbf{1 }_- (\gamma ^-) &{} \mathbf{1 }_- (\gamma ^-)(A_{11})^{-1} A_{12} \\ 0 &{} X_-^T \\ 0 &{} 0 \end{bmatrix} \\ H_{D0_x}^+ &{} = &{} \begin{bmatrix} 0 &{} 0 \\ 0 &{} -X_+^T ({\tilde{A}}_{22})^{-1} D_1 \\ 0 &{} X_-^T D_1 \\ \end{bmatrix}, &{} H_{D0_x}^- &{} = &{} \begin{bmatrix} 0 &{} 0 \\ 0 &{} -X_-^T ({\tilde{A}}_{22})^{-1} D_1 \\ 0 &{} X_+^T D_1 \\ \end{bmatrix} \\ H_{D0_y}^+ &{} = &{} \begin{bmatrix} 0 &{} 0 \\ 0 &{} -X_+^T ({\tilde{A}}_{22})^{-1} D_2 \\ 0 &{} X_-^T D_2 \\ \end{bmatrix}, &{} H_{D0_y}^- &{} = &{} \begin{bmatrix} 0 &{} 0 \\ 0 &{} -X_-^T ({\tilde{A}}_{22})^{-1} D_2 \\ 0 &{} X_+^T D_2 \\ \end{bmatrix} \\ H_{D0_z}^+ &{} = &{} \begin{bmatrix} 0 &{} 0 \\ 0 &{} -X_+^T ({\tilde{A}}_{22})^{-1} D_3 \\ 0 &{} X_-^T D_3 \\ \end{bmatrix}, &{} H_{D0_z}^- &{} = &{} \begin{bmatrix} 0 &{} 0 \\ 0 &{} -X_-^T ({\tilde{A}}_{22})^{-1} D_3 \\ 0 &{} X_+^T D_3 \\ \end{bmatrix}. \end{array} \end{aligned}$$
(4.21)
In (4.21), we used
$$\begin{aligned} \begin{aligned} D_1&= n_1 {\bar{D}}_{11} + n_2 {\bar{D}}_{21} + n_3 {\bar{D}}_{31}\\ D_2&= n_1 {\bar{D}}_{12} + n_2 {\bar{D}}_{22} + n_3 {\bar{D}}_{32}\\ D_3&= n_1 {\bar{D}}_{13} + n_2 {\bar{D}}_{23} + n_3 {\bar{D}}_{33}. \end{aligned} \end{aligned}$$
(4.22)
The boundary operators in (4.19)–(4.21) are obtained by combining (4.11), (4.14) and (4.16).
Remark 4.6
Strongly imposed boundary conditions are characterized by the fact that some of the variables in the boundary terms are replaced by others. In (4.17) for example, only \(W^+\) is present.

4.4.2 Weakly Imposed Homogeneous Boundary Conditions

By imposing the homogeneous boundary condition (4.16) weakly using (4.2), we obtain
$$\begin{aligned} \left\| U \right\| _t^2 + 2 DI_c&= - \oint _{\delta {\varOmega }} \begin{bmatrix} W^+ \\ W^- \end{bmatrix}^T \begin{bmatrix} {\varLambda }^+&0 \\ 0&{\varLambda }^- \end{bmatrix} \begin{bmatrix} W^+ \\ W^- \end{bmatrix}ds \nonumber \\&\quad \,+ \oint _{\delta {\varOmega }}U^T {\varSigma }(W^- - R W^+) + (U^T {\varSigma }(W^- - R W^+))^T \, ds. \end{aligned}$$
(4.23)
By introducing \({\varSigma }^-\) such that \(U^T{\varSigma }= (W^-)^T {\varSigma }^-\), (4.23) becomes
$$\begin{aligned} \left\| U \right\| _t^2 + 2 DI_c = - \oint _{\delta {\varOmega }}\begin{bmatrix} W^+ \\ W^- \end{bmatrix}^T \begin{bmatrix} {\varLambda }^+&R^T ({\varSigma }^-)^T \\ {\varSigma }^- R&{\varLambda }^- - {\varSigma }^- - ({\varSigma }^-)^T \end{bmatrix} \begin{bmatrix} W^+ \\ W^- \end{bmatrix} \, ds. \end{aligned}$$
(4.24)
Remark 4.7
Weakly imposed boundary conditions are characterized by the fact that all variables are present in the boundary terms. In (4.24) for example, both \(W^+\) and \(W^-\) are present.
The choice \({\varSigma }^- = {\varLambda }^-\) leads to \(U^T {\varSigma }= (W^-)^T {\varLambda }^-\) and the final penalty matrix
$$\begin{aligned} {\varSigma }= (H^-)^T {\varLambda }^-. \end{aligned}$$
(4.25)
By using (4.18), the energy rate (4.24) can now be written as
$$\begin{aligned} \begin{array}{rcl} \left\| U \right\| _t^2 + 2 DI_c = &{} - &{} \displaystyle \oint _{\delta {\varOmega }} \begin{bmatrix} W^+ \\ W^- \end{bmatrix}^T \begin{bmatrix} {\varLambda }^+ &{} R^T {\varLambda }^- \\ {\varLambda }^- R &{} -{\varLambda }^- \end{bmatrix} \begin{bmatrix} W^+ \\ W^- \end{bmatrix} \, ds \\ = &{} - &{}\displaystyle \oint _{\delta {\varOmega }} (W^+)^T (R^T {\varLambda }^- R + {\varLambda }^+) (W^+)\, ds \\ &{} + &{}\displaystyle \oint _{\delta {\varOmega }} \begin{bmatrix} W^+ \\ W^- \end{bmatrix}^T \begin{bmatrix} - R^T {\varLambda }^- R &{} R^T {\varLambda }^- \\ {\varLambda }^- R &{} -{\varLambda }^- \end{bmatrix} \begin{bmatrix} W^+ \\ W^- \end{bmatrix} \, ds \\ = &{} - &{}\displaystyle \oint _{\delta {\varOmega }} (W^+)^T (R^T {\varLambda }^- R + {\varLambda }^+) (W^+) \, ds \\ &{} + &{}\displaystyle \oint _{\delta {\varOmega }} (W^- -R W^+)^T {\varLambda }^- (W^- - R W^+) \, ds, \end{array} \end{aligned}$$
(4.26)
where the right-hand-side is negative semi-definite if (4.18) holds.
Remark 4.8
Time-integration of (4.26) completes the proof of Proposition 4.1 for weakly imposed homogeneous boundary conditions and shows that the problem (4.2) is well posed (see Definition 3.3).
Remark 4.9
The energy estimate (4.26) shows that a weak imposition of well-posed homogeneous boundary conditions produces the strong energy rate with an additional term \( \oint _{\delta {\varOmega }} (W^- -R W^+)^T {\varLambda }^- (W^- - R W^+) \, ds\) that is proportional to the boundary condition. A similar dissipative term will appear in the discrete approximation.
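The algebraic identity behind (4.26), namely that the weak energy rate equals the strong rate plus a dissipative defect proportional to the boundary condition, can be checked numerically. The matrices below are small random stand-ins of ours, not the Navier–Stokes quantities:

```python
import numpy as np

# Small random stand-ins for Lambda^+ > 0, Lambda^- < 0, R, W^+, W^-.
rng = np.random.default_rng(0)
Lp = np.diag([2.0, 1.0])
Lm = np.diag([-1.0, -3.0])
R = rng.standard_normal((2, 2))
Wp, Wm = rng.standard_normal(2), rng.standard_normal(2)

# Weak rate, from (4.23) with the choice U^T Sigma = (W^-)^T Lambda^-.
weak = -(Wp @ Lp @ Wp + Wm @ Lm @ Wm) + 2 * Wm @ Lm @ (Wm - R @ Wp)
# Strong rate plus the dissipative defect, as in (4.26).
strong = -Wp @ (R.T @ Lm @ R + Lp) @ Wp
defect = (Wm - R @ Wp) @ Lm @ (Wm - R @ Wp)

assert np.isclose(weak, strong + defect)   # the identity in (4.26)
assert defect <= 0.0                       # the extra term is dissipative
```

The identity holds for any \(R\), \(W^+\), \(W^-\); only the sign of the defect uses \({\varLambda }^- < 0\).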

4.4.3 Strongly Imposed Non-homogeneous Boundary Conditions

The boundary conditions (4.16) strongly imposed in (4.15) lead to
$$\begin{aligned} \left\| U \right\| _t^2 + 2 DI_c = -\oint _{\delta {\varOmega }} \begin{bmatrix} W^+ \\ g \end{bmatrix}^T \begin{bmatrix} R^T {\varLambda }^- R + {\varLambda }^+&R^T {\varLambda }^- \\ {\varLambda }^- R&{\varLambda }^- \end{bmatrix} \begin{bmatrix} W^+ \\ g \end{bmatrix} \, ds. \end{aligned}$$
(4.27)
We can now add and subtract \(g^T G g\) where G is a positive semi-definite bounded matrix [29] to obtain
$$\begin{aligned} \left\| U \right\| _t^2 + 2 DI_c =&-\displaystyle \oint _{\delta {\varOmega }} \begin{bmatrix} W^+ \\ g \end{bmatrix}^T \begin{bmatrix} R^T {\varLambda }^- R + {\varLambda }^+&R^T {\varLambda }^- \\ {\varLambda }^- R&G \end{bmatrix} \begin{bmatrix} W^+ \\ g \end{bmatrix} \, ds \nonumber \\&+ \displaystyle \oint _{\delta {\varOmega }}g^T (G + \vert {\varLambda }^-\vert ) g \, ds. \end{aligned}$$
(4.28)
The choice
$$\begin{aligned} G \ge ({\varLambda }^- R) (R^T {\varLambda }^- R + {\varLambda }^+)^{-1} ({\varLambda }^- R)^T, \end{aligned}$$
(4.29)
bounds the right-hand-side of (4.28). In order for condition (4.29) to make sense, we need to sharpen (4.18) to
$$\begin{aligned} R^T {\varLambda }^- R + {\varLambda }^+ > 0. \end{aligned}$$
(4.30)
Remark 4.10
Time-integration of (4.28) completes the proof of Proposition 4.1 for strongly imposed non-homogeneous boundary conditions and shows that the problem (4.1) is strongly well posed (see Definition 3.4).
Remark 4.11
If (4.30) holds, then the choice (4.29) can always be made, and we can estimate the solution in terms of the boundary data which leads to a strongly well-posed problem. If condition (4.18) holds, but not (4.30), we get an energy estimate for zero boundary data and we have a well posed problem [2]. Note also that even if \({\varLambda }^+\) is singular, which is the case for the Navier–Stokes equations at a solid boundary, G can be chosen in a similar way as in (4.29) by separating out the zero eigenvalue.
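Condition (4.29) is a Schur-complement condition: with (4.30) in force, choosing G at the equality case of (4.29) makes the boundary matrix in (4.28) positive semi-definite. A small numerical sketch with model matrices of our own choosing:

```python
import numpy as np

# Model matrices (hypothetical): Lambda^+ > 0, Lambda^- < 0, and an R
# small enough that the strengthened condition (4.30) holds.
Lp = np.diag([2.0, 1.0])                 # Lambda^+
Lm = np.diag([-1.0, -0.5])               # Lambda^-
R = np.array([[0.3, 0.1], [0.0, 0.2]])

A = R.T @ Lm @ R + Lp                    # condition (4.30): A > 0
assert np.all(np.linalg.eigvalsh(A) > 0)

B = Lm @ R
G = B @ np.linalg.inv(A) @ B.T           # equality case of (4.29)
M = np.block([[A, B.T], [B, G]])         # boundary matrix in (4.28)
assert np.min(np.linalg.eigvalsh(M)) > -1e-9   # positive semi-definite
```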
Remark 4.12
The general form (4.16) can be used to formulate common standard boundary conditions for initial boundary value problems, such as the no-slip conditions for the compressible and incompressible Navier–Stokes equations and the specification of electric and magnetic fields for Maxwell's equations. The formulation (4.16) can also be used to check whether the boundary conditions in a practical case lead to a well posed or strongly well posed problem, by identifying R and verifying that it satisfies conditions (4.18) and (4.30) respectively. Finally, (4.16), (4.18) and (4.30) can be used to find previously unknown well posed boundary conditions.

4.4.4 Weakly Imposed Non-homogeneous Boundary Conditions

The boundary conditions (4.16) imposed weakly using (4.2) yield
$$\begin{aligned} \left\| U \right\| _t^2 + 2 DI_c =&- \oint _{\delta {\varOmega }} \begin{bmatrix} W^+ \\ W^- \end{bmatrix}^T \begin{bmatrix} {\varLambda }^+&0 \\ 0&{\varLambda }^- \end{bmatrix} \begin{bmatrix} W^+ \\ W^- \end{bmatrix}\, ds \\&+ \oint _{\delta {\varOmega }} U^T {\varSigma }(W^- - R W^+ - g)+(U^T {\varSigma }(W^- - R W^+ - g))^T \, ds, \nonumber \end{aligned}$$
(4.31)
where \({\varSigma }\) is the penalty matrix. Following the analysis above, we choose \({\varSigma }\) such that \(U^T{\varSigma }= (W^-)^T {\varLambda }^-\), and insert this into (4.31) to find
$$\begin{aligned} \left\| U \right\| _t^2 + 2 DI_c = -\oint _{\delta {\varOmega }} \begin{bmatrix} W^+ \\ W^- \\ g \end{bmatrix}^T \underbrace{\begin{bmatrix} {\varLambda }^+&R^T {\varLambda }^-&0 \\ {\varLambda }^- R&-{\varLambda }^-&{\varLambda }^- \\ 0&{\varLambda }^-&0 \end{bmatrix}}_{M} \begin{bmatrix} W^+ \\ W^- \\ g \end{bmatrix} \, ds. \end{aligned}$$
(4.32)
The matrix M in (4.32) can be divided into three parts and rewritten as
$$\begin{aligned} M = \begin{bmatrix} -R^T {\varLambda }^- R&R^T {\varLambda }^-&- R^T {\varLambda }^- \\ {\varLambda }^- R&-{\varLambda }^-&{\varLambda }^- \\ -{\varLambda }^- R&{\varLambda }^-&- {\varLambda }^- \end{bmatrix} + \begin{bmatrix} R^T {\varLambda }^- R + {\varLambda }^+&0&R^T {\varLambda }^- \\ 0&0&0 \\ {\varLambda }^- R&0&G \end{bmatrix} + \begin{bmatrix} 0&0&0 \\ 0&0&0 \\ 0&0&-G + {\varLambda }^- \end{bmatrix}. \end{aligned}$$
The second matrix above is positive semi-definite by the choice of G in (4.29), while the third matrix leads to a bound in terms of the data. These two matrices correspond exactly to the result obtained for strong boundary conditions in (4.28).
The first matrix in M, which is due to the use of weak boundary conditions, can be rewritten as
$$\begin{aligned} \begin{bmatrix} R&0&0 \\ 0&I&0 \\ 0&0&I \end{bmatrix}^T (C_0 \otimes {\varLambda }^-) \begin{bmatrix} R&0&0 \\ 0&I&0 \\ 0&0&I \end{bmatrix}, \quad C_0 = \begin{bmatrix} -1&+1&-1 \\ +1&-1&+1 \\ -1&+1&-1 \end{bmatrix}, \end{aligned}$$
(4.33)
where \(\otimes \) denotes the Kronecker product [30]. The matrix \(C_0\) is negative semi-definite with eigenvalues \(-3,0,0\) and hence the right-hand-side of (4.32) is bounded by data.
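A quick check (our addition) that \(C_0 = -vv^T\) with \(v = (1,-1,1)^T\), and hence has eigenvalues \(-3, 0, 0\):

```python
import numpy as np

C0 = np.array([[-1.0,  1.0, -1.0],
               [ 1.0, -1.0,  1.0],
               [-1.0,  1.0, -1.0]])
v = np.array([1.0, -1.0, 1.0])

assert np.allclose(C0, -np.outer(v, v))    # C0 = -v v^T, rank one
ev = np.sort(np.linalg.eigvalsh(C0))
assert np.allclose(ev, [-3.0, 0.0, 0.0])   # negative semi-definite
```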
The difference between the estimate (4.28) obtained by strong imposition of boundary conditions and the estimate (4.32) obtained by a weak imposition is the term
$$\begin{aligned} {\tilde{R}}= -\oint _{\delta {\varOmega }} \begin{bmatrix} W^+ \\ W^- \\ g \end{bmatrix}^T \begin{bmatrix} R&0&0 \\ 0&I&0 \\ 0&0&I \end{bmatrix}^T (C_0 \otimes {\varLambda }^-) \begin{bmatrix} R&0&0 \\ 0&I&0 \\ 0&0&I \end{bmatrix} \begin{bmatrix} W^+ \\ W^- \\ g \end{bmatrix} \, ds, \end{aligned}$$
(4.34)
on the right-hand-side in (4.32). We can expand the term \({\tilde{R}}\) by using
$$\begin{aligned} C_0 = X {\varGamma }X^T, \quad X = \frac{1}{\sqrt{3}} \begin{bmatrix} -1&+1&+1 \\ +1&+1&-1 \\ -1&0&-2 \end{bmatrix}, \quad {\varGamma }= \begin{bmatrix} -3&0&0 \\ 0&0&0 \\ 0&0&0 \end{bmatrix}, \end{aligned}$$
(4.35)
and find
$$\begin{aligned} {\tilde{R}}=&-\displaystyle \oint _{\delta {\varOmega }} \begin{bmatrix} R W^+ \\ W^- \\ g \end{bmatrix}^T (X {\varGamma }X^T \otimes {\varLambda }^-) \begin{bmatrix} R W^+ \\ W^- \\ g \end{bmatrix} \, ds \nonumber \\ =&\displaystyle -\oint _{\delta {\varOmega }} \begin{bmatrix} W^- - R W^+ - g \\ RW^+ + W^- \\ RW^+ - W^- + 2 g \end{bmatrix}^T \begin{bmatrix} -1&0&0 \\ 0&0&0 \\ 0&0&0 \end{bmatrix} \otimes {\varLambda }^- \begin{bmatrix} W^- - R W^+ - g \\ RW^+ + W^- \\ RW^+ - W^- + 2 g \end{bmatrix} \, ds \nonumber \\ =&\displaystyle +\oint _{\delta {\varOmega }} (W^- - R W^+ - g)^T {\varLambda }^- (W^- - R W^+ - g) \, ds \le 0. \end{aligned}$$
(4.36)
Remark 4.13
Time-integration of (4.32) completes the proof of Proposition 4.1 for weakly imposed non-homogeneous boundary conditions and shows that the problem (4.2) is strongly well posed (see Definition 3.4).
Remark 4.14
Just as in the case of homogeneous boundary conditions, the additional term \({\tilde{R}}=\oint _{\delta {\varOmega }}(W^- - R W^+ - g)^T {\varLambda }^- (W^- - R W^+ - g) \, ds\) in the weak energy rate is proportional to the boundary condition. A similar non-zero dissipative term will appear in the discrete approximation.

5 The Semi-discrete Approximation

To exemplify the straightforward way to stability once the analysis for the continuous problem is done, we employ a finite difference approximation on summation-by-parts (SBP) form with weakly imposed boundary conditions using the simultaneous approximation term (SAT) technique [22].
Remark 5.1
The specific discretization technique used here is not important, it is chosen as an example. We stress that any discretization technique that can be formulated on SBP form such as for example finite difference [7, 8], finite volume [9, 10], spectral element [11, 12], discontinuous Galerkin [13, 14] and flux reconstruction schemes [15, 16] will lead to the same analysis and principal results.

5.1 The Numerical Scheme

The semi-discrete SBP-SAT approximation of (4.1) on a cubic domain with weakly imposed boundary conditions is
$$\begin{aligned}&V_t + (D_x \otimes I_y \otimes I_z \otimes A)V + (I_x \otimes D_y \otimes I_z \otimes B)V + (I_x \otimes I_y \otimes D_z \otimes C)V \nonumber \\&\quad = (D_x \otimes I_y \otimes I_z \otimes I_5)F + (I_x \otimes D_y \otimes I_z \otimes I_5)G + (I_x \otimes I_y \otimes D_z \otimes I_5)H \nonumber \\&\quad \quad + \sum _{n} PEN_n \nonumber \\&\quad V(0) = f. \end{aligned}$$
(5.1)
The weak penalty terms \(\sum _{n} PEN_n\) sum over all six faces of the cube. The discrete solution \(V_{ijk}(t) \approx U(x_i, y_j, z_k,t)\) is arranged as
$$\begin{aligned} V = \begin{bmatrix} V_0 \\ V_1 \\ \vdots \\ V_i \\ \vdots \\ V_{N} \end{bmatrix}\quad V_i = \begin{bmatrix} V_{i0} \\ V_{i1} \\ \vdots \\ V_{ij} \\ \vdots \\ V_{iM} \end{bmatrix} \quad V_{ij} = \begin{bmatrix} V_{ij0} \\ V_{ij1} \\ \vdots \\ V_{ijk} \\ \vdots \\ V_{ijL} \end{bmatrix}, \quad V_{ijk}= \begin{bmatrix} V_{1} \\ V_{2} \\ V_{3}\\ V_{4} \\ V_{5} \end{bmatrix}_{ijk} \approx U(x_i,y_j, z_k,t). \end{aligned}$$
There are \(N+1\), \(M+1\) and \(L+1\) gridpoints in the x, y and z directions respectively. The matrices \(\bar{A}\), \(\bar{B}\) and \(\bar{C}\) are given in (4.1).
Out of the six penalty terms \(PEN_n\) [corresponding to the lifting operator L in (4.2)], one for each side of the cube, only the one imposing the boundary condition at \(x = 1\), of the form
$$\begin{aligned} PEN_{N}=(E_{N} P_x^{-1} {\varSigma }\otimes I_y \otimes I_z \otimes I_5)(({\tilde{H}}^- - {\tilde{R}} {\tilde{H}}^+) V - e_{N}\otimes g) \end{aligned}$$
(5.2)
is considered. The discrete representations of the vectors \(\bar{F}\), \(\bar{G}\) and \(\bar{H}\) in (2.2) are
$$\begin{aligned} \begin{aligned} {\tilde{F}}&= ({\tilde{I}} \otimes {\bar{D}}_{11}) V_x + ({\tilde{I}} \otimes {\bar{D}}_{12}) V_y + ({\tilde{I}} \otimes {\bar{D}}_{13}) V_z \\ \tilde{G}&= ({\tilde{I}} \otimes {\bar{D}}_{21}) V_x + ({\tilde{I}} \otimes {\bar{D}}_{22}) V_y + ({\tilde{I}} \otimes {\bar{D}}_{23}) V_z \\ \tilde{H}&= ({\tilde{I}} \otimes {\bar{D}}_{31}) V_x + ({\tilde{I}} \otimes {\bar{D}}_{32}) V_y + ({\tilde{I}} \otimes {\bar{D}}_{33}) V_z, \end{aligned} \end{aligned}$$
(5.3)
where we, with a slight abuse of notation, have used \({\tilde{I}} = (I_x \otimes I_y \otimes I_z)\) and
$$\begin{aligned} V_{x} = (D_x \otimes I_y \otimes I_z \otimes I_5)V,\quad V_{y} = (I_x \otimes D_y \otimes I_z \otimes I_5)V, \quad V_{z} = (I_x \otimes I_y \otimes D_z \otimes I_5)V. \end{aligned}$$
The difference operators are on summation-by-parts (SBP) form [22], i.e. \(D_{x,y,z} = P_{x,y,z}^{-1}Q_{x,y,z}\) where \(P_{x,y,z}=P_{x,y,z}^T>0\) and \(Q_{x,y,z}+Q_{x,y,z}^T=diag(-1,0,\dots ,0,+1)\). \(E_N\) is a matrix whose only non-zero element, in position \((N+1,N+1)\), is one. \(I_x\), \(I_y\), \(I_z\) and \(I_5\) are identity matrices of appropriate sizes and \(e_{N} = [0,\dots ,0,1]\) is of length \(N+1\).
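For concreteness, a classical second-order accurate SBP first-derivative operator (one standard example; the operators in [22] include higher-order variants) can be constructed and its SBP property verified as follows:

```python
import numpy as np

# Second-order SBP first-derivative operator D = P^{-1} Q on N+1 points:
# central interior stencil, one-sided stencils at the two boundaries.
N = 8
h = 1.0 / N
P = h * np.eye(N + 1)
P[0, 0] = P[N, N] = h / 2                 # boundary quadrature weights
Q = np.zeros((N + 1, N + 1))
for i in range(N):
    Q[i, i + 1], Q[i + 1, i] = 0.5, -0.5  # skew-symmetric interior part
Q[0, 0], Q[N, N] = -0.5, 0.5              # boundary closure
D = np.linalg.inv(P) @ Q

# SBP property: Q + Q^T = diag(-1, 0, ..., 0, +1)
B = np.zeros(N + 1); B[0], B[N] = -1.0, 1.0
assert np.allclose(Q + Q.T, np.diag(B))

# D differentiates linear functions exactly
x = np.linspace(0.0, 1.0, N + 1)
assert np.allclose(D @ (2 * x + 1), 2.0)
```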
The continuous boundary operator in (4.19) is \(H^- - R H^+\), where both \(H^+\) and \(H^-\) are partitioned matrix operators of Robin type, see (4.21). To construct the corresponding discrete operators we use the same partitioning and define the discrete versions of \(H^+\) and \(H^-\) as
$$\begin{aligned} \begin{aligned} {\tilde{H}}^+&= (I_x \otimes I_y \otimes I_z \otimes H_0^+) + (D_x \otimes I_y \otimes I_z \otimes H_{D0x}^+) \\&\quad \,+\, (I_x \otimes D_y \otimes I_z \otimes H_{D0y}^+) + (I_x \otimes I_y \otimes D_z \otimes H_{D0z}^+),\\ {\tilde{H}}^-&= (I_x \otimes I_y \otimes I_z \otimes H_0^-) + (D_x \otimes I_y \otimes I_z \otimes H_{D0x}^-) \\&\quad \,+\, (I_x \otimes D_y \otimes I_z \otimes H_{D0y}^-) + (I_x \otimes I_y \otimes D_z \otimes H_{D0z}^-). \end{aligned} \end{aligned}$$
(5.4)

5.2 The Energy Method

We mimic the analysis of the continuous problem above, but limit ourselves to weak boundary conditions.

5.2.1 Weakly Imposed Homogeneous Boundary Conditions

The discrete energy method (multiply with \(V^T(P_x \otimes P_y \otimes P_z \otimes I_5)\) from the left and add the transpose) applied to (5.1) with \(g=0\) gives
$$\begin{aligned} \frac{d}{dt} \left\| V \right\| _{P_{xyz}}^2 + 2 DI_d= & {} -\,V^T(E_N \otimes P_y \otimes P_z \otimes A) V + V^T (E_N \otimes P_y \otimes P_z \otimes I_5){\tilde{F}} \nonumber \\&+ \, {\tilde{F}}^T (E_N \otimes P_y \otimes P_z \otimes I_5)V \nonumber \\&+ \, V^T {\tilde{{\varSigma }}} (E_N \otimes P_y \otimes P_z \otimes I_5) ({\tilde{H}}^- - {\tilde{R}} {\tilde{H}}^+)V\nonumber \\&+ \, V^T ({\tilde{H}}^- - {\tilde{R}} {\tilde{H}}^+)^T (E_N \otimes P_y \otimes P_z \otimes I_5){\tilde{{\varSigma }}}^T V, \end{aligned}$$
(5.5)
where
$$\begin{aligned} \begin{array}{rcl} DI_d &{} = &{} \begin{bmatrix} V_x \\ V_y \\ V_z \end{bmatrix}^T P_{xyz}\begin{bmatrix} {\tilde{I}} \otimes {\bar{D}}_{11} &{} {\tilde{I}} \otimes {\bar{D}}_{12} &{} {\tilde{I}} \otimes {\bar{D}}_{13} \\ {\tilde{I}} \otimes {\bar{D}}_{21} &{} {\tilde{I}} \otimes {\bar{D}}_{22} &{} {\tilde{I}} \otimes {\bar{D}}_{23} \\ {\tilde{I}} \otimes {\bar{D}}_{31} &{} {\tilde{I}} \otimes {\bar{D}}_{32} &{} {\tilde{I}} \otimes {\bar{D}}_{33} \\ \end{bmatrix} \begin{bmatrix} V_x \\ V_y \\ V_z \end{bmatrix} \\ &{} = &{} \begin{bmatrix} V_x \\ V_y \\ V_z \end{bmatrix}^T P_{xyz} \left( {\varPsi }^T \left( \begin{bmatrix} {\bar{D}}_{11} &{} {\bar{D}}_{12} &{} {\bar{D}}_{13} \\ {\bar{D}}_{21} &{} {\bar{D}}_{22} &{} {\bar{D}}_{23} \\ {\bar{D}}_{31} &{} {\bar{D}}_{32} &{} {\bar{D}}_{33} \end{bmatrix} \otimes {\tilde{I}} \right) {\varPsi }\right) \begin{bmatrix} V_x \\ V_y \\ V_z \end{bmatrix} \ge 0. \end{array} \end{aligned}$$
(5.6)
In (5.6), we have used that the Kronecker product [30] is permutation similar for square matrices, i.e. \({\varPsi }^T (A \otimes B) {\varPsi } = B \otimes A\) for a permutation matrix \({\varPsi }\). Note that \(DI_d\) mimics the continuous dissipation \(DI_c\) and is positive semi-definite. We have also used the notation \(P_{xyz} = (P_x \otimes P_y \otimes P_z \otimes I_5)\) and \({\tilde{R}} = ({\tilde{I}} \otimes R)\).
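The permutation similarity used in (5.6) can be illustrated with the perfect-shuffle permutation (our sketch; the matrix sizes are chosen arbitrarily):

```python
import numpy as np

def shuffle(m, n):
    """Perfect-shuffle permutation S with S^T (A kron B) S = B kron A
    for A of size m x m and B of size n x n."""
    S = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            S[i * n + j, j * m + i] = 1.0
    return S

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
S = shuffle(3, 2)

assert np.allclose(S @ S.T, np.eye(6))                      # S is a permutation
assert np.allclose(S.T @ np.kron(A, B) @ S, np.kron(B, A))  # similarity
```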
Recall that \((H^- - R H^+)U = W^- - R W^+\) in the continuous case. The corresponding discrete relation, see (5.4), reads
$$\begin{aligned} ({\tilde{H}}^- - {\tilde{R}} {\tilde{H}}^+)V = {\tilde{W}}^- -{\tilde{R}} {\tilde{W}}^+. \end{aligned}$$
(5.7)
By expanding the fluxes defined in (5.3) and subsequently diagonalizing the resulting matrix, we obtain
$$\begin{aligned} \begin{array}{rcl} \frac{d}{dt} \left\| V \right\| _{P_{xyz}}^2 + 2 DI_d =&{} - &{} \begin{bmatrix} {\tilde{W}}^+ \\ {\tilde{W}}^- \end{bmatrix}^T_N P_{yz} \otimes \begin{bmatrix} {\varLambda }^+ &{} 0 \\ 0 &{} {\varLambda }^- \end{bmatrix} \begin{bmatrix} {\tilde{W}}^+ \\ {\tilde{W}}^- \end{bmatrix}_N \\ &{} &{} \\ &{} + &{} V^T {\tilde{{\varSigma }}} (E_N \otimes P_y \otimes P_z \otimes I_5) ({\tilde{W}}^- - {\tilde{R}} {\tilde{W}}^+) \\ &{} &{} \\ &{} + &{} ({\tilde{W}}^- - {\tilde{R}} {\tilde{W}}^+)^T (E_N \otimes P_y \otimes P_z \otimes I_5) {\tilde{{\varSigma }}}^T V, \end{array} \end{aligned}$$
(5.8)
which is the discrete version of (4.23). In (5.8), \(P_{yz}\) denotes \(P_y \otimes P_z\) and only the contribution at \(x=1\) is considered.
To mimic the continuous setting we let \(V^T {\tilde{{\varSigma }}} = ({\tilde{W}}^-)^T {\tilde{{\varSigma }}}^-\) which implies \({\tilde{{\varSigma }}} = ({\tilde{H}}^-)^T {\tilde{{\varSigma }}}^-\). The additional choice \({\tilde{{\varSigma }}}^-=({\tilde{I}} \otimes {\varSigma }^-)\) gives
$$\begin{aligned} \frac{d}{dt} \left\| V \right\| _{P_{xyz}}^2 + 2 DI_d = - \begin{bmatrix} {\tilde{W}}^+ \\ {\tilde{W}}^- \end{bmatrix}^T_N P_{yz} \otimes \begin{bmatrix} {\varLambda }^+&R^T {\tilde{{\varSigma }}}^-\\ ({\tilde{{\varSigma }}}^-)^T R&{\varLambda }^- - {\tilde{{\varSigma }}}^- - ({\tilde{{\varSigma }}}^-)^T \end{bmatrix} \begin{bmatrix} {\tilde{W}}^+ \\ {\tilde{W}}^- \end{bmatrix}_N \end{aligned}$$
(5.9)
which corresponds to (4.24) in the continuous case. As in the continuous case we let \({\varSigma }^- = {\varLambda }^-\) which yields
$$\begin{aligned} {\tilde{{\varSigma }}} = ({\tilde{H}}^-)^T ({\tilde{I}} \otimes {\varLambda }^-) \end{aligned}$$
(5.10)
corresponding to (4.25) and the energy rate
$$\begin{aligned} \begin{aligned} \frac{d}{dt} \left\| V \right\| _{P_{xyz}}^2 + 2 DI_d&= -({\tilde{W}}_N^+)^T (P_{yz} \otimes (R^T {\varLambda }^- R + {\varLambda }^+)) ({\tilde{W}}_N^+) \\&\quad \,+\,({\tilde{W}}_N^--R {\tilde{W}}_N^+)^T (P_{yz} \otimes {\varLambda }^-) ({\tilde{W}}^-_N - R {\tilde{W}}_N^+) \end{aligned} \end{aligned}$$
(5.11)
which corresponds to (4.26). The second term in (5.11) adds a small amount of dissipation.
We summarize the result in the following Proposition.
Proposition 5.1
The semi-discrete approximation (5.1) of (4.1) with homogeneous weak boundary conditions and penalty matrix (5.10) leads to a semi-bounded operator and a stable approximation.
Proof
Time-integration of (5.11) leads to an estimate of the form (3.16). \(\square \)
Since the semi-discrete energy rate (5.11) mimics the continuous energy rate (4.26) term by term, and the discrete solution converges to the continuous one, we can also state
Proposition 5.2
The semi-discrete approximation (5.1) of (4.1) with the penalty matrix (5.10) is strictly stable.
Remark 5.2
The derivation in this section is completely analogous to the continuous one above. In fact, the boundary conditions and penalty matrices are already derived in the analysis of the continuous problem.
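The whole chain, an SBP operator plus a SAT penalty giving a provably dissipative energy rate, can be condensed into a scalar model problem. The following sketch is ours: the advection equation \(u_t + u_x = 0\) with a weak inflow condition \(u(0,t)=0\) replaces the Navier–Stokes system, but the discrete energy method mirrors the one leading to (5.11):

```python
import numpy as np

# Scalar SBP-SAT model on [0,1]: u_t + u_x = 0, weak inflow BC u(0,t) = 0.
N = 16
h = 1.0 / N
P = h * np.eye(N + 1); P[0, 0] = P[N, N] = h / 2   # SBP norm matrix
Q = np.zeros((N + 1, N + 1))                        # Q + Q^T = diag(-1,0,...,0,1)
for i in range(N):
    Q[i, i + 1], Q[i + 1, i] = 0.5, -0.5
Q[0, 0], Q[N, N] = -0.5, 0.5

e0 = np.zeros(N + 1); e0[0] = 1.0
sigma = -1.0                                        # SAT penalty strength
# Semi-discrete scheme: V_t = M V, M = P^{-1}(-Q + sigma e0 e0^T)
M = np.linalg.inv(P) @ (-Q + sigma * np.outer(e0, e0))

# Discrete energy method: d/dt ||V||_P^2 = V^T (P M + M^T P) V
S = P @ M + M.T @ P
target = np.zeros(N + 1); target[0] = target[N] = -1.0
assert np.allclose(S, np.diag(target))              # rate = -(V_0^2 + V_N^2) <= 0
```

With \(\sigma = -1\) the energy rate is \(-(V_0^2 + V_N^2)\): the weak boundary treatment adds dissipation at the boundary, in analogy with the second term in (5.11).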

5.2.2 Weakly Imposed Non-homogeneous Boundary Conditions

By using the same procedure as for the homogeneous case but with non-zero data, we end up with
$$\begin{aligned} \begin{array}{rcl} \frac{d}{dt} \left\| V \right\| _{P_{xyz}}^2 + 2 DI_d &{} = &{} -\begin{bmatrix} {\tilde{W}}^+ \\ {\tilde{W}}^- \\ g \end{bmatrix}_N^T P_{yz} \otimes \underbrace{\begin{bmatrix} {\varLambda }^+ &{} R^T {\varLambda }^- &{} 0 \\ {\varLambda }^- R &{} -{\varLambda }^- &{} {\varLambda }^- \\ 0 &{} {\varLambda }^- &{} 0 \end{bmatrix}}_{M} \begin{bmatrix} {\tilde{W}}^+ \\ {\tilde{W}}^- \\ g \end{bmatrix}_N \end{array} \end{aligned}$$
(5.12)
where M in (5.12) is exactly the same matrix as in (4.32). Consequently, the continuous analysis leads directly to strong stability. The discrete energy estimate is similar to the continuous one, but the additional term
$$\begin{aligned} \begin{array}{rcl} {\tilde{R}}_d&{} =&{} -\begin{bmatrix} {\tilde{W}}^+ \\ {\tilde{W}}^- \\ g \end{bmatrix}_N^T P_{yz} \otimes \begin{bmatrix} -R^T {\varLambda }^- R &{} R^T {\varLambda }^- &{} -R^T {\varLambda }^- \\ {\varLambda }^- R &{} -{\varLambda }^- &{} {\varLambda }^- \\ -{\varLambda }^-R &{} {\varLambda }^- &{} -{\varLambda }^- \end{bmatrix} \begin{bmatrix} {\tilde{W}}^+ \\ {\tilde{W}}^- \\ g \end{bmatrix}_N \\ &{} &{} \\ &{} =&{} ({\tilde{W}}^-_N - R {\tilde{W}}^+_N - \tilde{g})^T (P_{yz} \otimes {\varLambda }^-) ({\tilde{W}}^-_N - R {\tilde{W}}^+_N - \tilde{g}) \end{array} \end{aligned}$$
(5.13)
corresponding to \({\tilde{R}}\) in (4.36) adds a small amount of dissipation.
We summarize the result in the following Proposition.
Proposition 5.3
The semi-discrete approximation (5.1) of (4.1) with non-homogeneous weak boundary conditions and penalty matrix (5.10) is strongly stable.
Proof
Time-integration of (5.12) leads to an estimate of the form (3.17). \(\square \)
Remark 5.3
Just as in the preceding section on weak homogeneous boundary conditions, the derivation in the semi-discrete case is analogous to the continuous one.

6 Conclusions

A complete roadmap for how to obtain well posed initial boundary value problems and related stable approximations for smooth problems has been presented. The procedure was exemplified by the time-dependent compressible Navier–Stokes equations. The number of boundary conditions, where to impose them and which form they should have were derived. The procedure is based on the energy method and generalizes the characteristic boundary procedure for the Euler equations.
The derived boundary conditions can be imposed weakly or strongly, and they lead to well posed or strongly well posed problems if conditions (4.18) and (4.30) are satisfied respectively. These conditions can be used to verify whether the choice of boundary conditions in a practical case leads to a well posed or strongly well posed problem, and subsequently to a stable scheme.
It has also been shown that the weak boundary procedures in the well-posedness analysis lead directly to stability, strong stability and strict stability of the numerical approximation if schemes on SBP form are used. The same conditions as in the continuous problem are required. The boundary conditions and penalty matrices were derived in the analysis of the continuous problem. Almost no additional derivations are necessary.
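The stability results rest on the summation-by-parts property of the discrete operators, which mimics integration by parts at the discrete level. A minimal illustration with the standard second-order SBP first-derivative operator \(D = P^{-1}Q\), where \(Q + Q^T = B = \mathrm{diag}(-1,0,\ldots,0,1)\) (the grid size and test functions below are arbitrary choices):

```python
import numpy as np

n, h = 11, 0.1
# Standard second-order SBP first-derivative operator D = P^{-1} Q.
P = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])      # diagonal norm (quadrature)
Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5                        # one-sided boundary closures

# The SBP property Q + Q^T = B ...
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)

# ... yields discrete integration by parts:
# (u, D v)_P + (D u, v)_P = u_N v_N - u_0 v_0 for all grid functions u, v.
u, v = np.random.randn(n), np.random.randn(n)
D = np.linalg.solve(P, Q)
lhs = u @ P @ (D @ v) + (D @ u) @ P @ v
assert np.isclose(lhs, u[-1] * v[-1] - u[0] * v[0])
```

Because the boundary terms appear exactly as in the continuous energy method, the weak (SAT) penalties derived in the continuous analysis transfer directly to the discrete estimate.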
The analysis of the time-dependent compressible Navier–Stokes equations in this paper is completely general and can without difficulty be extended to any coupled system of partial differential equations posed as an initial boundary value problem coupled with a numerical method on summation-by-parts form.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Metadata
Title: A Roadmap to Well Posed and Stable Problems in Computational Physics
Author: Jan Nordström
Publication date: 06.10.2016
Publisher: Springer US
Published in: Journal of Scientific Computing / Issue 1/2017
Print ISSN: 0885-7474
Electronic ISSN: 1573-7691
DOI: https://doi.org/10.1007/s10915-016-0303-9
