
About this Book

This book deals with some of the most novel advances in mathematical modeling for applied science and technology, including computer graphics, public-key encryption, data visualization, statistical data analysis, symbolic computation, error correcting codes, and risk management. It also shows that mathematics can be used to solve problems from nature, e.g., slime mold algorithms.

One of the unique features of this book is that it shows readers how to use pure and applied mathematics, especially the mathematical theories and techniques developed in the twentieth century and still being developed today, to solve applied problems in several fields of industry. Each chapter includes clues on how to use "mathematics" to solve concrete problems faced in industry, as well as practical applications.

The target audience is not limited to researchers working in applied mathematics; it also includes those in engineering, materials science, economics, and the life sciences.





Mathematics: As an Infrastructure of Technology and Science

One of the roles of mathematics is to serve as a language to describe science and technology. The terminology is often common across several branches of science and technology. In this chapter, we describe several basic notions, with emphasis on what the point of a definition is and what the key properties are. The objects are taken from set theory, groups, and algebras.

Hiroyuki Ochiai

Remarks on Quantum Interaction Models by Lie Theory and Modular Forms via Non-commutative Harmonic Oscillators

Typified by the quantum Rabi model, the spectrum of self-adjoint operators with non-commutative coefficients has recently attracted particular attention, not only in mathematics but also in theoretical and experimental physics, e.g., with a view toward applications to quantum information processing. The non-commutative harmonic oscillator (NcHO) is a self-adjoint operator that generalizes the harmonic oscillator by adding an interaction term. The Rabi model is shown to be obtained from a second-order element of the universal enveloping algebra of the Lie algebra $$\mathfrak {sl}_2$$ arising from the NcHO through the oscillator representation. More precisely, an equivalent picture of the model is obtained as a confluent Heun equation derived from the Heun operator defined by that element via another representation. Although the spectrum of the NcHO is not fully known, it has a rich structure: in the study of the spectral zeta function of the NcHO, one finds interesting arithmetic and geometry described, e.g., by elliptic curves and modular forms. In this article, we draw this picture, which may give a better understanding of interacting quantum models.

Masato Wakayama

Introduction to Public-Key Cryptography

Cryptography was once considered a means of maintaining secrecy of communications only in military affairs and diplomacy. Today, however, modern cryptography is used for various purposes in familiar circumstances. Public-key cryptography is a key technology of modern society; it is used for personal authentication, electronic commerce on the Internet, copyright protection of DVDs, and so on. In particular, the RSA public-key cryptosystem, which was proposed more than 30 years ago, has become the de facto standard of cryptographic software since the spread of the Internet in the 1990s. Another technology, called elliptic curve cryptography, was proposed in 1985. It can perform arithmetic processing at high speed, and since the beginning of the 2000s, it has been implemented in devices such as DVD players and personal digital assistants. Pairing-based cryptography, first proposed in 2000, can be incorporated in security technologies that are not practical with earlier public-key cryptosystems. It is actively studied by various organizations around the world. In this chapter, we explain the basic mathematics and security evaluations of public-key cryptography.
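The RSA round trip described above can be sketched in a few lines. The following toy Python example is purely illustrative and not from the chapter: it uses tiny primes and no padding, whereas real deployments use primes of 1024 bits or more together with a padding scheme such as OAEP.

```python
# Toy RSA key generation, encryption, and decryption with tiny primes.
# Illustrative only: real RSA uses much larger primes and padding.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keypair(p, q, e=17):
    n = p * q
    phi = (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    assert g == 1, "e must be coprime to phi(n)"
    return (n, e), (n, d % phi)   # (public key, private key)

def encrypt(pub, m):
    n, e = pub
    return pow(m, e, n)

def decrypt(priv, c):
    n, d = priv
    return pow(c, d, n)

pub, priv = make_keypair(61, 53)   # n = 3233
cipher = encrypt(pub, 42)
plain = decrypt(priv, cipher)      # recovers 42
```

Decryption recovers the plaintext because m^(ed) ≡ m (mod n) whenever ed ≡ 1 (mod φ(n)).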

Tsuyoshi Takagi

Code-Based Public-Key Encryption

We present a short survey of public-key encryption (PKE) schemes based on the hardness of general decoding. Such schemes are believed to be resistant even against attacks using quantum computers, which makes them candidates for so-called post-quantum cryptography. First, we briefly introduce the state of the art in the area of code-based PKE. Then, we describe the McEliece PKE, two major attacks against this scheme, and the proposed parameters. Finally, we survey recent results on variants of this PKE which are proven indistinguishable under chosen-plaintext and chosen-ciphertext attacks.
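As a toy illustration of the hardness assumption behind these schemes (my own small example, not from the survey), the following Python sketch performs brute-force nearest-codeword decoding of a [7,4] Hamming code. McEliece-type PKE hides an efficiently decodable code behind a scrambled, random-looking generator matrix, so an attacker is left with exactly this kind of exhaustive search, which is infeasible at cryptographic parameter sizes.

```python
# Brute-force nearest-codeword decoding of a [7,4] Hamming code.
from itertools import product

# Generator matrix of the [7,4] Hamming code (systematic form).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    # Codeword = msg * G over GF(2).
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2
                 for col in zip(*G))

def decode_bruteforce(received):
    # Try all 2^4 messages; pick the codeword closest in Hamming distance.
    return min(product([0, 1], repeat=4),
               key=lambda m: sum(a != b for a, b in zip(encode(m), received)))

msg = (1, 0, 1, 1)
word = list(encode(msg))
word[2] ^= 1                          # flip one bit: a single channel error
recovered = decode_bruteforce(tuple(word))   # recovers msg
```

Since the code has minimum distance 3, any single error is corrected; the point is that without the hidden algebraic structure, decoding cost grows exponentially in the code dimension.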

Kirill Morozov

Gröbner Basis and Its Applications

Computer algebra is a field of mathematics and computer science that studies algorithms for symbolic computation. A fundamental tool in computer algebra for studying polynomial ideals is the theory of Gröbner bases. The notion of the Gröbner basis and Buchberger's algorithm for computing it were proposed by Bruno Buchberger in 1965. Gröbner bases have numerous applications in commutative algebra, algebraic geometry, combinatorics, coding theory, cryptography, theorem proving, etc. Buchberger's algorithm is implemented in many computer algebra systems. In this chapter, we give a short introduction to Gröbner basis theory, and then we present some applications of Gröbner bases.
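As a small illustration of what such systems compute, here is a sketch using SymPy's `groebner` routine (assuming SymPy is available; the ideal below is a made-up example, not from the chapter):

```python
# A minimal Groebner basis computation with SymPy.
from sympy import groebner, symbols

x, y = symbols('x y')
# Ideal generated by x^2 + y^2 - 1 (a circle) and x - y (a diagonal line):
gb = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
# With the lex order x > y, the basis eliminates x, leaving a univariate
# polynomial in y, so the system can be solved by back-substitution.
polys = list(gb.exprs)
```

The `order='lex'` choice is what makes Gröbner bases useful for equation solving; other orders (e.g. `grevlex`) are usually faster to compute.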

Takafumi Shibuta



Stability Analysis for Variational Problems for Surfaces with Constraint

Surfaces with constant mean curvature (CMC surfaces) are critical points of the area functional among surfaces enclosing the same volume. Therefore, they are a simple example of solutions of a variational problem with a constraint. A CMC surface is said to be stable if the second variation of the area is nonnegative for all volume-preserving variations satisfying the given boundary condition. The purpose of this article is to present some fundamental methods for studying the stability of CMC surfaces. In particular, we give a criterion for the stability of compact CMC surfaces with prescribed boundary. Another concept closely related to the stability of CMC surfaces is the so-called bifurcation. We give sufficient conditions on a one-parameter family of CMC surfaces for a bifurcation to exist. Moreover, we give a criterion for CMC surfaces in the bifurcation branch to be stable.

Miyuki Koiso

Discrete Models of Isoperimetric Deformation of Plane Curves

We consider the isoperimetric deformation of smooth curves on the Euclidean plane. It naturally gives rise to a nonlinear partial differential equation called the modified KdV (mKdV) equation as the deformation equation of the curvature, which is known as one of the most typical examples of soliton equations, or integrable systems. The Frenet equation and the deformation equation of the Frenet frame of the curve form the auxiliary linear problem, or Lax pair, of the mKdV equation. Based on this formulation, we present two discrete models of isoperimetric deformation of plane curves that preserve the underlying integrable structure: a discrete deformation described by the discrete mKdV equation and a continuous deformation described by the semi-discrete mKdV equation.

Jun-ichi Inoguchi, Kenji Kajiwara, Nozomu Matsuura, Yasuhiro Ohta

Computing Optimal Cycles of Homology Groups

This is a brief survey concerning the problem of computing optimal cycles of homology groups through linear optimization. While homology groups encode information about the presence of topological features such as holes and voids of some geometrical structure, optimal cycles tighten the representatives of the homology classes. This allows us to infer additional information concerning the location of those topological features. Moreover, by a slight modification of the original problem, we extend it to the case where we have multiple nonhomologous cycles. By considering a more general class of combinatorial structures called complexes, we recast this multiple nonhomologous cycles problem as a single cycle optimization problem in a modified complex. Finally, as a numerical example, we apply the optimal cycles problem to the 3D structure of human deoxyhemoglobin.

Emerson G. Escolar, Yasuaki Hiraoka

Singularity Theory of Differentiable Maps and Data Visualization

In many scientific situations, a given set of large data, obtained through simulation or experiment, can be considered to be a discrete set of sample values of a differentiable map between Euclidean spaces or between manifolds. From such a viewpoint, this article explores how the singularity theory of differentiable maps is useful in the visualization of such data. Special emphasis is put on Reeb graphs for scalar functions and on singular fibers of multi-variate functions.

Osamu Saeki



Mathematical Analysis for Pattern Formation Problems

In this paper, we explain our theoretical treatment of various kinds of patterns appearing in nature. We introduce one of our typical approaches, which focuses on the pattern boundaries and derives a curvature flow equation for the motion of these boundaries. This approach is based on the idea that patterns are defined by their boundaries.

Shin-ichiro Ei

Models and Applications of Organism Transportation

Organisms create various transportation networks. Many of these networks are adaptive: a link grows with heavy use and degenerates with light use. In this chapter, a mathematical model of adaptive networks is introduced. Next, simulation results obtained from this model with various parameters are shown. As a result, this chapter shows how an organism can attain a global function using only a local growth law.

Atsushi Tero

The Renormalization Group Method for Ordinary Differential Equations

The renormalization group (RG) method is one of the singular perturbation methods, which provides the asymptotic behavior of solutions of differential equations. In this article, we show how to construct approximate solutions by the RG method through several examples, and we give basic theorems on the RG method, such as an error estimate and the existence of invariant manifolds.

Hayato Chiba

A Phase Field Approach to Mathematical Modeling of Crack Propagation

We consider a phase field model for crack propagation in an elastic body. The model is derived as an irreversible gradient flow of the Francfort-Marigo energy with the Ambrosio-Tortorelli regularization and is consistent with the classical Griffith theory. Some numerical examples computed by an adaptive mesh finite element method are presented.

Masato Kimura, Takeshi Takaishi

Variational Methods in Differential Equations

This chapter concerns classical variational methods in boundary value problems and a free boundary problem, with special emphasis on how to view a differential equation as a variational problem. Variational methods are simple but very powerful analytical tools for differential equations. In particular, the unique solvability of a differential equation reduces to a minimization problem, for which a minimizer is shown to be a solution to the original equation. As a model problem, the Poisson equation with different types of boundary conditions is considered. We begin with the derivation of the equation in the context of potential theory, and then show successful applications of variational methods to these boundary value problems. Finally, we study a free boundary problem by extending the idea to a minimization problem with a constraint.

Michiaki Onodera

Probability and Statistics


Finite Markov Chains and Markov Decision Processes

Markov chains are important tools used for stochastic modeling in various areas of the mathematical sciences. The first section of this article presents a survey of the basic notions of discrete-time Markov chains on finite state spaces together with several illustrative examples. Markov decision processes (MDPs), which are also known as stochastic dynamic programming or discrete-time stochastic control, are useful for decision making under uncertainty. The second section provides a simple formulation of MDPs with finite state spaces and actions, and gives two important algorithms for solving MDPs, value iteration and policy iteration, with an example on the iPod shuffle.
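A minimal value-iteration sketch in Python; the two-state MDP below, with its rewards and transition probabilities, is invented for illustration (it is not the iPod shuffle example from the chapter):

```python
# Value iteration on a tiny two-state, two-action MDP (made-up data).
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},
    1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]},
}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 2.0, 1: 0.0}}
gamma = 0.9   # discount factor

def value_iteration(tol=1e-8):
    V = {0: 0.0, 1: 0.0}
    while True:
        # Bellman optimality update: maximize over actions.
        V_new = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                        for a in (0, 1))
                 for s in V}
        if max(abs(V_new[s] - V[s]) for s in V) < tol:
            return V_new
        V = V_new

V = value_iteration()   # optimal value of each state
```

Policy iteration would alternate policy evaluation and greedy improvement instead; both converge for discounted finite MDPs.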

Tomoyuki Shirai

Introduction to the Premium Principle Based on the Wang Transform

This is a self-contained introductory survey article on the premium principle based on the Wang transform. We give the definition and examples of the Wang transform and prove that the induced premium principle is a coherent risk measure.
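As a numerical sketch (my own toy example, not from the article): the Wang transform distorts a survival function S via g(u) = Phi(Phi^{-1}(u) + lambda), and the premium of a nonnegative risk is the integral of g(S(x)) over x >= 0. The code below approximates this for an Exponential(1) risk; the loading parameter lambda = 0.25 is an arbitrary choice.

```python
# Wang-transform premium for an Exponential(1) risk, by numerical quadrature.
import math
from statistics import NormalDist

N = NormalDist()   # standard normal distribution

def wang_distortion(u, lam):
    # g(u) = Phi(Phi^{-1}(u) + lam), extended by 0 and 1 at the endpoints.
    if u <= 0.0:
        return 0.0
    if u >= 1.0:
        return 1.0
    return N.cdf(N.inv_cdf(u) + lam)

def wang_premium(survival, lam, upper=40.0, n=20000):
    # Premium = integral_0^upper g(S(x)) dx, midpoint rule.
    h = upper / n
    return h * sum(wang_distortion(survival((i + 0.5) * h), lam)
                   for i in range(n))

premium = wang_premium(lambda x: math.exp(-x), lam=0.25)
# With lam > 0 the premium exceeds the expected loss E[X] = 1,
# i.e., the principle charges a positive risk loading.
```

Setting lambda = 0 makes g the identity, and the premium collapses to the plain expectation, which is one quick sanity check of the implementation.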

Shingo Saito

Stochastic Process Models

A stochastic process model describes how an object "randomly" varies over time and is typically formalized as an infinite-dimensional random variable $$X=X(\omega )=\{X_{t}(\omega )\}_{t\in T}$$ whose value is either a continuous or a càdlàg (right-continuous with left-hand limits) function of $$t\in T\subset \mathbb {R}_{+}$$. The probabilistic structure of $$X$$ can be wonderfully rich, ranging from a piecewise-constant type describing a low-frequency state change to a very rapidly varying type for which we cannot define $$\int f\mathrm{{d}}X$$ pathwise as a Riemann-Stieltjes integral even for a smooth $$f$$; typical examples are a compound Poisson process and a Wiener process, respectively. Examples of application fields include signal processing (detection, estimation, etc.), population dynamics, finance, hydrology, radiophysics, and turbulence.

Hiroki Masuda

Signal Detection and Model Selection

Signal detection is a basic statistical problem in various fields, including engineering, econometrics, and psychometrics. It is performed by statistical testing or model selection, but conventional statistical theory cannot be applied to it. The reason is that the signal model, a statistical model for signal detection, has an irregularity called non-identifiability. Because of this non-identifiability, the signal model needs to be shrunk in its geometrical representation. After drawing this representation, we prove an asymptotic property of the likelihood ratio statistic for the model, which is indicated by the geometrical representation. Then, on the basis of this asymptotic property, we introduce a model selection criterion that accounts for non-identifiability: a reevaluated Akaike information criterion (AIC). We check the validity of the reevaluated AIC through simulation studies and real data analysis using a factor analysis model, which can be regarded as a kind of signal model.

Yoshiyuki Ninomiya

Regression Analysis and Its Development

Regression analysis aims to predict a target variable statistically by using explanatory variables. The analysis has a long history and is utilized in various situations. We will review linear regression analysis and describe model assessment methods based on the coefficient of determination and Akaike information criterion (AIC). Furthermore, we propose a relative coefficient of determination based on AIC for general statistical modeling. Finally, we illustrate variable selection and discuss recent developments in regression analysis.
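A minimal sketch of AIC-based model assessment for linear regression. The synthetic data and the Gaussian-error AIC written up to an additive constant are illustrative conventions of mine, not the chapter's proposal:

```python
# Comparing linear regression models by AIC on synthetic data.
# For Gaussian errors, up to an additive constant:
#   AIC = n * log(RSS / n) + 2k,  k = number of parameters (incl. variance).
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)   # true model is linear

def aic(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                             # coefficients + variance
    return n * np.log(rss / n) + 2 * k

X1 = np.column_stack([np.ones(n), x])                # intercept + slope
X2 = np.column_stack([np.ones(n), x, x**2, x**3])    # extra cubic terms
aic_linear, aic_cubic = aic(X1, y), aic(X2, y)
# The larger model always attains a smaller RSS, but AIC penalizes the
# extra parameters, so the true (linear) model is usually preferred.
```

The coefficient of determination, by contrast, can only increase as variables are added, which is the motivation for AIC-type penalized criteria.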

Ryuei Nishii

Stochastic Analytical Models in Mathematical Finance

Stochastic analysis is a key tool in the recent study of mathematical finance. Stochastic analytical models in mathematical finance are classified into two types. One is a discrete model, in which the trading time is restricted to the set of natural numbers, and moreover the underlying probability space is often a finite set. The other is a continuous model, which allows the trading time to be any non-negative real number. In many continuous models, stochastic differential equations govern the time evolution of the models. A short survey of these two types of models is given.

Setsuo Taniguchi

An Introduction to the Minimum Description Length Principle

We give a brief introduction to the minimum description length (MDL) principle. The MDL principle is a mathematical formulation of Occam's razor. It says 'simple explanations of a given phenomenon are to be preferred over complex ones.' This is recognized as one of the basic stances of scientists, and it plays an important role in statistics and machine learning. In particular, Rissanen proposed the MDL criterion for statistical model selection based on information theory in 1978. Since then, much literature has been published, and the notion of the MDL principle was established in the 1990s. In this article, we review some important results on the MDL principle.

Jun’ichi Takeuchi

An Introduction to Ergodic Theory

Ergodic theory concerns the study of the long-time behavior of a dynamical system. A fundamental result known as Birkhoff's ergodic theorem states that, under certain conditions, the time average exists and is equal to the space average. The applications of ergodic theory are the main concern of this note. We introduce fundamental concepts in ergodic theory, Birkhoff's ergodic theorem, and its consequences.
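Birkhoff's theorem can be checked numerically in a few lines. The sketch below uses a standard textbook example (assumed here for illustration): the irrational rotation T(x) = x + alpha mod 1 is ergodic with respect to Lebesgue measure for irrational alpha, so time averages of an observable converge to its integral.

```python
# Time average vs. space average for an irrational circle rotation.
import math

alpha = math.sqrt(2) - 1                        # irrational rotation number
f = lambda x: math.cos(2 * math.pi * x) ** 2    # observable; integral = 1/2

def time_average(x0, n):
    # Average f along the orbit x0, T(x0), T^2(x0), ... of length n.
    total, x = 0.0, x0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / n

avg = time_average(0.1, 200000)
space_avg = 0.5   # integral of cos^2(2*pi*x) over [0, 1)
# avg converges to space_avg regardless of the starting point x0.
```

For this smooth observable the convergence is in fact much faster than the general theorem guarantees, which is a separate (discrepancy-theoretic) phenomenon.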

Khanh Duy Trinh

Applied Mathematics


Discrete Optimization: Network Flows and Matchings

In this paper, we give a brief introduction to network flow problems and matching problems, which are representative problems in discrete optimization. Network flow problems are used for modeling, e.g., car traffic and evacuation. Matching problems are used when we allocate jobs to workers, assign students to laboratories, and so on. In particular, we focus on the mathematical models used in these problems.
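A compact sketch of the maximum-flow computation at the heart of network flow problems, using the Edmonds-Karp variant of the augmenting-path method; the small network below is made up for illustration and is not from the chapter.

```python
# Edmonds-Karp max flow: repeatedly augment along shortest residual paths.
from collections import deque

def max_flow(cap, s, t):
    # cap: dict of dicts, cap[u][v] = edge capacity. Returns max s-t flow.
    flow = {u: {v: 0 for v in cap[u]} for u in cap}
    for u in list(cap):                      # add zero-capacity reverse edges
        for v in cap[u]:
            cap.setdefault(v, {}).setdefault(u, 0)
            flow.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        parent = {s: None}                   # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total                     # no augmenting path: flow is maximal
        path, v = [], t                      # reconstruct the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:                    # push the bottleneck amount
            flow[u][v] += aug
            flow[v][u] -= aug
        total += aug

capacities = {
    's': {'a': 3, 'b': 2},
    'a': {'b': 1, 't': 2},
    'b': {'t': 3},
    't': {},
}
value = max_flow(capacities, 's', 't')
```

By the max-flow min-cut theorem, the returned value equals the capacity of a minimum s-t cut; bipartite matching reduces to exactly this computation with unit capacities.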

Naoyuki Kamiyama

Strict Feasibility of Conic Optimization Problems

A conic optimization problem (COP) is the problem of minimizing a given linear objective function over the intersection of an affine space and a closed convex cone. COPs are often used for solving nonconvex optimization problems. The strict feasibility of a COP is important from the viewpoint of computation, since its lack may cause numerical instability. This article provides a brief introduction to COPs and a characterization of their strict feasibility. We also explain a facial reduction algorithm (FRA) based on this characterization. The algorithm can generate a strictly feasible COP that is equivalent to the original COP, or detect the infeasibility of the COP.

Hayato Waki

Theory of Automata, Abstraction and Applications

We introduce computational models, such as sequential machines and automata, using category theory. In particular, we introduce a generalized theorem stating the existence of the most efficient finite state automaton, called the minimal realization. First, we introduce elementary set-theoretical models using sets and functions. We then consider a category of sequential machines, which is an abstract model of finite automata. In category theory, we consider several properties of compositions of morphisms. When we look at the category of sets and functions, we describe these properties using equations of compositions of functions. Since category theory is a general theory, we can obtain many concrete properties from a general theorem by applying it to specific categories, such as sets and functions, linear spaces and linear transformations, etc.

Yoshihiro Mizoguchi

Markov Chain Monte Carlo Algorithms

Markov chain Monte Carlo (MCMC) methods are a general framework of algorithms for generating samples from a specified probability distribution. They are useful when direct sampling from the distribution is infeasible. This article describes the theory of MCMC, presents two typical MCMC algorithms (Metropolis-Hastings and Gibbs sampling) and three tempering methods (simulated tempering, parallel tempering, and simulated annealing), and discusses the application of MCMC methods to a prediction problem in systems biology.
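A minimal Metropolis-Hastings sketch, with a standard normal target and a symmetric random-walk proposal; the step size and sample count are arbitrary choices for illustration.

```python
# Metropolis-Hastings sampling of N(0, 1) with a random-walk proposal.
import math
import random

random.seed(1)

def log_target(x):
    # Log density of N(0, 1), up to an additive constant.
    return -0.5 * x * x

def metropolis(n, step=1.0, x0=0.0):
    xs, x = [], x0
    for _ in range(n):
        prop = x + random.uniform(-step, step)      # symmetric proposal
        # Accept with probability min(1, pi(prop) / pi(x)).
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x = prop
        xs.append(x)                                # duplicates on rejection
    return xs

samples = metropolis(50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean and var approach 0 and 1 as the chain length grows.
```

Because the proposal is symmetric, the Hastings correction cancels; only the (unnormalized) target density is needed, which is exactly why MCMC works when direct sampling does not.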

Osamu Maruyama

Modeling of Fluid Flows by Nonlinear Schrödinger Equation

Fluid flows exhibit diverse behavior, from ordered to chaotic and turbulent motion. The Navier-Stokes and Euler equations governing such motion are formidable as they stand, and even the highest-performance computers have difficulty producing accurate, and therefore useful, solutions. Constant effort has been made to model flow phenomena mathematically by simplified equations, deriving them from the Navier-Stokes equations and solving them. In this note, we illustrate how the nonlinear modulation of a traveling wave, as observed in water waves, is modeled by the nonlinear Schrödinger equation. The wave modulation is captured as the instability and bifurcation of a plane-wave solution. Behind this lies the Hamiltonian structure of the Euler equations, to which Krein's theory of Hamiltonian spectra is applicable. We build on it a striking aspect of dissipation and diffusion that drives instability for an otherwise stable solution.

Yasuhide Fukumoto

Financial Applications of Quasi-Monte Carlo Methods

This article overviews major developments in the last two decades on the applications of quasi-Monte Carlo methods to financial computations.
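As a small illustration of the quasi-Monte Carlo idea (my own toy example, not from the article): the base-2 van der Corput sequence is a classic low-discrepancy point set, and the sketch below compares it with pseudo-random sampling on a simple one-dimensional integral.

```python
# Quasi-Monte Carlo (van der Corput) vs. plain Monte Carlo integration.
import random

def van_der_corput(i, base=2):
    # Radical inverse of the integer i: its base-b digits are mirrored
    # around the radix point, giving a low-discrepancy point in [0, 1).
    v, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        v += (i % base) / denom
        i //= base
    return v

f = lambda x: x * x          # arbitrary smooth integrand; integral = 1/3
n = 4096

qmc = sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n

random.seed(0)
mc = sum(f(random.random()) for _ in range(n)) / n
# The QMC estimate typically lands much closer to 1/3 than plain MC,
# reflecting the O((log n)/n) vs. O(1/sqrt(n)) error rates.
```

Financial applications use higher-dimensional relatives (Halton, Sobol', lattice rules), where the same low-discrepancy idea applies coordinate by coordinate.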

Shu Tezuka

Pure Mathematics and Applied Mathematics are Inseparably Intertwined: Observation of the Early Analysis of the Infinity

In this work I consider the connection between pure mathematics and applied mathematics from a historian’s point of view and I conclude that pure mathematics and applied mathematics are inseparably intertwined.

Masahito Takase

High Performance Computing for Mathematical Optimization Problem

The semidefinite programming (SDP) problem is one of the central problems in mathematical optimization. The primal-dual interior-point method (PDIPM) is one of the most powerful algorithms for solving SDP problems, and many research groups have employed it for developing software packages. However, two well-known major bottlenecks, i.e., the generation of the Schur complement matrix (SCM) and its Cholesky factorization, exist in the algorithmic framework of the PDIPM. We have developed a new version of the semidefinite programming algorithm parallel version (SDPARA), which is a parallel implementation on multiple CPUs and GPUs for solving extremely large-scale SDP problems with over a million constraints. SDPARA can automatically extract the unique characteristics of an SDP problem and identify the bottleneck. When the generation of the SCM becomes a bottleneck, SDPARA can attain high scalability using a large number of CPU cores together with processor-affinity and memory-interleaving techniques. SDPARA can also perform parallel Cholesky factorization using thousands of GPUs and techniques for overlapping computation and communication if an SDP problem has over 2 million constraints and Cholesky factorization constitutes a bottleneck. We demonstrate that SDPARA is a high-performance general solver for SDPs in various application fields through numerical experiments conducted on the TSUBAME 2.5 supercomputer, and we solved the largest SDP problem (which has over 2.33 million constraints), thereby creating a new world record. Our implementation also achieved 1.713 PFlops in double precision for large-scale Cholesky factorization using 2,720 CPUs and 4,080 GPUs.

Katsuki Fujisawa

Application of Mathematics


Modeling of Head-Disk Interface for Magnetic Recording

All the existing models of thin-film gas lubrication developed for designing the head sliders of hard disk drives are chronologically reviewed to show how each model was improved and finally generalized to treat gas lubrication flows for arbitrary Knudsen numbers. Each model is compared with the others using specific examples for benchmarking purposes. A possible approach to modeling the head-disk interface is also proposed for further consideration, one that has the potential to address one of the extreme operations in which the reader/writer element of the head slider surfs through the lubricant on the disk.

Kanzo Okada

Nonstationary Analysis of Blast Furnace Through Solution of Inverse Problem and Recurrence Plot

A recurrence plot is a method for directly visualizing, on a two-dimensional surface, information about proximal points and the distance between two points created using time-delay coordinates from time series data. This method qualitatively captures the nonstationary nature of time series data. However, as the complexity of the phenomena increases, the plotted structure of the visualization also becomes complex, which can cause problems when trying to interpret changes in the plotted structure. This paper proposes a method for analyzing the structure of recurrence plots on the basis of deterministic principles. The proposed method is applied to a blast furnace. Because a blast furnace involves handling enormous quantities of high-temperature molten iron as well as complex phenomena accompanying reactions in the gas, liquid, and solid phases, direct measurement of its internal states, when treated as spatial distributions, is extremely difficult. Thus, this study analyzes the deterministic properties underlying the temperature shifts in the thermocouples embedded in the brickwork of the furnace floor.

Junichi Nakagawa

Time-Periodic Nonlinear Steady Field Analysis

The Error-correction Time-interval Flexible (ETF) method is presented for quickly obtaining time-periodic nonlinear fields in the presence of extremely slowly decaying fields. The analysis variables are corrected using the steady-state condition with respect to time variations of the fundamental components. The ETF method is classified into the Self ETF and Mutual ETF methods. The time interval of the error corrections can be selected flexibly, and step-by-step continuous error corrections are then available with the Mutual ETF method. The ETF method improves on the convergence properties of conventional methods such as the simplified time-periodic explicit error correction (TP-EEC) and the simplified polyphase TP-EEC methods. The presented methods were verified on three-variable simultaneous equations as a simple linear example problem and on a nonlinear magnetic field simulation of a synchronous motor by the finite element method as a multivariable problem.

Kenji Miyata

Mathematical Models in First-Principles Calculations for Materials Science

The mathematical models used in first-principles calculations are summarized, and the uses of mathematics in various industries are introduced. The different approximations used to obtain the Hartree-Fock equation from the Schrödinger equation for multi-atom systems are summarized, and the difficulties in solving the Hartree-Fock equation in a self-consistent way are presented. Novel algorithms are needed to reduce the computational costs of large systems.

Hajime Kobayashi

Mathematics and Manufacturing: The Symbolic Approach

This chapter discusses applications of the symbolic approach to the manufacture of hardware and software. Two example applications, one hardware and the other software, are illustrated. The first example is the design of a hard disk drive (HDD) head by using quantifier elimination (QE), and the other is software validation using symbolic execution. Both examples demonstrate the strengths of the symbolic approach over conventional numerical approaches. While there are, of course, challenges facing the symbolic approach such as faithful modeling and the need for abstraction, it is an extremely powerful and game-changing technology.

Ryusuke Masuoka, Hirokazu Anai

Error Correcting Codes Based on Probabilistic Decoding and Sparse Matrices

These days we encounter many digital storage and communication devices in our daily lives. They contain error correcting codes that operate when data is read from storage devices or received via communication devices. For example, you can listen to music on a compact disc even if its surface is scratched. This article introduces low density parity check (LDPC) codes and the sum-product decoding algorithm. LDPC codes, one class of error correcting codes, have been used for practical applications such as hard disk drives and satellite digital broadcast systems because their performance closely approaches the theoretical limit with manageable computational complexity. In particular, it is shown that an optimal decoding algorithm from the viewpoint of probabilistic inference can be derived with LDPC codes.
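The sum-product algorithm itself is beyond a few lines, but its simpler relative, the bit-flipping decoder, already illustrates iterative decoding on a sparse parity-check matrix. The matrix and the single-error scenario below are made-up toy data, not from the article, and bit flipping is a cruder (hard-decision) substitute for the probabilistic sum-product decoding described in the chapter.

```python
# Bit-flipping decoding on a small sparse parity-check matrix (toy data).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

def syndrome(H, word):
    # Parity-check results over GF(2); all zero means a valid codeword.
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def bit_flip_decode(H, word, max_iters=20):
    word = list(word)
    for _ in range(max_iters):
        s = syndrome(H, word)
        if not any(s):
            return word                    # all parity checks satisfied
        # Count unsatisfied checks touching each bit; flip the worst bit.
        scores = [sum(s[i] for i, row in enumerate(H) if row[j])
                  for j in range(len(word))]
        word[scores.index(max(scores))] ^= 1
    return word

codeword = [0, 0, 0, 0, 0, 0]   # the all-zero word is always a codeword
received = codeword[:]
received[4] ^= 1                # a single channel error
decoded = bit_flip_decode(H, received)
```

Sum-product decoding replaces these hard bit counts with probability (message) passing along the same sparse bipartite graph, which is what lets LDPC codes approach the theoretical limit.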

Hironori Uchikawa

