Published in: Journal of Scientific Computing 1/2019

Open Access 27-03-2019

A Monge–Ampère Problem with Non-quadratic Cost Function to Compute Freeform Lens Surfaces

Authors: N. K. Yadav, J. H. M. ten Thije Boonkkamp, W. L. IJzerman


Abstract

In this article, we present a least-squares method to compute freeform surfaces of a lens with parallel incoming and outgoing light rays, which is a transport problem corresponding to a non-quadratic cost function. The lens can transfer a given emittance of the source into a desired illuminance at the target. The freeform lens design problem can be formulated as a Monge–Ampère type differential equation with a transport boundary condition, expressing conservation of energy combined with the law of refraction. Our least-squares algorithm is capable of handling a non-quadratic cost function, and provides two solutions corresponding to either convex or concave lens surfaces.

1 Introduction

The optical design problem involving freeform surfaces is a challenging problem, even for a single mirror/lens surface which transfers a given intensity/emittance distribution of the source into a desired intensity/illuminance distribution at the target [13]. More specifically, the freeform design problem is an inverse problem: “Find an optical system containing freeform refractive/reflective surfaces that provides the desired target light distribution for a given source distribution”. Inverse optical design has a wide range of applications from LED based optical products for street lighting and car headlights to applications in medical science, image processing and lithography [1, 4].
To convert a given emittance profile with parallel light rays into a desired illuminance profile with parallel light rays, one requires at least two freeform lens/mirror surfaces [2, 5]. This freeform problem can be formulated as a second order partial differential equation of Monge–Ampère (MA) type, with transport boundary conditions, applying the laws of geometrical optics and energy conservation [2, 3, 6, 7]. For the two-reflector problem [2, 8], one can obtain the following mathematical formulation using properties of geometrical optics, i.e.,
$$\begin{aligned} u_1(\texttt {x}) + u_2(\texttt {y}) = c(\texttt {x}, \texttt {y}) := a_1 + a_2 |\texttt {x}-\texttt {y}|^2, \end{aligned}$$
(1)
where \(a_1, a_2\) are constants, \(u_1(\texttt {x})\), \(u_2(\texttt {y})\) represent the locations of the optical surfaces, and \(|\cdot |\) denotes the 2-norm for vectors. The right hand side function \(c(\texttt {x}, \texttt {y})\) in the above expression is a quadratic (cost) function. Assuming convex/concave reflective surfaces, the ray-trace map can be uniquely expressed as the gradient of \(u_{1}\), i.e.,
$$\begin{aligned} \varvec{m}(\texttt {x}) = \nabla u_1(\texttt {x}). \end{aligned}$$
(2)
Furthermore, using conservation of energy, one can derive a second order partial differential equation (PDE) of MA-type. We refer to this equation as the standard MA-equation, representing optical systems characterized by a quadratic cost function.
In this article, we show that a similar mathematical expression can be obtained for the freeform surfaces of a lens with parallel ingoing and outgoing light rays by applying the laws of geometrical optics:
$$\begin{aligned} u_1(\texttt {x}) + u_2(\texttt {y}) = c(\texttt {x}, \texttt {y}) := b_1 - \sqrt{b_2 + b_3 |\texttt {x}-\texttt {y}|^2 }, \end{aligned}$$
(3)
where \(b_1, b_2, b_3\) are constants, and \(u_1(\texttt {x})\), \(u_2(\texttt {y})\) represent the first and second refractive surface of the lens, respectively. For the freeform lens the cost function \(c(\texttt {x}, \texttt {y})\) is no longer a quadratic function, and the ray-trace map cannot be expressed as the gradient of some function; we provide more details in Sect. 2. Energy conservation results in a complicated MA-type equation. In this article, we will present a numerical algorithm to compute the freeform surfaces of a lens characterized by a non-quadratic cost function. A rigorous analysis of the existence and uniqueness of weak solutions of similar lens design problems is presented in Oliker’s work [9, 10].
There are several numerical methods which can be employed to compute freeform surfaces of optical systems characterized by a quadratic cost function. However, to the best of our knowledge, this paper is the first to describe a numerical method for the MA-equation with non-quadratic cost function. Froese et al. [11–13] solve the standard MA-equation within the framework of optimal mass transport (OMT). Applying the theory of viscosity solutions, they refine the solution using an iterative Fourier-transform algorithm with overcompensation. In recent publications [5, 14, 15], the authors obtain freeform optical surfaces by solving the standard MA-equation using Newton iteration. These numerical methods require an initial guess which is obtained through the OMT problem. Brix et al. [3, 16] solve the standard inverse design problem using a collocation method with a tensor-product B-spline basis. Glimm and Oliker [8, 17] show that the illuminance control problem can be solved using an optimization approach instead of solving an MA-type differential equation. Further, a similar approach to design freeform surfaces of a lens is developed by Rubinstein and Wolansky [18].
A least-squares (LS) method [2, 7] has been presented to solve the standard MA-equation and to compute the freeform surfaces of single-reflector, single-lens or double-reflector optical systems. The method provides the optical mapping which transfers the given emittance of the source into the desired illuminance at the target, and the freeform surfaces are obtained via this mapping.
However, the design problem for the coupled freeform lens surfaces corresponds to a non-quadratic cost function. The goal of this paper is to present a numerical method which is applicable to the design of an optical system corresponding to a non-quadratic cost function. Here, we present a fast and effective extended least-squares (ELS) method to construct the freeform surfaces of the lens. The ELS-method is a two-stage procedure like the LS-method: first we determine an optimal mapping by minimizing three functionals iteratively; next, we compute the freeform surfaces from the converged mapping. In the first stage, there are two nonlinear minimization steps, which can be performed point-wise, like in the LS-method. In the third step, two elliptic partial differential equations have to be solved. For the LS-method, these are decoupled Poisson equations; in the ELS-method, however, they are coupled elliptic equations.
Our least-squares method is quite generally applicable since it can handle arbitrary twice differentiable cost functions \(c(\texttt {x}, \texttt {y})\), which also arise in other fields of science and engineering such as optimal transport theory, shape optimization, compression modeling, relativistic theory, incompressible fluid flow, economics, astrophysics and the atmospheric sciences. For the interested reader, we refer to the following: Evans’ survey notes [19], the articles of Bouchitté and Buttazzo [20, 21], Gangbo’s lecture notes [22], and the paper of Benamou and Brenier [23]. However, we restrict ourselves to the computation of freeform optical systems.
This paper is structured as follows. In Sect. 2 we explain the geometrical structure of the optical system and formulate the mathematical model. The detailed procedure of the proposed least-squares method is shown in Sect. 3. We apply the numerical method to four test problems in Sect. 4 and verify the solutions using a ray tracing algorithm [2]. Finally, a brief discussion and concluding remarks are given in Sect. 5.

2 Formulation of the Problem

The geometrical structure of a lens optical system is shown schematically in Fig. 1. Let \((x_1, x_2, z)\in \mathbb {R}^3\) denote the Cartesian coordinates with z the horizontal coordinate and \(\texttt {x} = (x_1, x_2)\in \mathbb {R}^2\) the coordinates in the plane \( z = 0\), denoted by \(\alpha _1\), and let \(\mathcal {S}\) be a bounded source domain in the plane \(\alpha _1\). The source \(\mathcal {S}\) emits parallel light rays which propagate in the positive z-direction. The emittance, i.e., luminous flux per unit area (for an introduction to photometry quantities see e.g. [24, p. 7–9]), of the source is given by \(f(\texttt {x})~ [{\mathrm {lm/m}}^{2}],~ \texttt {x}\in \mathcal {S} \), where f is a non-negative integrable function on the domain \(\mathcal {S}\). The target at a distance \(\ell >0\) from the plane \(\alpha _1\) is denoted by \(\mathcal {T}\).
The incoming light rays are refracted at the first lens surface \(\mathcal {L}_1\), propagate through the lens and are refracted again at the second lens surface \(\mathcal {L}_2\), to create a parallel bundle of light rays in the positive z-direction. The index of refraction of the lens is \(n>1\), and the surrounding medium is air with refractive index equal to unity. The lens surfaces are defined as \(z \equiv u_1(\texttt {x})\), \(\texttt {x} \in \mathcal {S}\) and \( w \equiv \ell -z = u_2(\texttt {y})\), \(\texttt {y}\in \mathcal {T}\), respectively, where \(\texttt {y} = (y_1, y_2)\in \mathbb {R}^2\) are the Cartesian coordinates of the target plane \(\alpha _2\).
The goal is to design a lens system such that, after two refractions, the rays form a parallel beam propagating in the positive z-direction and provide a prescribed illuminance \(g(\texttt {y}) ~ [{\mathrm {lm/m}}^{2}]\) at the plane \( \alpha _2 : z = \ell \), where \(g>0\) is a positive integrable function on the domain \(\mathcal {T}\). It is assumed that both \(\mathcal {L}_1\) and \(\mathcal {L}_2\) are perfect lens surfaces and no energy is lost in the refraction.

2.1 Geometrical Formulation of the Freeform Lens

In this section, we first give an expression for the ray-trace map, and secondly we derive a mathematical formulation for the location of the freeform surfaces using the laws of geometrical optics.
The mapping \(\varvec{m}\) can be derived by tracing a typical ray through the optical system. Let us consider a ray emitted from a position \(\texttt {x}\in \mathcal {S}\) on the source and propagating in the positive z-direction, and let \({\hat{\varvec{s}}}\) be the unit direction of the incident ray. The ray strikes the first lens surface \(\mathcal {L}_1\), is refracted in the direction \({\hat{\varvec{t}}}\), strikes the second lens surface \(\mathcal {L}_2\), and is refracted again, now in the direction \({\hat{\varvec{s}}}\). The unit surface normal of the first lens surface \(\mathcal {L}_1\), directed towards the light source, is given by
$$\begin{aligned} {\hat{\varvec{n}}}_1 = \frac{(\nabla u_1, -1)}{\sqrt{|\nabla u_1|^2 +1}}. \end{aligned}$$
(4)
Throughout this article, we use the convention that a hat denotes a unit vector. According to Snell’s law [24, 25], the direction \(\hat{\varvec{t}} = \hat{\varvec{t}}(\texttt {x})\) of the refracted ray can be expressed as
$$\begin{aligned} {\hat{\varvec{t}}} = \eta {\hat{\varvec{s}}} + F(|\nabla u_1|;\eta ){\hat{\varvec{n}}}_1, \end{aligned}$$
(5)
where \(\eta = 1/n < 1 \) with n the refractive index of the lens and
$$\begin{aligned} F(z;\eta ) = \frac{1}{\sqrt{z^2+1}}\Big [ \eta - \sqrt{1 + (1-\eta ^2)z^2}\Big ]. \end{aligned}$$
(6)
If we write \({\hat{\varvec{t}}} = (t_1,t_2,t_3)^T\), then the first two components of \({\hat{\varvec{t}}}\) can be expressed in terms of the third component as
$$\begin{aligned} \begin{pmatrix} t_1 \\ t_2 \end{pmatrix} = (\eta -t_3) \nabla u_1. \end{aligned}$$
(7)
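For illustration, the refracted direction defined by (4)–(6) is straightforward to evaluate numerically. The following is a minimal Python/NumPy sketch (a helper of our own, not part of the paper's Matlab implementation); it assumes the incident direction is the positive z-axis, as above.

```python
import numpy as np

def refracted_direction(grad_u1, n):
    """Direction t-hat of the ray refracted at L1, Eqs. (4)-(6).

    grad_u1 : array of shape (..., 2) containing nabla u_1 at the source points,
    n       : refractive index of the lens (n > 1).
    """
    eta = 1.0 / n
    z2 = np.sum(grad_u1**2, axis=-1, keepdims=True)                         # |nabla u_1|^2
    F = (eta - np.sqrt(1.0 + (1.0 - eta**2) * z2)) / np.sqrt(z2 + 1.0)      # Eq. (6)
    n1_hat = np.concatenate([grad_u1, -np.ones_like(z2)], axis=-1) / np.sqrt(z2 + 1.0)  # Eq. (4)
    s_hat = np.array([0.0, 0.0, 1.0])                                       # incident direction
    return eta * s_hat + F * n1_hat                                         # Eq. (5)
```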
The image on the target of the point \(\texttt {x}\in \mathcal {S}\) is the point \(\texttt {y} \in \mathcal {T}\) under the ray trace mapping \(\varvec{m}\), i.e., \(\texttt {y} = \varvec{m}(\texttt {x}), \texttt {x}\in \mathcal {S}\). This mapping can be obtained by the projection of \({\hat{\varvec{t}}}\) on the plane \(\alpha _1\), i.e.,
$$\begin{aligned} \varvec{m}(\texttt {x}) = \texttt {x} + \begin{pmatrix} t_1 \\ t_2 \end{pmatrix} d(\texttt {x}), \end{aligned}$$
(8)
where \(d(\texttt {x})\) is the distance between surfaces \(\mathcal {L}_1\) and \(\mathcal {L}_2\) along the ray refracted in the direction \({\hat{\varvec{t}}}(\texttt {x})\). The distance \(d(\texttt {x})\) between the lens surfaces can be obtained using properties of geometrical optics: The total optical path length \(L(\texttt {x})\) corresponding to the ray associated with a point \(\texttt {x} \in \mathcal {S}\), is given by
$$\begin{aligned} L(\texttt {x}) = u_1(\texttt {x}) + nd(\texttt {x}) + u_2(\texttt {y}). \end{aligned}$$
(9)
The theorem of Malus and Dupin (the principle of equal optical path lengths) states that the total optical path length between any two orthogonal wavefronts is the same for all rays [26, p. 130]. As we deal with two parallel beams of light rays, the wavefronts coincide with the planes \(\alpha _1\) and \(\alpha _2\). Therefore, the total optical path length is independent of the position vector \(\texttt {x}\), i.e., \(L(\texttt {x}) = L\). The horizontal distance \(\ell \) between the source and the target plane is given by
$$\begin{aligned} \ell = u_1(\texttt {x}) + ({\hat{\varvec{s}}}\varvec{\cdot }{\hat{\varvec{t}}})d(\texttt {x}) + u_2(\texttt {y}). \end{aligned}$$
(10)
Subtracting Eq. (10) from Eq. (9), and using Eq. (5), we obtain the following expression
$$\begin{aligned} d(\texttt {x}) = \frac{\beta }{n-t_3}, \end{aligned}$$
(11)
where \(\beta = L-\ell \) is the “reduced” optical path length. Substituting (7) and (11) in (8), we have
$$\begin{aligned} \varvec{m}(\texttt {x}) = \texttt {x} + \beta \frac{\eta -t_3}{n-t_3}\, \nabla u_1(\texttt {x}). \end{aligned}$$
(12)
Now, substituting \(t_3\) in the above equation from the law of refraction (5), the mapping \(\varvec{m}\) is given by the relation
$$\begin{aligned} \varvec{m}(\texttt {x}) = \texttt {x} - \frac{\beta \nabla u_1(\texttt {x})}{\sqrt{n^2 + (n^2 -1) |\nabla u_1|^2} } . \end{aligned}$$
(13)
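As an illustration, the map (13) is easy to evaluate once a (numerical) gradient of \(u_1\) is available. A minimal NumPy sketch, with names of our own choosing, reads:

```python
import numpy as np

def ray_trace_map(x, grad_u1, beta, n):
    """Ray-trace map m(x) of Eq. (13).

    x, grad_u1 : arrays of shape (..., 2) with source points and nabla u_1,
    beta       : reduced optical path length L - ell,
    n          : refractive index of the lens.
    """
    denom = np.sqrt(n**2 + (n**2 - 1.0) * np.sum(grad_u1**2, axis=-1, keepdims=True))
    return x - beta * grad_u1 / denom
```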
Next, we derive a mathematical expression for the location of the lens surfaces. An alternative expression for the distance d reads
$$\begin{aligned} d^2 = \big ( \ell - u_1(\texttt {x}) - u_2(\texttt {y}) \big )^2 + |\texttt {x} - \texttt {y}|^2 . \end{aligned}$$
(14)
Thus, from Eqs. (9) and (14), we obtain
$$\begin{aligned} n^2 \big ( \ell - u_1(\texttt {x}) - u_2(\texttt {y}) \big )^2 + n^2 |\texttt {x} - \texttt {y}|^2 = \big (L - u_1(\texttt {x}) - u_2(\texttt {y})\big )^2 , \end{aligned}$$
which can be rewritten as
$$\begin{aligned} \bigg [u_1(\texttt {x}) + u_2(\texttt {y}) + \frac{L-n^2 \ell }{n^2-1} \bigg ]^2 + \frac{n^2}{n^2 -1} |\texttt {x} - \texttt {y}|^2 = \bigg ( \frac{n\beta }{n^2-1}\bigg )^2 , \end{aligned}$$
and after elementary algebraic derivations, we obtain
$$\begin{aligned} u_1(\texttt {x}) + u_2(\texttt {y}) = \frac{n^2 \ell -L}{n^2-1} \pm \frac{n}{n^2-1} \sqrt{\beta ^2 - (n^2-1) |\texttt {x} - \texttt {y}|^2 }. \end{aligned}$$
(15)
This is a mathematical expression for the location of the lens surfaces, but the sign in front of the square root is not yet determined. To determine this we proceed as follows. Using Eqs. (9) (with \(L(\texttt {x}) = L\)) and (14), we can show that \(\beta ^2 - (n^2-1)|\texttt {x} - \texttt {y}|^2 = ( n\beta - d(n^2-1))^2 \ge 0\). Substituting this expression into Eq. (15), we obtain
$$\begin{aligned} u_1(\texttt {x}) + u_2(\texttt {y}) = \frac{n^2 \ell -L}{n^2-1} \pm \frac{n}{n^2-1}\big |n\beta - d(n^2-1)\big | . \end{aligned}$$
(16)
First, we check the sign of the expression \( n\beta - d(n^2-1)\). Substituting d from Eq. (11), the expression becomes
$$\begin{aligned} n\beta - d(n^2-1) = \beta \frac{1-n t_3}{n-t_3}. \end{aligned}$$
Since \(\beta >0\) and \(n-t_3>0\), it remains to check the sign of \(1-nt_3\). Using the vectorial form of the law of refraction (5) and expression (6), we can write
$$\begin{aligned} 1-nt_3 = \frac{1 - \sqrt{n^2 + (n^2- 1)|\nabla u_1|^2} }{|\nabla u_1|^2+1} < 0, \end{aligned}$$
(17)
as \(n>1\). Thus we have to choose the negative sign in front of the absolute value in Eq. (16). Hence, we obtain
$$\begin{aligned} u_1(\texttt {x}) + u_2(\texttt {y}) = \frac{n^2 \ell -L}{n^2-1} \pm \frac{nd(n^2-1)-n^2\beta }{n^2-1}. \end{aligned}$$
Substituting d from relation (9), the above expression becomes
$$\begin{aligned} u_1(\texttt {x}) + u_2(\texttt {y}) = \frac{n^2 \ell -L}{n^2-1}\pm \Big ( L- (u_1(\texttt {x}) + u_2(\texttt {y})) - \frac{n^2}{n^2-1} \beta \Big ). \end{aligned}$$
(18)
In the above equation, the right hand side equals the left hand side identically for the minus sign; therefore, we have to choose the minus sign in (15). Thus the mathematical expression for the lens surfaces becomes
$$\begin{aligned} \begin{aligned} u_1(\texttt {x}) + u_2(\texttt {y})&= c(\texttt {x}, \texttt {y}),\\ c(\texttt {x}, \texttt {y})&= \ell - \frac{\beta }{n^2-1} - \frac{n}{n^2-1} \sqrt{\beta ^2 - (n^2-1) |\texttt {x} - \texttt {y}|^2 }. \end{aligned} \end{aligned}$$
(19)
These kinds of freeform optical design problems are closely related to the mass transport problem [10, 27]. The right hand side function \(c(\texttt {x},\texttt {y})\) is known as the cost function in OMT theory.
To conclude, we have derived a mathematical formulation representing the freeform lens optical system which is given in (19). Also, we obtained the expression (13) for the ray-trace mapping \(\varvec{m}\). Next, we formulate a second order partial differential equation for the freeform lens.
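For later reference, the cost function (19) is also easy to evaluate numerically. The sketch below is our own NumPy helper (not part of the paper's Matlab implementation) and assumes \(\beta ^2 \ge (n^2-1)|\texttt {x}-\texttt {y}|^2\).

```python
import numpy as np

def cost(x, y, beta, n, ell):
    """Non-quadratic cost function c(x, y) of Eq. (19)."""
    r2 = np.sum((x - y)**2, axis=-1)                 # |x - y|^2
    return (ell - beta / (n**2 - 1.0)
            - n / (n**2 - 1.0) * np.sqrt(beta**2 - (n**2 - 1.0) * r2))
```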

2.2 Energy Conservation for the Freeform Lens

Recall that \(f\ge 0\) and \(g>0\) are integrable functions and no energy is lost in the light transfer process. Thus energy conservation is given by
$$\begin{aligned} \iint _\mathcal {S} f(\texttt {x})\mathrm {d}{} \texttt {x} = \iint _\mathcal {T} g(\texttt {y})\mathrm {d}{} \texttt {y}. \end{aligned}$$
(20)
The key tool for the design of such an optical system is to find a mapping \(\texttt {y} = \varvec{m}(\texttt {x}): \mathcal {S} \rightarrow \mathcal {T} \) that satisfies the energy conservation constraint (20) for each measurable set \(\mathcal {A}\subset \mathcal {S}\), i.e.,
$$\begin{aligned} \iint _\mathcal {A} f(\texttt {x})\mathrm {d}{} \texttt {x} = \iint _{\varvec{m}(\mathcal {A})} g(\texttt {y})\mathrm {d}{} \texttt {y}, \end{aligned}$$
(21)
and after a change of variables the constraint becomes
$$\begin{aligned} f(\texttt {x}) = g(\varvec{m}(\texttt {x}))|\det (\mathrm {D}\varvec{m}(\texttt {x}))|, \quad \forall \texttt {x}\in \mathcal {S}, \end{aligned}$$
(22)
where \(\mathrm {D}\varvec{m}\) is the Jacobian of the mapping \(\varvec{m}\), which measures the expansion/contraction of a tube of rays due to the two refractions. The accompanying boundary condition is derived from the condition that all the light from the source domain \(\mathcal {S}\) must be transferred into the target domain \(\mathcal {T}\), i.e.,
$$\begin{aligned} \varvec{m}(\partial \mathcal {S}) = \partial \mathcal {T}, \end{aligned}$$
(23)
stating that the boundary of the source \(\mathcal {S}\) is mapped to the boundary of the target \(\mathcal {T}\). This is a consequence of the edge ray principle [28].
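On a grid, the constraint (22) can be checked a posteriori by approximating the Jacobian with central differences. The sketch below is purely illustrative (np.gradient falls back to one-sided differences at the boundary), with array conventions of our own choosing.

```python
import numpy as np

def pushforward_density(M, g, h1, h2):
    """Approximate g(m(x)) |det Dm(x)| of Eq. (22) on the grid.

    M      : array of shape (N1, N2, 2) with the mapping m at the grid points,
    g      : callable returning the target illuminance at points of shape (..., 2),
    h1, h2 : grid spacings in the x_1- and x_2-direction.
    """
    dM_dx1 = np.gradient(M, h1, axis=0)              # first column of Dm
    dM_dx2 = np.gradient(M, h2, axis=1)              # second column of Dm
    detDm = dM_dx1[..., 0] * dM_dx2[..., 1] - dM_dx2[..., 0] * dM_dx1[..., 1]
    return g(M) * np.abs(detDm)                      # should approximate f(x)
```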
Next, we derive an MA-type equation for the freeform lens using the energy conservation constraint (22) and the mathematical formulation (19) for the location of the lens surfaces. We assume that both lens surfaces \(u_1\) and \( u_2\) are either c-convex or c-concave functions. The lens surfaces \(u_1\) and \(u_2\) are c-convex if
$$\begin{aligned} u_1(\texttt {x})&= \max _{\texttt {y}\in \mathcal {T}}\lbrace c(\texttt {x},\texttt {y})-u_2(\texttt {y}) \rbrace \quad \forall \quad \texttt {x}\in \mathcal {S}, \end{aligned}$$
(24a)
$$\begin{aligned} u_2(\texttt {y})&= \max _{\texttt {x}\in \mathcal {S}}\lbrace c(\texttt {x},\texttt {y})-u_1(\texttt {x}) \rbrace \quad \forall \quad \texttt {y}\in \mathcal {T} , \end{aligned}$$
(24b)
alternatively, these are c-concave if
$$\begin{aligned} u_1(\texttt {x})&= \min _{\texttt {y}\in \mathcal {T}}\lbrace c(\texttt {x},\texttt {y})-u_2(\texttt {y}) \rbrace \quad \forall \quad \texttt {x}\in \mathcal {S}, \end{aligned}$$
(25a)
$$\begin{aligned} u_2(\texttt {y})&= \min _{\texttt {x}\in {\mathcal {S}}}\lbrace c(\texttt {x},\texttt {y})-u_1(\texttt {x}) \rbrace \quad \forall \quad \texttt {y}\in \mathcal {T} . \end{aligned}$$
(25b)
For a continuously differentiable function \(c\in C^1(\mathcal {S}\times \mathcal {T})\), the c-convex/concave functions \(u_1\) and \(u_2\) are Lipschitz continuous [27, 29], and the mapping \(\texttt {y} = \varvec{m}(\texttt {x})\) is implicitly given by the relation
$$\begin{aligned} \nabla _{\texttt {x}}u_1(\texttt {x}) = \nabla _{\texttt {x}}c(\texttt {x},\varvec{m}(\texttt {x})), \end{aligned}$$
(26)
which is a necessary condition for (24b) and (25b), and holds under the condition that the Jacobi matrix \(\varvec{C} = \mathrm {D}_{\texttt {x}{} \texttt {y}} c\) defined by
$$\begin{aligned} \varvec{C} = \begin{pmatrix} c_{11} &{} c_{12} \\ c_{21} &{} c_{22} \end{pmatrix} = \begin{pmatrix} \frac{\partial ^2 c}{\partial x_1 \partial y_1} &{} \frac{\partial ^2 c}{\partial x_1 \partial y_2} \\ \frac{\partial ^2 c}{\partial x_2 \partial y_1} &{} \frac{\partial ^2 c}{\partial x_2 \partial y_2} \end{pmatrix} , \end{aligned}$$
(27)
is invertible. For our optical problem the mapping \(\varvec{m}\) given by relation (13) satisfies relation (26) indeed.
The matrix \(\varvec{C}\) is symmetric and negative definite, which is a consequence of the fact that the function c depends on \(\texttt {x}\) and \(\texttt {y}\) only through \(|\texttt {x}-\texttt {y}|\). This can be verified as follows: let us rewrite the cost function (19) as
$$\begin{aligned} c(\texttt {x}, \texttt {y})&= \ell - \frac{\beta }{n^2 - 1} + \tilde{c}(\texttt {x}, \texttt {y}), \end{aligned}$$
(28a)
$$\begin{aligned} \tilde{c}(\texttt {x}, \texttt {y})&= - \frac{n}{n^2-1} \sqrt{\beta ^2 - (n^2-1) |\texttt {x} - \texttt {y}|^2}. \end{aligned}$$
(28b)
By differentiating (28) with respect to \(\texttt {x}\) and \(\texttt {y}\), we obtain
$$\begin{aligned} \nabla _\texttt {x} c(\texttt {x}, \texttt {y})&= -\frac{n^2}{n^2-1} \frac{1}{\tilde{c}} (\texttt {x} - \texttt {y}), \end{aligned}$$
(29a)
$$\begin{aligned} \nabla _\texttt {y} c(\texttt {x}, \texttt {y})&= \frac{n^2}{n^2-1} \frac{1}{\tilde{c}} (\texttt {x} - \texttt {y}), \end{aligned}$$
(29b)
which gives
$$\begin{aligned} \nabla _\texttt {x} c(\texttt {x}, \texttt {y}) + \nabla _\texttt {y} c(\texttt {x}, \texttt {y}) = 0. \end{aligned}$$
(29c)
Differentiating one more time with respect to \(\texttt {x}\), we conclude that
$$\begin{aligned} \varvec{C}= \mathrm {D}_{\texttt {x}{} \texttt {y}}c = -\mathrm {D}_{\texttt {x}{} \texttt {x}}c. \end{aligned}$$
(30)
Evaluating all derivatives, we obtain the following expression
$$\begin{aligned} \varvec{C}= \frac{n^2}{n^2-1} \frac{\varvec{I}}{\tilde{c}} + \Big (\frac{n^2}{n^2-1} \Big )^2 \frac{1}{\tilde{c}^3} \begin{pmatrix} (x_1 - y_1)^2 &{} (x_1 -y_1)(x_2-y_2) \\ (x_1 -y_1)(x_2-y_2) &{} (x_2 - y_2)^2 \end{pmatrix}. \end{aligned}$$
(31)
We can rewrite the above expression as follows:
$$\begin{aligned} \varvec{C}= \frac{\gamma ^2}{\tilde{c}^3} \bigg ( \frac{\tilde{c}^2}{\gamma } \varvec{I}+ (\texttt {x}-\texttt {y}) (\texttt {x}-\texttt {y})^T \bigg ), \end{aligned}$$
(32)
where \(\gamma = n^2 /(n^2-1) > 0\). Since \(\tilde{c} < 0\), we conclude that \(\det (\varvec{C}) >0\) and \({{\,\mathrm{tr}\,}}(\varvec{C}) < 0\), hence the matrix \(\varvec{C}\) is symmetric negative definite.
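These sign properties are easy to verify numerically from (32). The following sketch (our own NumPy helper, evaluated at a single, arbitrarily chosen point pair with illustrative parameter values) does so.

```python
import numpy as np

def cost_matrix_C(x, y, beta, n):
    """Mixed Hessian C = D_xy c of the cost (19), evaluated via Eq. (32)."""
    gamma = n**2 / (n**2 - 1.0)
    r = x - y
    c_tilde = -n / (n**2 - 1.0) * np.sqrt(beta**2 - (n**2 - 1.0) * np.dot(r, r))  # Eq. (28b)
    return gamma**2 / c_tilde**3 * (c_tilde**2 / gamma * np.eye(2) + np.outer(r, r))

# sanity check of det(C) > 0 and tr(C) < 0
C = cost_matrix_C(np.array([0.3, -0.1]), np.array([0.5, 0.4]), beta=3 * np.pi, n=1.5)
assert np.linalg.det(C) > 0 and np.trace(C) < 0
```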
Since the function \(c(\texttt {x}, \texttt {y})\) defined in (19) is continuously differentiable, from relation (26), we deduce
$$\begin{aligned} \varvec{C} \mathrm {D}\varvec{m}(\texttt {x}) = \mathrm {D}^2 u_1(\texttt {x}) - \mathrm {D}_{\texttt {x}{} \texttt {x}} c \equiv \varvec{P}, \end{aligned}$$
(33)
where \(\mathrm {D}^2 u_1\) is the Hessian of \(u_1\). The matrix \(\varvec{P} = \mathrm {D}^2 u_1(\texttt {x}) - \mathrm {D}_{\texttt {x}{} \texttt {x}} c\) is negative semi-definite for a c-concave pair \((u_1, u_2)\) and positive semi-definite for a c-convex pair \((u_1, u_2)\). In the following, we discuss the convex case, thus we require the matrix \(\varvec{P}\) to be positive semi-definite. Substituting \(\mathrm {D}\varvec{m}\) from (33) into the energy conservation condition (22), we obtain
$$\begin{aligned} \frac{\det (\varvec{P}(\texttt {x}))}{\det (\varvec{C}(\texttt {x}, \varvec{m}(\texttt {x})))} = \frac{f(\texttt {x})}{g(\varvec{m}(\texttt {x}))}, \quad \forall \quad \texttt {x}\in \mathcal {S}. \end{aligned}$$
(34)
We know that the \(2\times 2\) matrix \(\varvec{P}\) is positive semi-definite if and only if
$$\begin{aligned} {{\,\mathrm{tr}\,}}(\varvec{P})\ge 0 \quad \text {and} \quad \det (\varvec{P}) \ge 0. \end{aligned}$$
(35)
Because \(\det (\varvec{C}) > 0\) and the right hand side functions \(f \ge 0, g>0\) in Eq. (34), it is obvious that \(\det (\varvec{P}) \ge 0\). So, the only requirement left is \({{\,\mathrm{tr}\,}}(\varvec{P}) \ge 0\) for convex optical surfaces.
In the following section, we give a detailed description of the ELS-algorithm to solve the MA-equation (34) with the boundary condition (23) and constraints (35). The method presented here is based on [7]. Compared to [7] we deal with a non-quadratic cost function that results in the presence of the matrix \(\varvec{C}\) in (34).

3 Numerical Algorithm

Prins et al. [7] introduced a least-squares method to compute single freeform surfaces governed by a quadratic cost function. Further, we applied the method to design a two-reflector optical system [2], which is also a quadratic cost problem. Our version of the least-squares method was inspired by publications by Caboussat et al. [30, 31], who developed a least-squares method for the Monge–Ampère–Dirichlet problem. An extension of their method to the three-dimensional equation is presented in [32].
In this section, we extend the least-squares method to compute the freeform surfaces of a lens characterized by a non-quadratic cost function. The ELS-method is a two-stage procedure. In the first stage we calculate the optimal mapping by minimizing three functionals iteratively, and in the second stage we compute the freeform surfaces from the mapping in the least squares sense.

3.1 First Stage: Calculation of the Mapping

First, we calculate the mapping \(\varvec{m}\) using the least-squares method for the lens optical system as follows: we enforce the equality \(\varvec{C}\mathrm {D}\varvec{m} = \varvec{P}\) by minimizing the following functional
$$\begin{aligned} J_\mathrm {I}(\varvec{m}, \varvec{P}) = \frac{1}{2} \iint _\mathcal {S} ||\varvec{C}\mathrm {D}\varvec{m} - \varvec{P}||^2 \mathrm {d}{} \texttt {x}. \end{aligned}$$
(36)
The norm used in this functional is the Frobenius norm, which is defined as follows. Let \(\varvec{A:B}\) denote the Frobenius inner product of the matrices \(\varvec{A} = (a_{ij})\) and \(\varvec{B} = (b_{ij})\), defined by
$$\begin{aligned} \varvec{A:B} = \sum _{i,j} a_{ij} b_{ij} , \end{aligned}$$
(37)
the Frobenius norm is then defined as \(||\varvec{A}|| = \sqrt{\varvec{A:A}}\). Next, we address the boundary by minimizing the functional
$$\begin{aligned} J_\mathrm {B}(\varvec{m}, \varvec{b}) = \frac{1}{2} \oint _{\partial \mathcal {S}} |\varvec{m} - \varvec{b}|^2 \mathrm {d}s. \end{aligned}$$
(38)
We combine the functionals \(J_\mathrm {I}\) for the interior and \(J_\mathrm {B}\) for the boundary domain by a weighted average:
$$\begin{aligned} J(\varvec{m}, \varvec{P}, \varvec{b}) = \alpha J_\mathrm {I}(\varvec{m}, \varvec{P}) + (1-\alpha )J_\mathrm {B}(\varvec{m}, \varvec{b}). \end{aligned}$$
(39)
The parameter \(\alpha \) \((0<\alpha <1)\) controls the weight of the first functional compared to the second functional. The variables \(\varvec{b}, \varvec{P}\), and \(\varvec{m}\) are elements of the following spaces
$$\begin{aligned} \mathcal {B}&= \lbrace \varvec{b}\in [C(\partial \mathcal {S})]^2 ~ | ~ \varvec{b}(\texttt {x}) \in \partial \mathcal {T} \rbrace , \end{aligned}$$
(40a)
$$\begin{aligned} \mathcal {P}(\varvec{m})&= \bigg \lbrace \varvec{P} \in [C^1(\mathcal {S})]^{2\times 2} ~\big |~ \frac{\det (\varvec{P})}{\det (\varvec{C}(\varvec{\cdot }, \varvec{m}))} = \frac{f}{g(\varvec{m})} \bigg \rbrace , \end{aligned}$$
(40b)
$$\begin{aligned} \mathcal {M}&= [C^2(\mathcal {S})]^2 , \end{aligned}$$
(40c)
respectively. The minimizer gives us the mapping \(\varvec{m}\) which is implicitly related to the surface function \(u_1\). We calculate this minimizer by repeatedly minimizing over the three spaces separately. We start with an initial guess \(\varvec{m}^0\), which will be specified shortly, and we calculate the matrix \(\varvec{C}(\texttt {x}, \varvec{m}^0)\) at the initial guess \(\varvec{m}^0\). Subsequently, we perform the iteration
$$\begin{aligned} \varvec{b}^{n+1}&= \underset{\varvec{b} \in \mathcal {B}}{{{\,\mathrm{argmin}\,}}} ~ J_\mathrm {B}(\varvec{m}^n, \varvec{b}), \end{aligned}$$
(41a)
$$\begin{aligned} \varvec{P}^{n+1}&= \underset{\varvec{P} \in \mathcal {P}(\varvec{m}^n)}{{{\,\mathrm{argmin}\,}}} J_\mathrm {I}(\varvec{m}^n, \varvec{P}), \end{aligned}$$
(41b)
$$\begin{aligned} \varvec{m}^{n+1}&= \underset{\varvec{m} \in \mathcal {M}}{{{\,\mathrm{argmin}\,}}} ~ J(\varvec{m}, \varvec{P}^{n+1}, \varvec{b}^{n+1}). \end{aligned}$$
(41c)
Before proceeding to the next iteration, we recompute the matrix \(\varvec{C}(\texttt {x}, \varvec{m}^{n+1})\) with the new mapping.
We initialize our minimization procedure by constructing an initial guess \(\varvec{m}^0\) which maps a bounding box of the source area \(\mathcal {S}\) to a bounding box of the target area \(\mathcal {T}\). Without loss of generality, we assume the smallest bounding boxes of the source and the target are rectangles and denote them by \([a_{\min } , a_{\max }] \times [b_{\min }, b_{\max }]\) and \([c_{\min }, c_{\max }]\times [d_{\min }, d_{\max }]\), respectively. Then the initial guess reads:
$$\begin{aligned} m_1 ^0&= \frac{x_1 - a_{\min }}{a_{\max } - a_{\min }} c_{\min } + \frac{a_{\max } - x_1}{a_{\max } - a_{\min }} c_{\max } , \end{aligned}$$
(42a)
$$\begin{aligned} m_2 ^0&= \frac{x_2 - b_{\min }}{b_{\max } - b_{\min }} d_{\min } + \frac{b_{\max } - x_2}{b_{\max } - b_{\min }} d_{\max }. \end{aligned}$$
(42b)
Note that the corresponding Jacobi matrix \(\mathrm {D}\varvec{m}^0\) of the initial guess is symmetric (in fact diagonal) and negative definite. The matrix \(\varvec{C}\) is also negative definite; moreover, from relation (32) we conclude that \(c_{11}, c_{22} <0\), which implies that the matrix \(\varvec{P} = \varvec{C}(\texttt {x},\varvec{m}^0)\mathrm {D}\varvec{m}^0\) is positive definite. Thus this initialization satisfies our requirement \({{\,\mathrm{tr}\,}}(\varvec{P}) \ge 0\).
Obviously, the minimization steps in (41), as well as the computation of the optical surfaces, are carried out numerically. To that purpose we discretize the source \(\mathcal {S}\) with a standard rectangular \(N_1\times N_2\) grid for some \(N_1, N_2 \in \mathbb {N}\), so the grid points \(\texttt {x}_{ij} = (x_{1,i}, x_{2,j})\) are defined as
$$\begin{aligned} x_{1,i}&= a_{\min } + (i-1)h_1, \quad h_1 = \frac{a_{\max }- a_{\min }}{N_1 -1}, \quad i = 1,\ldots , N_1, \end{aligned}$$
(43a)
$$\begin{aligned} x_{2,j}&= b_{\min } + (j-1)h_2, \quad h_2 = \frac{b_{\max }- b_{\min }}{N_2 -1}, \quad j = 1,\ldots , N_2. \end{aligned}$$
(43b)
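A minimal NumPy sketch of the grid (43) and the initial guess (42) is given below; the box limits are illustrative and all names are our own.

```python
import numpy as np

# bounding boxes of source and target (illustrative values)
a_min, a_max, b_min, b_max = -1.0, 1.0, -1.0, 1.0
c_min, c_max, d_min, d_max = -1.0, 1.0, -1.0, 1.0
N1 = N2 = 200

x1 = np.linspace(a_min, a_max, N1)                   # grid (43a)
x2 = np.linspace(b_min, b_max, N2)                   # grid (43b)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")

# initial mapping (42): bounding box of S onto bounding box of T
M1 = (X1 - a_min) / (a_max - a_min) * c_min + (a_max - X1) / (a_max - a_min) * c_max
M2 = (X2 - b_min) / (b_max - b_min) * d_min + (b_max - X2) / (b_max - b_min) * d_max
M0 = np.stack([M1, M2], axis=-1)                     # shape (N1, N2, 2)
```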
We start the iteration process (41) using the initial guess \(\varvec{m}^0\). Each iteration consists of four steps: we perform the three minimization steps (41a)–(41c), and fourthly we update the matrix \(\varvec{C}\). In this article, we give a detailed description of the minimization steps (41b) and (41c). The first minimization step (41a) is simple and direct, and is performed point-wise because no derivative of \(\varvec{b}\) with respect to \(\texttt {x}\) appears in the functional; more details can be found in [7].
Finally, from the converged mapping \(\varvec{m}\), we compute the first lens surface \(u_1\) via relation (26) in a least-squares sense, and the second lens surface \(u_2\) from relation (19), see Sect. 3.2.
Minimizing procedure for \(\varvec{P}\)
We assume \(\varvec{m}\) fixed and minimize \(J_\mathrm {I}(\varvec{m}, \varvec{P})\) over the matrices \(\varvec{P}\) subject to condition (34). Since the integrand of \(J_\mathrm {I}(\varvec{m},\varvec{P})\) does not contain derivatives of \(\varvec{P}\), the minimization procedure can be done pointwise. So, we need to minimize \(||\varvec{C}\varvec{D} - \varvec{P}||\) for each grid point \(\texttt {x}_{ij}\in \mathcal {S}\), where \(\varvec{D}\) is the central difference approximation of \(\mathrm {D}\varvec{m}\). Let us define
$$\begin{aligned} \varvec{P} = \begin{pmatrix} p_{11} &{} p_{12} \\ p_{12} &{} p_{22} \end{pmatrix}, \quad \varvec{D} = \begin{pmatrix} d_{11} &{} d_{12} \\ d_{21} &{} d_{22} \end{pmatrix}, \quad \varvec{Q} = \varvec{CD} = \begin{pmatrix} q_{11} &{} q_{12} \\ q_{12} &{} q_{22} \end{pmatrix}, \end{aligned}$$
(44a)
with
$$\begin{aligned} d_{11} = \delta _{x_1} m_1,\quad d_{12} = \delta _{x_2} m_1,\quad d_{21} = \delta _{x_1} m_2,\quad d_{22} =\delta _{x_2} m_2, \end{aligned}$$
(44b)
where \(\delta _{x_1}\) and \(\delta _{x_2}\) are the central difference approximations of \(\partial /\partial x_1\) and \(\partial /\partial x_2\), respectively. Note that the matrices \(\varvec{C}, \varvec{D}, \varvec{P}\) and \(\varvec{Q}\) all depend on \(\texttt {x}_{ij}\); for the sake of brevity we omit \(\texttt {x}_{ij}\). Moreover, we want to avoid crossing grid lines, i.e., intersection of images of grid lines in \(\mathcal {S}\), and for that reason we require \(d_{11}, d_{22} < 0\), consistent with the negative definite Jacobi matrix of the initial guess (42). This can be achieved by imposing
$$\begin{aligned} d_{11} = \min (\delta _{x_1} m_1, -\varepsilon ), \quad d_{22} = \min (\delta _{x_2} m_2, -\varepsilon ), \end{aligned}$$
(45)
for a threshold value \(\varepsilon >0\). This implies that \(m_1(\texttt {x}_{i+1,j}) < m_1(\texttt {x}_{i,j})\) and \(m_2(\texttt {x}_{i,j+1}) < m_2(\texttt {x}_{i,j})\) for all \(\texttt {x}_{i,j} \in \mathcal {S}\), which assures that there is no crossing of grid lines. In our computations we choose \(\varepsilon = 10^{-8}\).
Note that the matrix \(\varvec{P}\) is symmetric but the matrix \(\varvec{D}\) need not be symmetric and \(d_{12}, d_{21} <0\) is possible. Next we define the function
$$\begin{aligned} H(p_{11}, p_{22}, p_{12}) = \frac{1}{2} ||\varvec{Q} - \varvec{P} ||^2 . \end{aligned}$$
(46)
Also, we define the matrix \(\varvec{Q}_\mathrm {S}\) as the symmetric part of the matrix \(\varvec{Q}\), i.e.,
$$\begin{aligned} \varvec{Q}_\mathrm {S} = \frac{1}{2}(\varvec{Q} + \varvec{Q}^T) = \begin{pmatrix} q_{11} &{} q_\mathrm {S} \\ q_\mathrm {S} &{} q_{22} \end{pmatrix}, \end{aligned}$$
(47)
with \(q_\mathrm {S} = \frac{1}{2}(q_{12}+q_{21})\). The function \(H_S\) corresponding to the symmetric matrix \(\varvec{Q}_S\) is defined as
$$\begin{aligned} H_\mathrm {S}(p_{11}, p_{22}, p_{12})&= \frac{1}{2} ||\varvec{Q}_\mathrm {S} - \varvec{P} ||^2 \nonumber \\&= H(p_{11}, p_{22}, p_{12}) - \frac{1}{4}(q_{12}-q_{21})^2. \end{aligned}$$
(48)
Since \((q_{12}-q_{21})^2\) is independent of \(p_{11}, p_{22}\) and \(p_{12}\), and because we are only interested in the minimizer \((p_{11}, p_{22}, p_{12})\) and not in its value \(H(p_{11}, p_{22}, p_{12})\), we minimize \(H_\mathrm {S}\) instead of H. For each grid point \(\texttt {x}_{ij} = (x_{1,i}, x_{2,j})\in \mathcal {S}\) we have the following quadratic minimization problem
$$\begin{aligned} \text {minimize}\quad&H_\mathrm {S}(p_{11}, p_{22}, p_{12}), \end{aligned}$$
(49a)
$$\begin{aligned} \text {subject to}\quad&\det (\varvec{P}) = \frac{f}{g} \det (\varvec{C}), \end{aligned}$$
(49b)
$$\begin{aligned}&{{\,\mathrm{tr}\,}}(\varvec{P}) \ge 0. \end{aligned}$$
(49c)
This problem can be solved analytically, and we will show that for given \(q_{11}, q_{22}, q_\mathrm {S}\) and f / g there exist at least one and at most four real solutions, see “Appendix A”. From these we have to select the ones that give rise to a positive semi-definite matrix \(\varvec{P}\), and we will also show that this is always possible. Finally, we compare the values of \(H_\mathrm {S}(p_{11}, p_{22}, p_{12})\) to find the global minimum.
The possible minimizers of (49) are obtained by introducing the Lagrangian function \(\varLambda \), defined as
$$\begin{aligned} \varLambda (p_{11}, p_{22}, p_{12}; \mu ) = \frac{1}{2} \big \Vert \varvec{Q}_\mathrm {S} - \varvec{P} \big \Vert ^2 + \mu \left( \det (\varvec{P}) -\frac{f}{g} \det (\varvec{C}) \right) , \end{aligned}$$
(50)
where \(\mu \) is the Lagrange multiplier. By setting all partial derivatives of \(\varLambda \) to 0 we find the critical points of \(\varLambda \) and this gives the following algebraic system
$$\begin{aligned} p_{11} + \lambda p_{22}&= q_{11}, \end{aligned}$$
(51a)
$$\begin{aligned} \lambda p_{11} + p_{22}&= q_{22}, \end{aligned}$$
(51b)
$$\begin{aligned} (1 - \lambda )p_{12}&= q_\mathrm {S}, \end{aligned}$$
(51c)
$$\begin{aligned} p_{11} p_{22} - p_{12}^2&= \frac{f}{g}\det (\varvec{C}), \end{aligned}$$
(51d)
where \(\lambda = \mu /\det (\varvec{C})\). The system (51a)–(51c) is linear in \(p_{11}, ~ p_{22}\) and \(p_{12}\), and is regular if \(\lambda \ne \pm 1\) (we discuss the singular cases in the “Appendix A”). In the case when \(\lambda \ne \pm 1\), we calculate the critical points by inverting the system, i.e., we express \(p_{11}, ~ p_{22}\) and \(p_{12}\) in terms of \(\lambda \) as
$$\begin{aligned} p_{11} = \frac{\lambda q_{22} - q_{11}}{\lambda ^2 -1}, \quad p_{22} = \frac{\lambda q_{11} - q_{22}}{\lambda ^2 -1}, \quad p_{12} = \frac{q_\mathrm {S}}{1-\lambda }. \end{aligned}$$
(52)
Substituting these expressions in Eq. (51d) gives the following quartic equation
$$\begin{aligned} F(\lambda )&= a_4 \lambda ^4 + a_2 \lambda ^2 + a_1 \lambda + a_0 = 0, \end{aligned}$$
(53a)
with coefficients given by
$$\begin{aligned} a_4&= \frac{f}{g}\det (\varvec{C}) \ge 0, \end{aligned}$$
(53b)
$$\begin{aligned} a_2&= -2 \frac{f}{g}\det (\varvec{C}) - \det (\varvec{Q}_\mathrm {S}) = -2 a_4 - \det (\varvec{Q}_\mathrm {S}), \end{aligned}$$
(53c)
$$\begin{aligned} a_1&= \Vert \varvec{Q}_\mathrm {S} \Vert ^2 \ge 0, \end{aligned}$$
(53d)
$$\begin{aligned} a_0&= \frac{f}{g}\det (\varvec{C}) -\det (\varvec{Q}_\mathrm {S}) = a_4 - \det (\varvec{Q}_\mathrm {S}). \end{aligned}$$
(53e)
Furthermore, from Eqs. (51a)–(51b) the condition (49c) becomes
$$\begin{aligned} {{\,\mathrm{tr}\,}}(\varvec{P}) = \frac{{{\,\mathrm{tr}\,}}(\varvec{Q}_\mathrm {S})}{1 + \lambda } \ge 0, \end{aligned}$$
(54)
and consequently, we need to select Lagrange multipliers that satisfy the above condition. It can be shown that the quartic Eq. (53) has at least two real roots, one of them less than \(-1\) and the other one greater than \(-1\) (see “Appendix A”). The convexity condition (54) can be satisfied by choosing the appropriate values of \(\lambda \) and \({{\,\mathrm{tr}\,}}(\varvec{Q}_\mathrm {S})\), and the minimizers of \(H_\mathrm {S}\) are given by (52).
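A compact NumPy sketch of this pointwise step is given below. It is our own illustration, assumes the regular case \(\lambda \ne \pm 1\) (the singular cases of “Appendix A” are not treated), and the function name is hypothetical.

```python
import numpy as np

def minimize_P_pointwise(QS, f_over_g, detC):
    """Pointwise solution of (49): QS is the symmetric part of C D (2x2),
    f_over_g = f/g and detC = det(C) at the grid point considered."""
    q11, q22, qS = QS[0, 0], QS[1, 1], QS[0, 1]
    a4 = f_over_g * detC                              # coefficients (53b)-(53e)
    a2 = -2.0 * a4 - np.linalg.det(QS)
    a1 = np.linalg.norm(QS, 'fro')**2
    a0 = a4 - np.linalg.det(QS)
    lams = np.roots([a4, 0.0, a2, a1, a0])            # quartic (53a), no cubic term
    best_P, best_val = None, np.inf
    for lam in lams[np.abs(lams.imag) < 1e-10].real:
        if min(abs(lam - 1.0), abs(lam + 1.0)) < 1e-12:
            continue                                  # singular cases: see Appendix A
        p11 = (lam * q22 - q11) / (lam**2 - 1.0)      # critical points (52)
        p22 = (lam * q11 - q22) / (lam**2 - 1.0)
        p12 = qS / (1.0 - lam)
        if p11 + p22 < 0.0:                           # convexity condition (54)
            continue
        P = np.array([[p11, p12], [p12, p22]])
        val = 0.5 * np.linalg.norm(QS - P, 'fro')**2  # H_S of Eq. (48)
        if val < best_val:
            best_P, best_val = P, val
    return best_P
```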
Minimizing procedure for \(\varvec{m}\)
In this section, we describe the minimization step (41c). The minimizing procedure for \(\varvec{m}\) differs from the procedure given in [7] because we have an extra matrix \(\varvec{C}\) in the functional \(J_\mathrm {I}\), which results in two coupled elliptic equations for the components of the mapping \(\varvec{m}\) instead of decoupled Poisson equations. We assume \(\varvec{P}\) and \(\varvec{b}\) are fixed, and minimize \(J(\varvec{m}, \varvec{P}, \varvec{b})\) over the functions \(\varvec{m}\in \mathcal {M}\) using calculus of variations, i.e., \(\varvec{P}\) and \(\varvec{b}\) are given in all grid points \(\texttt {x}_{ij}\in \mathcal {S}\). We want to compute \(\varvec{m}\) on the grid covering \(\mathcal {S}\). Here, we drop the indices n and \(n+1\) for ease of notation. In the calculations that follow, we use the identity for the Frobenius norm of matrices, i.e.,
$$\begin{aligned} \Vert \varvec{A} + \varvec{B} \Vert ^2= \Vert \varvec{A} \Vert ^2 + 2 \varvec{A}\varvec{:} \varvec{B} + \Vert \varvec{B} \Vert ^2 . \end{aligned}$$
(55)
The first variation of the functional J with respect to \(\varvec{m}\) in the direction \(\varvec{\eta }\in [C^2(\mathcal {S})]^2\) is given by
$$\begin{aligned} \begin{aligned} \delta J(\varvec{m}, \varvec{P}, \varvec{b})[\varvec{\eta }] =&\lim _{\epsilon \rightarrow 0} \frac{1}{\epsilon } \big [J(\varvec{m} + \epsilon \varvec{\eta }, \varvec{P}, \varvec{b}) - J(\varvec{m}, \varvec{P}, \varvec{b})\big ] \\ =&\lim _{\epsilon \rightarrow 0} \bigg [ \frac{\alpha }{2}\iint _\mathcal {S} 2 (\varvec{C} \mathrm {D}\varvec{m} - \varvec{P})\varvec{:} \varvec{C} \mathrm {D}\varvec{\eta } + \epsilon \Vert \varvec{C} \mathrm {D}\varvec{\eta }\Vert ^2 \mathrm {d}{} \texttt {x} \\&+ \frac{1-\alpha }{2}\oint _{\partial \mathcal {S}} 2 (\varvec{m} - \varvec{b})\varvec{\cdot }\varvec{\eta } + \epsilon \Vert \varvec{\eta }\Vert ^2 \mathrm {d}s \bigg ] \\ =&\,\, \alpha \iint _\mathcal {S} (\varvec{C} \mathrm {D}\varvec{m} - \varvec{P})\varvec{:} \varvec{C} \mathrm {D}\varvec{\eta } \mathrm {d}{} \texttt {x} + (1-\alpha )\oint _{\partial \mathcal {S}} (\varvec{m} - \varvec{b})\varvec{\cdot }\varvec{\eta } \mathrm {d}s. \end{aligned} \end{aligned}$$
(56)
The minimizer is obtained by setting the variation equal to 0, i.e.,
$$\begin{aligned} \delta J(\varvec{m}, \varvec{P}, \varvec{b})[\varvec{\eta }] = 0, \quad \forall \varvec{\eta }\in [C^2(\mathcal {S})]^2. \end{aligned}$$
(57)
Let us define the column vectors \(\varvec{p}_1, \varvec{p}_2, \varvec{c}_1\) and \(\varvec{c}_2\) as
$$\begin{aligned} \varvec{P} = [\varvec{p}_1 ~ \varvec{p}_2], ~\varvec{C} = [\varvec{c}_1 ~ \varvec{c}_2], ~ \varvec{p}_i = \begin{pmatrix} p_{1i} \\ p_{2i} \end{pmatrix}, ~ \varvec{c}_i = \begin{pmatrix} c_{1i} \\ c_{2i} \end{pmatrix}, \quad i = 1,2. \end{aligned}$$
(58)
We can split the first integrand of the final expression in (56) as follows
$$\begin{aligned} \begin{aligned} (\varvec{C} \mathrm {D}\varvec{m} - \varvec{P})\varvec{:} \varvec{C} \mathrm {D}\varvec{\eta }&= \varvec{C}^T(\varvec{C} \mathrm {D}\varvec{m} - \varvec{P})\varvec{:} \mathrm {D}\varvec{\eta }\\&= \sum _{k=1}^2 \varvec{C}^T\Big (\varvec{C} \frac{\partial \varvec{m}}{\partial x_k} - \varvec{p}_k \Big )\varvec{\cdot } \frac{\partial \varvec{\eta }}{\partial x_k} \\&= \varvec{v}_1 \varvec{\cdot } \frac{\partial \varvec{\eta }}{\partial x_1} + \varvec{v}_2 \varvec{\cdot } \frac{\partial \varvec{\eta }}{\partial x_2}, \end{aligned} \end{aligned}$$
(59)
where the vectors \(\varvec{v}_1\) and \(\varvec{v}_2\) are column vectors of the matrix \(\varvec{V} = [\varvec{v}_1, \varvec{v}_2]\), given by
$$\begin{aligned} \varvec{v}_1 = \begin{pmatrix} v_{11} \\ v_{21} \end{pmatrix} = \varvec{C}^T\Big (\varvec{C} \frac{\partial \varvec{m}}{\partial x_1} - \varvec{p}_1 \Big ), \quad \varvec{v}_2 = \begin{pmatrix} v_{12} \\ v_{22} \end{pmatrix} = \varvec{C}^T\Big (\varvec{C} \frac{\partial \varvec{m}}{\partial x_2} - \varvec{p}_2 \Big ), \end{aligned}$$
and by defining \(\varvec{W} = \varvec{V}^T = [\varvec{w}_1, \varvec{w}_2]\), we can rewrite the first integral of the final expression in (56) as
$$\begin{aligned} \iint _\mathcal {S} (\varvec{C} \mathrm {D}\varvec{m} - \varvec{P})\varvec{:} \varvec{C} \mathrm {D}\varvec{\eta } \mathrm {d}{} \texttt {x} = \sum _{k=1}^2 \iint _\mathcal {S} \varvec{w}_k \varvec{\cdot } \nabla \eta _k\mathrm {d}{} \texttt {x}. \end{aligned}$$
(60)
Let \({\hat{\varvec{n}}}\) denote the unit outward normal at the boundary \(\partial \mathcal {S}\). Using the vector-scalar product rule [33, p. 576] and the identity
$$\begin{aligned} \iint _{\mathcal {S}} \nabla v \varvec{\cdot } \varvec{F} + v \nabla \varvec{\cdot } \varvec{F} \mathrm {d}{} \texttt {x}=\oint _{\partial \mathcal {S}} v \varvec{F}\varvec{\cdot } {\hat{\varvec{n}}} \mathrm {d}s, \end{aligned}$$
(61)
derived from Gauss’s theorem, the integrals in the right-hand side of (56) become
$$\begin{aligned} \iint _\mathcal {S} \varvec{w}_k \varvec{\cdot } \nabla \eta _k\mathrm {d}{} \texttt {x} = \oint _{\partial \mathcal {S}} \varvec{w}_k \varvec{\cdot } \hat{\varvec{n}} \eta _k \mathrm {d}s - \iint _\mathcal {S} \nabla \varvec{\cdot } \varvec{w}_k \eta _k \mathrm {d}{} \texttt {x} . \end{aligned}$$
(62)
Substituting the integral in the final expression of (56), the minimizer can be obtained from the following relation
$$\begin{aligned} \sum _{k=1}^2 \Big ( \oint _{\partial \mathcal {S}} \big ( \alpha \varvec{w}_k \varvec{\cdot } \hat{\varvec{n}} + (1-\alpha )(m_k -b_k) \big )\eta _k \mathrm {d}s -\alpha \iint _\mathcal {S} \nabla \varvec{\cdot } \varvec{w}_k \eta _k \mathrm {d}{} \texttt {x} \Big ) = 0 \quad \forall \varvec{\eta }\in [C^2(\mathcal {S})]^2, \end{aligned}$$
(63)
where \(m_k\) and \(b_k\) \((k= 1,2)\) are the components of the vectors \(\varvec{m}\) and \(\varvec{b}\), respectively. Choosing \(\eta _2 = 0\) and applying the fundamental lemma of calculus of variations [34, p. 15] for \(\eta _1\), we find that \(m_1\) and \(m_2\) satisfy, almost everywhere, the equation
$$\begin{aligned}&\frac{\partial }{\partial x_1} \bigg [|\varvec{c}_1|^2 \frac{\partial m_1}{\partial x_1} + \varvec{c}_1\varvec{\cdot }\varvec{c}_2 \frac{\partial m_2}{\partial x_1} \bigg ] + \frac{\partial }{\partial x_2} \bigg [|\varvec{c}_1|^2 \frac{\partial m_1}{\partial x_2} + \varvec{c}_1\varvec{\cdot }\varvec{c}_2 \frac{\partial m_2}{\partial x_2} \bigg ] \nonumber \\&\quad = \frac{\partial }{\partial x_1}(\varvec{c}_1\varvec{\cdot } \varvec{p}_1) + \frac{\partial }{\partial x_2}(\varvec{c}_1\varvec{\cdot } \varvec{p}_2) \quad \texttt {x}\in \mathcal {S}, \end{aligned}$$
(64a)
$$\begin{aligned}&(1-\alpha )m_1 + \alpha ( |\varvec{c}_1|^2 \nabla m_1 \varvec{\cdot }\hat{\varvec{n}} + \varvec{c}_1\varvec{\cdot }\varvec{c}_2\nabla m_2 \varvec{\cdot }\hat{\varvec{n}}) = (1-\alpha )b_1 + \alpha \varvec{c}_1 \varvec{\cdot } \varvec{P}\hat{\varvec{n}} \quad \texttt {x}\in \partial \mathcal {S}. \end{aligned}$$
(64b)
Similarly, choosing \(\eta _1 = 0\) and applying the fundamental lemma of calculus of variations for \(\eta _2\), we obtain
$$\begin{aligned}&\frac{\partial }{\partial x_1} \bigg [\varvec{c}_1\varvec{\cdot }\varvec{c}_2 \frac{\partial m_1}{\partial x_1} + |\varvec{c}_2|^2 \frac{\partial m_2}{\partial x_1} \bigg ] + \frac{\partial }{\partial x_2} \bigg [\varvec{c}_1\varvec{\cdot }\varvec{c}_2 \frac{\partial m_1}{\partial x_2} + |\varvec{c}_2|^2 \frac{\partial m_2}{\partial x_2} \bigg ] \nonumber \\&\quad = \frac{\partial }{\partial x_1}(\varvec{c}_2\varvec{\cdot } \varvec{p}_1) + \frac{\partial }{\partial x_2}(\varvec{c}_2\varvec{\cdot } \varvec{p}_2) \quad \texttt {x}\in \mathcal {S}, \end{aligned}$$
(65a)
$$\begin{aligned}&(1-\alpha )m_2 + \alpha ( \varvec{c}_1\varvec{\cdot }\varvec{c}_2 \nabla m_1 \varvec{\cdot }\hat{\varvec{n}} + |\varvec{c}_2|^2 \nabla m_2 \varvec{\cdot }\hat{\varvec{n}}) = (1-\alpha )b_2 + \alpha \varvec{c}_2\varvec{\cdot } \varvec{P}\hat{\varvec{n}} \quad \texttt {x}\in \partial \mathcal {S}. \end{aligned}$$
(65b)
We can rewrite these equations as follows in matrix-vector form
$$\begin{aligned} \nabla \cdot (\varvec{C}^T\varvec{C} \mathrm {D}\varvec{m})&= \nabla \cdot (\varvec{C}^T \varvec{P}), \quad \quad \quad \quad \quad \texttt {x}\in \mathcal {S}, \end{aligned}$$
(66a)
$$\begin{aligned} (1-\alpha )\varvec{m} + \alpha (\varvec{C}^T\varvec{C}\, \mathrm {D}\varvec{m})\hat{\varvec{n}}&= (1-\alpha )\varvec{b} +\alpha \varvec{C}^T \varvec{P} \hat{\varvec{n}}, \quad \texttt {x}\in \partial \mathcal {S}. \end{aligned}$$
(66b)
These are two coupled elliptic equations with Robin boundary conditions for the two components \(m_1\) and \(m_2\) of the mapping \(\varvec{m}\) [35, p. 160]. The above equations are in divergence form, which motivates us to discretize them using the finite volume method [35, p. 84–88]; for more details see “Appendix B”.
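To illustrate the structure of the coupled system, the sketch below evaluates the interior residual of (66a) with central differences. It is not the finite volume discretization of “Appendix B”, merely an illustration with array conventions of our own choosing.

```python
import numpy as np

def interior_residual(M, CtC, CtP, h1, h2):
    """Residual of (66a), nabla.(C^T C Dm) - nabla.(C^T P), at interior grid points.

    M   : (N1, N2, 2) mapping, CtC : (N1, N2, 2, 2) field C^T C,
    CtP : (N1, N2, 2, 2) field C^T P, h1, h2 : grid spacings.
    """
    dM_dx1 = np.gradient(M, h1, axis=0)               # dm/dx_1
    dM_dx2 = np.gradient(M, h2, axis=1)               # dm/dx_2
    F1 = np.einsum('ijkl,ijl->ijk', CtC, dM_dx1) - CtP[..., 0]   # flux in x_1 (first column of C^T P)
    F2 = np.einsum('ijkl,ijl->ijk', CtC, dM_dx2) - CtP[..., 1]   # flux in x_2 (second column of C^T P)
    return np.gradient(F1, h1, axis=0) + np.gradient(F2, h2, axis=1)
```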

3.2 Second Stage: Calculation of the Freeform Surfaces

We compute the lens surfaces assuming that a numerical approximation of \(\varvec{m}\) on the grid is available. We compute the first lens surface \(u_1(\texttt {x})\) from the converged mapping \(\varvec{m}\) using relation (26) in the least-squares sense, i.e.,
$$\begin{aligned} u_1(\texttt {x}) = \mathop {\text {argmin}}\limits _{\phi } I(\phi ), \quad I(\phi ) = \frac{1}{2}\iint _\mathcal {S} |\nabla \phi (\texttt {x}) - \nabla _\texttt {x} c(\texttt {x},\varvec{m}(\texttt {x}))|^2 \mathrm {d}{} \texttt {x}, \quad \forall ~\phi \in \mathrm {C}^1(\mathcal {S}). \end{aligned}$$
(67)
We calculate the minimizing function \(u_1(\texttt {x})\) using calculus of variations. The first variation of the functional I in (67) in a direction v is given by
$$\begin{aligned} \begin{aligned} \delta I(u_1)[v]&= \lim _{\epsilon \rightarrow 0} \frac{1}{\epsilon } \big [ I(u_1 + \epsilon v) - I(u_1) \big ] \\&= \lim _{\epsilon \rightarrow 0} \frac{1}{2}\bigg [ \iint _\mathcal {S} \epsilon |\nabla v|^2 + 2 (\nabla u_1 - \nabla _\texttt {x} c) \varvec{\cdot } \nabla v \mathrm {d}{} \texttt {x} \bigg ] \\&= \iint _\mathcal {S} (\nabla u_1 - \nabla _\texttt {x} c) \varvec{\cdot } \nabla v \mathrm {d}{} \texttt {x}. \end{aligned} \end{aligned}$$
(68)
The minimizer is given by
$$\begin{aligned} \delta I(u_1)[v] = 0, \quad \forall ~ v\in C^1(\mathcal {S}). \end{aligned}$$
(69)
Using Gauss’s identity (61), we conclude from (69) that
$$\begin{aligned} \oint _{\partial \mathcal {S}} v (\nabla u_1 - \nabla _\texttt {x} c) \varvec{\cdot } {\hat{\varvec{n}}} \mathrm {d}s - \iint _\mathcal {S} v (\varDelta u_1 - \nabla \varvec{\cdot }\nabla _\texttt {x} c) \mathrm {d}{} \texttt {x} = 0, \quad \forall ~ v\in C^1(\mathcal {S}). \end{aligned}$$
(70)
Applying the fundamental lemma of calculus of variations [34, p. 15], we find
$$\begin{aligned} \varDelta u_1&= \nabla \varvec{\cdot }\nabla _\texttt {x} c(\texttt {x}, \varvec{m}), \quad \texttt {x}\in \mathcal {S}, ~ \end{aligned}$$
(71a)
$$\begin{aligned} \nabla u_1 \varvec{\cdot } {\hat{\varvec{n}}}&= \nabla _\texttt {x} c \varvec{\cdot } {\hat{\varvec{n}}} , \quad \texttt {x}\in \partial \mathcal {S}. \end{aligned}$$
(71b)
This is a Neumann problem, and only has a solution if the compatibility condition is satisfied, which reads
$$\begin{aligned} \iint _\mathcal {S} \nabla \varvec{\cdot }\nabla _\texttt {x} c\mathrm {d}{} \texttt {x} - \oint _{\partial \mathcal {S}} \nabla _\texttt {x} c \varvec{\cdot } {\hat{\varvec{n}}} \mathrm {d}s = 0 . \end{aligned}$$
(72)
By Gauss’s theorem, this is satisfied automatically. The solution of the Poisson equation with Neumann boundary condition is unique up to an additive constant. To make the solution unique, we impose the constraint \(u_1 = 1\) at the left-most corner grid point. We solve this problem using standard finite differences, and the discretized system is solved in Matlab using LU decomposition. The second lens surface is calculated from relation (19) by substituting the converged mapping \(\varvec{m}(\texttt {x})\) and the first lens surface \(u_1(\texttt {x})\); thus we have
$$\begin{aligned} u_2(\varvec{m}(\texttt {x})) = c(\texttt {x}, \varvec{m}(\texttt {x})) - u_1(\texttt {x}) \quad \forall \texttt {x}\in \mathcal {S} . \end{aligned}$$
(73)
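A minimal sketch of this post-processing step is given below: the gradient (29a) yields the right-hand side of (71a), and, together with the cost function (19), the second surface follows from (73). The helper names are our own; a routine cost(x, y, beta, n, ell) implementing (19), as sketched in Sect. 2.1, is assumed.

```python
import numpy as np

def grad_x_cost(x, y, beta, n):
    """nabla_x c(x, y) of Eq. (29a), with c-tilde as in (28b)."""
    r = x - y
    c_tilde = -n / (n**2 - 1.0) * np.sqrt(
        beta**2 - (n**2 - 1.0) * np.sum(r**2, axis=-1, keepdims=True))
    return -(n**2 / (n**2 - 1.0)) * r / c_tilde

# right-hand side of the Poisson problem (71a), with X and the converged mapping M
# stored as (N1, N2, 2) arrays and grid spacings h1, h2:
#   G   = grad_x_cost(X, M, beta, n)
#   rhs = np.gradient(G[..., 0], h1, axis=0) + np.gradient(G[..., 1], h2, axis=1)
# and, once u_1 is known on the grid (array U1), the second surface follows from (73):
#   U2  = cost(X, M, beta, n, ell) - U1
```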
The numerical algorithm is summarized as follows. We start the minimization procedure using the initial guess \(\varvec{m}^0\) given by (42) for the discretized source domain \(\mathcal {S}\). Subsequently, we iteratively perform the steps given by (41a), (41b) and (41c). The first and second steps are the minimization procedures for \(\varvec{b}\) and \(\varvec{P}\), respectively, and both are performed pointwise. The third step is the minimization procedure for the mapping \(\varvec{m}\) and is performed by solving the two coupled elliptic boundary value problems given by (64) and (65). Next, we update the matrix \(\varvec{C}\) given by (27). Finally, after convergence of the iteration (41), the first lens surface is computed from the mapping \(\varvec{m}\) by solving the Poisson problem (71), and the second lens surface is computed from relation (73).

4 Numerical Results

We apply the algorithm to four test problems to compute c-convex lens surfaces: first, we map a square with uniform emittance into a circle with uniform illuminance; second, we map an ellipse with uniform emittance into another ellipse with uniform illuminance; third, we map a square with uniform emittance into a non-convex (flower) target distribution; and finally, we challenge our algorithm to map the same distribution into a light pattern given by a picture on the target screen. The numerical results are verified by our self-built ray tracer based on the Monte Carlo method [2].

4.1 From a Square to a Circle

In the first test problem, we design an optical system of lens surfaces that transforms the uniform emittance of a square into a circle with uniform illuminance. The source domain is given by \(\mathcal {S} = [-1, 1]\times [-1, 1]\) and the target domain by \(\mathcal {T} = \lbrace \texttt {y}\in \mathbb {R}^2 ~\big |\quad ||\texttt {y}||_2 \le 1 \rbrace \). The light source emits a parallel beam of light rays with uniform emittance, i.e.,
$$\begin{aligned} f(\texttt {x}) = \frac{1}{4} \quad \forall ~ \texttt {x}\in \mathcal {S}. \end{aligned}$$
(74)
The target plane is at a distance \(\ell = 40\) from the source plane, and we have fixed the refractive index \(n = 1.5\) and the reduced optical path length \(\beta = 3\pi \) for all numerical problems. The target \(\mathcal {T}\) is illuminated by a parallel beam of light rays with uniform illuminance, i.e.,
$$\begin{aligned} g(\texttt {y}) = {\left\{ \begin{array}{ll} \frac{1}{\pi } \quad \text {if}\quad \texttt {y}\in \mathcal {T}, \\ 0 \quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
(75)
Note that the energy conservation condition (20) is satisfied. We discretize the source domain \(\mathcal {S}\) uniformly with \(200\times 200\) grid points. We have a different grid for the boundary, and found from various experiments that the number of boundary grid points \(N_\mathrm {b}\) does not influence the convergence of the algorithm if it is chosen large enough. Since a large value of \(N_\mathrm {b}\) does not significantly increase the calculation time, we have chosen \(N_\mathrm {b} = 1000\). We also observed from various experiments that \(\alpha = 0.65\) is a good choice to keep the residuals in \(J_\mathrm {I}\) and \(J_\mathrm {B}\) close together; choosing \(\alpha \) too large or too small slows down convergence. We stopped the algorithm after 200 iterations because \(J_\mathrm {I}\) and \(J_\mathrm {B}\) stall. The resulting mapping after 200 iterations is shown in Fig. 2a, and the convergence history of the algorithm is shown in Fig. 2b. The algorithm performed efficiently; the boundary and interior functionals for the circular target have converged well, with residuals of approximately \(2.35\times 10^{-7}\).

4.2 From an Ellipse to Another Ellipse

In the second test case, we apply the algorithm to map a uniform emittance of an ellipse into another ellipse with uniform illuminance. The source domain is given by \(\mathcal {S} = \lbrace (x_1,x_2)\in \mathbb {R}^2 ~\big |\quad 4x_1^2 + x_2^2 \le 4 \rbrace \), see Fig. 3a, and the target domain by \(\mathcal {T} = \lbrace (y_1,y_2)\in \mathbb {R}^2 ~\big |\quad y_1^2 + 4y_2^2 \le 4 \rbrace \), i.e., the source ellipse is rotated over \(\pi /2\) into the target ellipse. We have \(f(x_1,x_2) = g(y_1,y_2) = 1/4\pi \). We use a \(200\times 200\) grid with 1000 points on the boundary, the reduced optical path length \(\beta = 3\pi \), and the weight parameter \(\alpha = 0.65\). The mapping after 200 iterations is shown in Fig. 3b, and the source distribution in grid format is shown in Fig. 3a. The algorithm exhibits almost the same convergence as shown in Fig. 2b.

4.3 From a Square to a Non-convex (Flower) Target

In the third test case, we test the algorithm for non-convex, flower-shaped targets. We apply the algorithm to map a uniform emittance of a square into uniformly illuminated non-convex targets. The source domain is given by the square \([-1, 1]\times [-1, 1]\) with \(f(x_1,x_2) = 1/4\), and the target domain is defined in polar coordinates as
$$\begin{aligned} \rho (\theta ) = 1 + e \cos (6\theta ), \quad 0\le \theta < 2\pi , \end{aligned}$$
(76)
where \(\rho (\theta )\) is the distance to the origin and \(\theta \) is the counterclockwise angle with respect to the \(y_1\)-axis in the target plane. We test the algorithm for the four values \(e\in \lbrace 0.1, 0.15, 0.20, 0.25 \rbrace \), which represent the deviation of the target domain from a convex shape. We use a \(200\times 200\) grid with 1000 points on the boundary, the reduced optical path length \(\beta = 3\pi \), and the weight parameter \(\alpha = 0.50\). The mappings after 250 iterations are shown in Fig. 4. The residual J after 250 iterations is \(2.73\times 10^{-7}\), \(7.05\times 10^{-6}\), \(3.93\times 10^{-5}\) and \(9.99\times 10^{-5}\), respectively. Convergence problems arise for target domains that deviate strongly from a convex shape, but if the deviation from convexity is mild, the algorithm performs satisfactorily, see Fig. 4a–d.

4.4 From a Square to a Picture

The fourth test problem is to design an optical system of lens surfaces which transforms a square uniform bundle of parallel light rays into a target distribution corresponding to a given picture. Here, we challenge our algorithm with a picture showing costumes of the Indian classical dance Bharatanatyam, see Fig. 5. The emittance of the light source is again the same as defined in (74), and the parameters of the optical system are the same as in Sect. 4.1. The desired target illuminance \(g(y_1, y_2)\) is given by the grayscale test image shown in Fig. 5. Because the target distribution contains many details, e.g., the pattern of the costumes and jewellery, it provides a challenging test for our algorithm.
Note that the picture is converted into grayscale and contains some black regions, resulting in \(g(y_1, y_2) =0\) for some \((y_1, y_2)\in \mathcal {T}\), which would give division by 0 in the least-squares algorithm. Therefore, we increase the illuminance to \(5\%\) of the maximum value wherever it falls below this threshold. We discretize the source \(\mathcal {S}\) on a \(500\times 500\) grid, with 1000 boundary points. The convergence history of the algorithm is shown in Fig. 6b for \(\alpha = 0.70\). We stopped the algorithm after 150 iterations, because \(J_\mathrm {I}\) and \(J_\mathrm {B}\) no longer seemed to decrease. The resulting mapping is shown in Fig. 6a; the image details can be recognized in the grid. The optical system is verified using the ray tracing algorithm of [2]. We ran our ray tracing algorithm for 10 million uniformly distributed random points on the source to compute the actual illumination pattern produced on the target. The resulting target illuminance is plotted in Fig. 5. The output image is very close to the original image; although slightly blurred, even complex details can be identified. The functions \(u_1(\texttt {x})\) and \(u_2(\texttt {y})\), representing the freeform surfaces \(\mathcal {L}_1\) and \(\mathcal {L}_2\) of the lens, respectively, are shown in Fig. 7. The lens surfaces are convex on their respective domains, and the grid contours on the second lens surface provide an alternative representation of the mapping.
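The preprocessing of the picture can be summarized as follows. The sketch below is a hypothetical reconstruction of this step (the function name, the target area and the source energy are placeholders): dark pixels are lifted to the \(5\%\) threshold and the result is rescaled so that energy conservation holds.

```python
import numpy as np

# Hypothetical preprocessing of a grayscale picture into a target illuminance g:
# lift dark pixels to 5% of the maximum (to avoid division by zero) and rescale
# so that the integral of g over T equals the integral of f over S.
def image_to_illuminance(image, target_area, source_energy):
    g = image.astype(float)
    g = np.maximum(g, 0.05 * g.max())           # 5% threshold on the gray values
    cell_area = target_area / g.size            # uniform pixel size on T (assumption)
    g *= source_energy / (g.sum() * cell_area)  # enforce energy conservation
    return g

# Example with synthetic data and placeholder parameters:
rng = np.random.default_rng(0)
g = image_to_illuminance(rng.random((500, 500)), target_area=4.0, source_energy=1.0)
print(g.min() > 0.0, np.isclose(g.sum() * 4.0 / g.size, 1.0))
```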

5 Discussion and Conclusion

We introduced a least-squares method to compute freeform surfaces of an optical system corresponding to a non-quadratic cost function. The method is an extended version of the least-squares method introduced earlier in [7]. Furthermore, we presented a new, generic (in terms of the cost function) minimization procedure for \(\varvec{P}\) in the functional \(J_\mathrm {I}\). Moreover, we have shown that minimizing the total functional J over the mapping \(\varvec{m}\) amounts to solving a system of coupled elliptic PDEs.
We presented the extended least-squares method to compute the coupled freeform surfaces of a lens. Our method can compute freeform surfaces for any optical system corresponding to a twice continuously differentiable cost function, which demonstrates its wider applicability. The ELS-method also performs well for non-convex target domains, as long as the domain does not deviate too much from a convex shape.
The algorithm is time and memory efficient, and provides both convex and concave optical surfaces, which makes it very suitable for this type of problem. Furthermore, we applied the method to a very challenging problem containing the details of the costumes of the Indian classical dance Bharatanatyam, and obtained a high-resolution target distribution preserving the details of the original picture.
In future work we would like to apply the algorithm to more complex cost functions, e.g., point light sources and far-field problems. We would also like to explore the applicability of the Monge–Ampère solver in other fields of science and engineering.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendices

Solution of the Quartic Equation

We obtain four possible solutions of the quartic equation (53) using Ferrari’s method [36, p. 32]. The key idea is to rewrite the quartic equation as two quadratic equations; solving these yields the solutions of the quartic equation. For a detailed solution of the quadratic equations we refer to [7].
Solution of (53) when \(\pmb {f>0}\).
It can be shown that the problem (53) has at least two real roots. For the real symmetric matrix \(\varvec{Q}_\mathrm {S}\), we can deduce
$$\begin{aligned} {{\,\mathrm{tr}\,}}(\varvec{Q}_\mathrm {S})^2 - 4 \det (\varvec{Q}_\mathrm {S}) = (q_{11} - q_{22})^2 + 4 q_{\mathrm {S}}^2 \ge 0, \end{aligned}$$
(77)
and using the above relation, we conclude that \(F(-1) = -{{\,\mathrm{tr}\,}}(\varvec{Q}_\mathrm {S})^2 < 0\) and \(F(1)= {{\,\mathrm{tr}\,}}(\varvec{Q}_\mathrm {S})^2 - 4 \det (\varvec{Q}_\mathrm {S}) \ge 0\), while the coefficient of \(\lambda ^4\) in the quartic equation (53) is positive. This implies that (53) has at least two real roots; more precisely, one of them is less than \(-1\) and the other is greater than \(-1\). The solutions of (53) are given by
$$\begin{aligned} \lambda _1&= -\sqrt{\frac{y}{2}} + \sqrt{ -\frac{y}{2} - \frac{a_2}{2a_4} + \frac{a_1}{2a_4\sqrt{2y}} } , \end{aligned}$$
(78a)
$$\begin{aligned} \lambda _2&= -\sqrt{\frac{y}{2}} - \sqrt{ -\frac{y}{2} - \frac{a_2}{2a_4} + \frac{a_1}{2a_4\sqrt{2y}} } , \end{aligned}$$
(78b)
$$\begin{aligned} \lambda _3&= \sqrt{\frac{y}{2}} + \sqrt{ -\frac{y}{2} - \frac{a_2}{2a_4} - \frac{a_1}{2a_4\sqrt{2y}} } , \end{aligned}$$
(78c)
$$\begin{aligned} \lambda _4&= \sqrt{\frac{y}{2}} - \sqrt{ -\frac{y}{2} - \frac{a_2}{2a_4} - \frac{a_1}{2a_4\sqrt{2y}} } , \end{aligned}$$
(78d)
where y is the solution of a cubic equation arising in Ferrari’s method. The real roots satisfying the convexity condition (54) are substituted in (52) and (49), yielding the possible minimizers of \(H_\mathrm {S}(p_{11}, p_{22}, p_{12})\). Note that (78) contains a division by zero if \(y = 0\). We find that this happens only when \(a_1 = 0\), i.e., \( q_{11} = q_{22} = q_\mathrm {S} = 0\). This is a special case corresponding to the possibility \(\lambda = 1\), which we discuss below.
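As an independent cross-check of the closed-form roots (78), the quartic can also be solved numerically. The sketch below assumes, as the structure of (78)–(79) suggests, that (53) has the depressed form \(a_4\lambda ^4 + a_2\lambda ^2 + a_1\lambda + a_0 = 0\); the coefficient values are placeholders, not data from the paper.

```python
import numpy as np

# Numerical cross-check of the quartic roots, assuming the depressed form
# a4*l^4 + a2*l^2 + a1*l + a0 = 0 for Eq. (53); coefficients are placeholders.
def real_roots(a4, a2, a1, a0):
    roots = np.roots([a4, 0.0, a2, a1, a0])      # companion-matrix root finder
    return np.sort(roots[np.abs(roots.imag) < 1e-10].real)

lam = real_roots(a4=1.0, a2=-3.0, a1=0.5, a0=1.0)
print(lam)                # all real roots
print(lam[lam > -1.0])    # candidates with lambda > -1
```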
Solution of (53) when \(\pmb {f = 0}\).
If the source density \(f = 0\), the quartic equation (53) reduces to a quadratic equation because \(a_4 = 0\). The solutions are obtained by solving the corresponding quadratic equation, and the roots are given by
$$\begin{aligned} \lambda = \frac{-a_1 \pm \sqrt{a_1^2 -4a_2 a_0}}{2a_2}. \end{aligned}$$
(79)
We can verify that the discriminant of this quadratic equation is always non-negative: substituting the coefficients of (53) into the discriminant gives
$$\begin{aligned} a_1^2 -4a_2 a_0 = (q_{11}^2 -q_{22})^2 + 4q_\mathrm {S}^2(q_{11}^2 + q_{22}^2)^2 \ge 0. \end{aligned}$$
(80)
Furthermore, also in this case \(F(-1) = -{{\,\mathrm{tr}\,}}(\varvec{Q}_\mathrm {S})^2 < 0\) and \(F(1)= {{\,\mathrm{tr}\,}}(\varvec{Q}_\mathrm {S})^2 - 4 \det (\varvec{Q}_\mathrm {S}) \ge 0\), so (53) has at least one solution \(\lambda > -1\).
Solution of (53) when \(\pmb {\lambda = 1}\).
If \(\lambda = 1\), i.e., \( q_{11} = q_{22}\) and \(q_\mathrm {S} = 0\), we cannot invert the system (51a)–(51c) for \((p_{11}, p_{22}, p_{12})\). Therefore, we determine the minimum of \(H_\mathrm {S}(p_{11}, p_{22}, p_{12})\) as follows. Using \(q_\mathrm {S} = 0\) and \( q_{11} = q_{22}\), the minimization problem (49) simplifies to
$$\begin{aligned} \underset{(p_{11}, p_{22}, p_{12}) \in \mathbb {R}^3}{{{\,\mathrm{argmin}\,}}} \frac{1}{2}\big ( (p_{11} - q_{11})^2 + 2p_{12}^2 + (p_{22} - q_{11})^2 \big ), \end{aligned}$$
(81)
subject to the conditions (49b) and (49c). Solving this minimization problem, we obtain the following four solutions
$$\begin{aligned} p_{11}&= p_{22} = \frac{q_{11}}{2}, \quad p_{12} = \pm \sqrt{\frac{q_{11}^2}{4} - \frac{f}{g} \det (\varvec{C}) }, \end{aligned}$$
(82a)
or
$$\begin{aligned} p_{11}&= p_{22} = \pm \sqrt{\frac{f}{g} \det (\varvec{C})}, \quad p_{12} = 0. \end{aligned}$$
(82b)
For a detailed solution see [7]. There exists at least one solution which satisfies the convexity condition \({{\,\mathrm{tr}\,}}(\varvec{P})\ge 0\).
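The selection among the candidates (82a)–(82b) can be carried out directly; the sketch below (with placeholder input values for \(q_{11}\), f, g and \(\det (\varvec{C})\)) evaluates all four candidates and keeps those satisfying \({{\,\mathrm{tr}\,}}(\varvec{P})\ge 0\).

```python
import numpy as np

# Candidates (82a)-(82b) for the special case lambda = 1 (q11 = q22, qS = 0),
# filtered by the convexity condition tr(P) = p11 + p22 >= 0.
def candidates_lambda_one(q11, f, g, detC):
    sols = []
    disc = q11**2 / 4.0 - (f / g) * detC
    if disc >= 0.0:                              # family (82a)
        for sign in (+1.0, -1.0):
            sols.append((q11 / 2.0, q11 / 2.0, sign * np.sqrt(disc)))
    if (f / g) * detC >= 0.0:                    # family (82b)
        r = np.sqrt((f / g) * detC)
        for sign in (+1.0, -1.0):
            sols.append((sign * r, sign * r, 0.0))
    return [(p11, p22, p12) for (p11, p22, p12) in sols if p11 + p22 >= 0.0]

print(candidates_lambda_one(q11=2.0, f=1.0, g=1.0, detC=0.5))   # placeholder values
```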
Solution of (53) when \(\pmb {\lambda = -1}\).
If \(\lambda = -1\), i.e., \( q_{11} = -q_{22}\), we again cannot invert the system (51a)–(51c) for \((p_{11}, p_{22}, p_{12})\), and we determine the minimizers using a different method. From (51a) and (51b), we find
$$\begin{aligned} p_{22} = p_{11} - q_{11}, \quad p_{12} = q_\mathrm {S}/2. \end{aligned}$$
(83)
Substituting these in (51d), we conclude
$$\begin{aligned} p_{11}^2 - q_{11} p_{11} - \frac{q_\mathrm {S}^2}{4} - \frac{f}{g} \det (\varvec{C}) = 0. \end{aligned}$$
(84)
Solving for \(p_{11}\), we find two solutions:
$$\begin{aligned} p_{11}&= \frac{q_{11}}{2} + \frac{\sqrt{q_{11}^2 + q_\mathrm {S}^2 + 4\det (\varvec{C})f/g}}{2} , \end{aligned}$$
(85a)
$$\begin{aligned} p_{12}&= \frac{q_\mathrm {S}}{2} , \end{aligned}$$
(85b)
$$\begin{aligned} p_{22}&= -\frac{q_{11}}{2} + \frac{\sqrt{q_{11}^2 + q_\mathrm {S}^2 + 4\det (\varvec{C})f/g}}{2}, \end{aligned}$$
(85c)
and
$$\begin{aligned} p_{11}&= \frac{q_{11}}{2} - \frac{\sqrt{q_{11}^2 + q_\mathrm {S}^2 + 4\det (\varvec{C})f/g}}{2} , \end{aligned}$$
(86a)
$$\begin{aligned} p_{12}&= \frac{q_\mathrm {S}}{2} , \end{aligned}$$
(86b)
$$\begin{aligned} p_{22}&= -\frac{q_{11}}{2} - \frac{\sqrt{q_{11}^2 + q_\mathrm {S}^2 + 4\det (\varvec{C})f/g}}{2}, \end{aligned}$$
(86c)
which are always real. The second solution satisfies the convexity condition \({{\,\mathrm{tr}\,}}(\varvec{P})\ge 0\).
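For \(\lambda = -1\) the candidates (85)–(86) and the trace check can likewise be evaluated directly, as in the following sketch with placeholder input values.

```python
import numpy as np

# Evaluate the closed-form candidates (85)-(86) for the case lambda = -1
# (q11 = -q22) and retain those with tr(P) = p11 + p22 >= 0.
def minimizer_lambda_minus_one(q11, qS, f, g, detC):
    root = np.sqrt(q11**2 + qS**2 + 4.0 * detC * f / g)
    candidates = [
        (q11 / 2.0 + root / 2.0, qS / 2.0, -q11 / 2.0 + root / 2.0),  # (p11, p12, p22), Eq. (85)
        (q11 / 2.0 - root / 2.0, qS / 2.0, -q11 / 2.0 - root / 2.0),  # Eq. (86)
    ]
    return [c for c in candidates if c[0] + c[2] >= 0.0]              # convexity condition

print(minimizer_lambda_minus_one(q11=1.0, qS=0.5, f=1.0, g=2.0, detC=0.3))
```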

Finite Volume Discretisation of the Coupled Elliptic Equations (66)

We can write the differential equation (64a) as
$$\begin{aligned} \frac{\partial f_{11}}{\partial x_1} + \frac{\partial f_{12}}{\partial x_2} = \frac{\partial r_{11}}{\partial x_1} + \frac{\partial r_{12}}{\partial x_2}, \end{aligned}$$
(87)
where
$$\begin{aligned} f_{11}&= |\varvec{c}_1|^2 \frac{\partial m_1}{\partial x_1} + \varvec{c}_1\varvec{\cdot }\varvec{c}_2 \frac{\partial m_2}{\partial x_1} ,\quad r_{11} = \varvec{c}_1\varvec{\cdot } \varvec{p}_1, \\ f_{12}&= |\varvec{c}_1|^2 \frac{\partial m_1}{\partial x_2} + \varvec{c}_1\varvec{\cdot }\varvec{c}_2 \frac{\partial m_2}{\partial x_2} ,\quad r_{12} = \varvec{c}_1\varvec{\cdot } \varvec{p}_2 . \end{aligned}$$
The above equation can be written in divergence form as
$$\begin{aligned} \nabla \varvec{\cdot }\varvec{f}_1 = \nabla \varvec{\cdot }\varvec{r}_1, \end{aligned}$$
(88)
where \(\varvec{f}_1 = (f_{11}, f_{12})^T\) and \(\varvec{r}_1 = (r_{11}, r_{12})^T\). Integrating Eq. (88) over a control volume \(\mathcal {A}\subset \mathcal {S}\) and using Gauss’s theorem [33, p. 925], we obtain
$$\begin{aligned} \oint _{\partial \mathcal {A}} \varvec{f}_1\varvec{\cdot } \hat{\varvec{n}} \mathrm {d}s = \oint _{\partial \mathcal {A}} \varvec{r}_1\varvec{\cdot } \hat{\varvec{n}} \mathrm {d}s, \end{aligned}$$
(89)
where \(\hat{\varvec{n}}\) is the unit outward normal on the boundary \(\partial \mathcal {A}\) of \(\mathcal {A}\). Next, we cover the computational domain \(\mathcal {S}\) with a set of non-overlapping control volumes and apply the cell-centred finite volume method, i.e., the grid points are located at the centres of the control volumes.
Let us consider the control volume \(\mathcal {A}\equiv \varOmega _\mathrm {C} = [x_{1,w}, x_{1,e}]\times [x_{2,s}, x_{2,n}]\) as shown in Fig. 8, where \(x_{1,w}\) is the \(x_1\)-value at the centre of the western cell face \(\varGamma _w\), i.e., \(x_{1,w} = x_{1, i-1/2}\), approximated as \(x_{1,w} = \big (x_1(W) + x_1(C)\big )/2\), etc., with \(x_1(C) = x_{1,i}\), \(x_1(W) = x_{1,i-1}\), etc. The finite volume method transforms equation (89) into a system of discrete equations for the centre point \(\mathrm {C}\) of the control volume \(\varOmega _\mathrm {C}\). First, Eq. (89) is applied to the control volume \(\varOmega _\mathrm {C}\), which reduces the equation to one involving only first derivatives. These first-order derivatives are replaced with central difference approximations; for more details see [35]. Finally, the integral equation (89) can be discretized as follows
$$\begin{aligned} \begin{aligned}&a_E m_1(E) + a_W m_1(W) + a_N m_1(N) + a_S m_1(S) + a_C m_1(C) \\&\quad +b_E m_2(E) + b_W m_2(W) + b_N m_2(N) + b_S m_2(S) + b_C m_2(C) = r_1(C), \end{aligned} \end{aligned}$$
(90)
where
$$\begin{aligned} a_E =&\frac{|\varvec{c}_1|_e^2}{h_1^2}, \quad a_W = \frac{|\varvec{c}_1|_w^2}{h_1^2}, \quad a_N = \frac{|\varvec{c}_1|_n^2}{h_2^2}, \quad a_S = \frac{|\varvec{c}_1|_s^2}{h_2^2}, \\ b_E =&\frac{( \varvec{c}_1\varvec{\cdot }\varvec{c}_2)_e}{h_1^2}, \quad b_W = \frac{( \varvec{c}_1\varvec{\cdot }\varvec{c}_2)_w}{h_1^2}, \quad b_N = \frac{( \varvec{c}_1\varvec{\cdot }\varvec{c}_2)_n}{h_2^2}, \quad b_S = \frac{( \varvec{c}_1\varvec{\cdot }\varvec{c}_2)_s}{h_2^2}, \\ a_C =&-(a_E+a_W+a_N+a_S), \quad b_C = -(b_E+b_W+b_N+b_S), \\ r_1(C) =&\frac{1}{h_1}\big [(\varvec{c}_1\varvec{\cdot } \varvec{p}_1)_e - (\varvec{c}_1\varvec{\cdot } \varvec{p}_1)_w\big ] + \frac{1}{h_2}\big [(\varvec{c}_1\varvec{\cdot } \varvec{p}_2)_n - (\varvec{c}_1\varvec{\cdot } \varvec{p}_2)_s\big ]. \end{aligned}$$
Similarly, the discrete form of Eq. (65a) is
$$\begin{aligned} \begin{aligned}&b_E m_1(E) + b_W m_1(W) + b_N m_1(N) + b_S m_1(S) + b_C m_1(C) \\&\quad +d_E m_2(E) + d_W m_2(W) + d_N m_2(N) + d_S m_2(S) + d_C m_2(C) = r_2(C), \end{aligned} \end{aligned}$$
(91)
where
$$\begin{aligned} d_E =&\frac{|\varvec{c}_2|_e^2}{h_1^2}, ~ d_W = \frac{|\varvec{c}_2|_w^2}{h_1^2}, ~ d_N = \frac{|\varvec{c}_2|_n^2}{h_2^2}, ~ d_S = \frac{|\varvec{c}_2|_s^2}{h_2^2}, \\ d_C =&-(d_E+d_W+d_N+d_S), \\ r_2(C) =&\frac{1}{h_1}\big [(\varvec{c}_2\varvec{\cdot } \varvec{p}_1)_e - (\varvec{c}_2\varvec{\cdot } \varvec{p}_1)_w\big ] + \frac{1}{h_2}\big [(\varvec{c}_2\varvec{\cdot } \varvec{p}_2)_n - (\varvec{c}_2\varvec{\cdot } \varvec{p}_2)_s\big ]. \end{aligned}$$
Calculation of the above coefficients requires values at the interfaces of the control volumes, which we compute using linear interpolation. We solve the linear systems (90)–(91) iteratively for \(m_1\) and \(m_2\), subject to the boundary conditions (64b)–(65b), using MATLAB’s built-in function mldivide; in this way the coupled discrete elliptic equations can be solved very efficiently.
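To illustrate how the stencils (90)–(91) translate into a single linear system, the sketch below assembles the coupled equations on a small uniform grid and solves them with a sparse direct solver (the SciPy analogue of mldivide). The fields \(\varvec{c}_1\), \(\varvec{c}_2\) and the right-hand sides are random placeholders, and the boundary rows are a simple Dirichlet stand-in rather than the actual boundary conditions (64b)–(65b); it shows the assembly pattern, not the authors’ implementation.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

# Minimal sketch: assemble the coupled stencils (90)-(91) on a small uniform
# grid with placeholder data and solve the sparse system with a direct solver.
n = 20
h1 = h2 = 2.0 / (n - 1)
rng = np.random.default_rng(1)
c1 = rng.random((n, n, 2)) + 1.0           # placeholder cell-centred fields c1, c2
c2 = rng.random((n, n, 2)) + 1.0
r1 = rng.random((n, n))                    # placeholder right-hand sides r1(C), r2(C)
r2 = rng.random((n, n))

def idx(i, j, comp):                       # unknown ordering: m1 block, then m2 block
    return comp * n * n + i * n + j

def face(v, i, j, di, dj):                 # linear interpolation to the cell face
    return 0.5 * (v[i, j] + v[i + di, j + dj])

A = lil_matrix((2 * n * n, 2 * n * n))
rhs = np.zeros(2 * n * n)

for i in range(n):
    for j in range(n):
        if i in (0, n - 1) or j in (0, n - 1):
            A[idx(i, j, 0), idx(i, j, 0)] = 1.0      # placeholder boundary rows
            A[idx(i, j, 1), idx(i, j, 1)] = 1.0
            continue
        for (di, dj, h) in ((1, 0, h1), (-1, 0, h1), (0, 1, h2), (0, -1, h2)):
            c1f = face(c1, i, j, di, dj)
            c2f = face(c2, i, j, di, dj)
            a = np.dot(c1f, c1f) / h**2              # a_E, a_W, a_N, a_S
            b = np.dot(c1f, c2f) / h**2              # b_E, b_W, b_N, b_S
            d = np.dot(c2f, c2f) / h**2              # d_E, d_W, d_N, d_S
            A[idx(i, j, 0), idx(i + di, j + dj, 0)] += a
            A[idx(i, j, 0), idx(i + di, j + dj, 1)] += b
            A[idx(i, j, 0), idx(i, j, 0)] -= a       # a_C, b_C, d_C are minus the sums
            A[idx(i, j, 0), idx(i, j, 1)] -= b
            A[idx(i, j, 1), idx(i + di, j + dj, 0)] += b
            A[idx(i, j, 1), idx(i + di, j + dj, 1)] += d
            A[idx(i, j, 1), idx(i, j, 0)] -= b
            A[idx(i, j, 1), idx(i, j, 1)] -= d
        rhs[idx(i, j, 0)] = r1[i, j]
        rhs[idx(i, j, 1)] = r2[i, j]

m = spsolve(A.tocsr(), rhs)
m1, m2 = m[:n * n].reshape(n, n), m[n * n:].reshape(n, n)
print(m1.shape, m2.shape)
```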
Literature
1. Adrien, B., Axel, B., Rolf, W., Jochen, S., Peter, L.: High resolution irradiance tailoring using multiple freeform surfaces. Opt. Express 21(9), 10563–10571 (2013)
2. Yadav, N.K., ten Thije Boonkkamp, J.H.M., IJzerman, W.L.: A least-squares method for the design of two-reflector optical system. J. Phys. Photonics (accepted)
3. Brix, K., Hafizogullari, Y., Platen, A.: Designing illumination lenses and mirrors by the numerical solution of Monge–Ampère equations. J. Opt. Soc. Am. A 32(10), 803–837 (2015)
4. Fang, F.Z., Zhang, X.D., Weckenmann, A., Zhang, G.X., Evans, C.: Manufacturing and measurement of freeform optics. CIRP Ann. Manuf. Technol. 62(2), 823–846 (2013)
5. Chang, S., Wu, R., Zheng, Z.: Design beam shapers with double freeform surfaces to form a desired wavefront with prescribed illumination pattern by solving a Monge–Ampère type equation. J. Opt. 18, 125602 (2016)
6. Oliker, V.: Differential equations for design of a freeform single lens with prescribed irradiance properties. Opt. Eng. 53(3), 031302 (2013)
7. Prins, C.R., Beltman, R., ten Thije Boonkkamp, J.H.M., IJzerman, W.L., Tukker, T.W.: A least-squares method for optimal transport using the Monge–Ampère equation. SIAM J. Sci. Comput. 37(6), B937–B961 (2015)
8. Glimm, T., Oliker, V.: Optical design of two-reflector systems, the Monge–Kantorovich mass transfer problem and Fermat’s principle. Indiana Univ. Math. J. 53(5), 11070–11078 (2004)
9. Oliker, V.: On design of free-form refractive beam shapers, sensitivity to figure error, and convexity of lenses. J. Opt. Soc. Am. A 25(12), 3067–3076 (2008)
10. Oliker, V.: Designing freeform lenses for intensity and phase control of coherent light with help from geometry and mass transport. Arch. Ration. Mech. Anal. 201(3), 1013–1045 (2011)
11. Froese, B.D.: A numerical method for the elliptic Monge–Ampère equation with transport boundary conditions. SIAM J. Sci. Comput. 34, A1432–A1459 (2012)
12. Benamou, J.D., Froese, B.D., Oberman, A.M.: A viscosity solution approach to the Monge–Ampère formulation of the optimal transportation problem. arXiv:1208.4873 (2013)
13. Benamou, J.D., Froese, B.D., Oberman, A.M.: Numerical solution of the optimal transportation problem using the Monge–Ampère equation. J. Comput. Phys. 260, 107–126 (2014)
14. Bösel, C., Gross, H.: Single freeform surface design for prescribed input wavefront and target irradiance. J. Opt. Soc. Am. A 34(9), 1490–1499 (2017)
15. Wu, R., Xu, L., Liu, P., Zhang, Y., Zheng, Z., Li, H., Liu, X.: Freeform illumination design: a nonlinear boundary problem for the elliptic Monge–Ampère equation. Opt. Lett. 38(2), 229–231 (2013)
16. Brix, K., Hafizogullari, Y., Platen, A.: Solving the Monge–Ampère equation for the inverse reflector problem. Math. Models Methods Appl. Sci. 25, 803–837 (2015)
17. Glimm, T., Oliker, V.: Optical design of single-reflector systems and the Monge–Kantorovich mass transfer problem. J. Math. Sci. 117(3), 4096–4108 (2003)
18.
19. Evans, L.C.: Partial differential equations and Monge–Kantorovich mass transfer. In: Current Developments in Mathematics, Cambridge (1997), pp. 65–126. International Press, Boston (1999)
20. Bouchitté, G., Buttazzo, G., Seppecher, P.: Shape optimization solutions via Monge–Kantorovich equation. C. R. Acad. Sci. Paris Ser. I Math. 324(10), 1185–1191 (1997)
21. Bouchitté, G., Buttazzo, G.: Characterization of optimal shapes and masses through Monge–Kantorovich equation. J. Eur. Math. Soc. 3(2), 139–168 (2001)
22. Gangbo, W.: An Introduction to the Mass Transportation Theory and Its Applications. Lecture Notes, School of Mathematics, Georgia Institute of Technology, Atlanta, USA (2004)
23. Benamou, J.D., Brenier, Y.: A computational fluid mechanics solution to the Monge–Kantorovich mass transfer problem. Numer. Math. 84(3), 375–393 (2000)
24. Prins, C.R.: Inverse Methods for Illumination Optics. Ph.D. thesis, Eindhoven University of Technology (2014)
25. Glassner, A.S.: An Introduction to Ray Tracing. Academic Press Ltd, London (1991)
26. Born, M., Wolf, E.: Principles of Optics, 5th edn. Pergamon Press, Oxford (1975)
27. Villani, C.: Topics in Optimal Transportation. Graduate Studies in Mathematics, vol. 58. American Mathematical Society, Providence (2003)
28.
30. Caboussat, A., Glowinski, R., Sorensen, D.C.: A least-squares method for the numerical solution of the Dirichlet problem for the elliptic Monge–Ampère equation in dimension two. ESAIM Control Optim. Calc. Var. 19(3), 780–810 (2013)
31. Glowinski, R.: Variational Methods for the Numerical Solution of Nonlinear Elliptic Problems. SIAM, Philadelphia (2015)
32. Caboussat, A., Glowinski, R., Gourzoulidis, D.: A least-squares/relaxation method for the numerical solution of the three-dimensional elliptic Monge–Ampère equation. J. Sci. Comput. 77(1), 53–78 (2018)
33. Adams, R.A., Essex, C.: Calculus: A Complete Course, 8th edn. Pearson, Toronto (2013)
34. Mesterton-Gibbons, M.: A Primer on the Calculus of Variations and Optimal Control Theory, 1st edn. American Mathematical Society, Providence (2009)
35. Mattheij, R.M.M., Rienstra, S.W., ten Thije Boonkkamp, J.H.M.: Partial Differential Equations. Society for Industrial and Applied Mathematics, Philadelphia (2005)
36. Tignol, J.: Galois’s Theory of Algebraic Equations. Longman Scientific and Technical, Harlow (1988)