We approach this task in a modular and abstract fashion. We first introduce the necessary combinatorial objects and results. We then review the definitions and behaviour of linear and multilinear maps, as well as powers, polynomials and formal power series between (possibly infinite-dimensional) vector spaces. Following this, we consider their continuous counterparts between Banach spaces. We also review the definitions and behaviour of Peano and (higher-order) Fréchet differentiability, as well as analyticity and Gevrey smoothness. In particular, we prove various qualitative and quantitative results concerning both the composition of maps and implicit map theorems for polynomials and power series, as well as for Peano differentiable, Fréchet differentiable, analytic or Gevrey smooth maps. These mathematical tools enable our analysis of the class of semilinear elliptic partial differential equations.

In this thesis, we propose a nonlinear iterative optimization method to solve inverse medium problems. Instead of using a grid-based optimization approach, which leads to challenging large-scale problems, we iteratively minimize the data misfit within a small finite-dimensional subspace spanned by the first few eigenfunctions of a carefully chosen elliptic operator. As the operator depends on the minimizer in the previous search space, so do its eigenfunctions and, consequently, the subsequent search space. This approach inherently incorporates regularization at each iteration without the need for additional penalization, such as total variation or Tikhonov regularization.

By introducing a key angle condition, we prove the convergence of the resulting Adaptive Spectral Inversion (ASI) method and demonstrate its regularizing effect. Through numerical experiments, we illustrate the remarkable accuracy of the ASI, which detects even the smallest inclusions where previous methods failed. Furthermore, we demonstrate that the ASI performs favorably compared to standard grid-based inversion using Tikhonov regularization when applied to an elliptic inverse problem.

The choice of the elliptic operator used to obtain the subsequent search space is crucial for an accurate reconstruction of the medium. For known piecewise constant media consisting of a few interior inclusions, we prove that the first few eigenfunctions of the operator, which itself depends on the medium, effectively approximate the medium and its discontinuities. We then validate these analytically proven properties of the operator through various numerical experiments.
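The approximation property described above can be illustrated with a minimal one-dimensional sketch. The weighted operator below, with weight $\mu = 1/(|m'| + \varepsilon)$, is only an assumed stand-in for the thesis' actual operator and discretization; it is chosen so that eigenfunctions vary cheaply across the discontinuities of the medium $m$:

```python
import numpy as np
from scipy.linalg import eigh

def asi_basis(m, K, eps=1e-2):
    """First K eigenfunctions of -(mu u')' on (0, 1) with homogeneous
    Dirichlet BCs, where the weight mu = 1/(|m'| + eps) is small at the
    discontinuities of the medium m (simple finite-difference assembly)."""
    n = m.size
    h = 1.0 / (n + 1)
    mu = 1.0 / (np.abs(np.gradient(m, h)) + eps)
    mue = 0.5 * (mu[:-1] + mu[1:])                  # weight on interior edges
    diag = np.concatenate(([mu[0]], mue)) + np.concatenate((mue, [mu[-1]]))
    A = np.diag(diag) - np.diag(mue, 1) - np.diag(mue, -1)
    _, V = eigh(A / h**2)                           # ascending eigenvalues
    return V[:, :K]                                 # orthonormal columns

# piecewise constant medium with one inclusion, projected onto its own basis
x = np.linspace(0.0, 1.0, 202)[1:-1]
m = 1.0 + 0.5 * ((x > 0.3) & (x < 0.6))
Phi = asi_basis(m, K=8)
m_proj = Phi @ (Phi.T @ m)                          # L2 projection
rel_err = np.linalg.norm(m - m_proj) / np.linalg.norm(m)
```

The first eigenfunctions concentrate their variation at the inclusion boundaries, so a handful of them already capture a piecewise constant medium rather well.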

Local time-stepping (LTS) methods overcome these constraints by applying different time integration strategies in different parts of the domain. More precisely, Diaz and Grote introduced an LF-based LTS approach [SIAM J. Sci. Comput. 31 (2009), pp. 1985--2014] that is fully explicit, second-order accurate, progresses with a three-term recurrence relation, and conserves a discrete energy. Recently, Grote, Mehlin, and Sauter proved optimal convergence rates for this method coupled with a standard Galerkin FE discretization, however under a CFL condition which, in fact, depends on the size of the smallest elements [SIAM J. Numer. Anal. 56 (2018), pp. 994--1021].

Here, we slightly adjust the original LF-LTS (as done recently for LF-Chebyshev methods by Carle, Hochbruck, and Sturm [SIAM J. Numer. Anal. 58 (2020), pp. 2404--2433]) to remove certain discrete values of $\Delta t$, where otherwise instabilities can occur, while nonetheless maintaining all important properties. For this new stabilized LF-LTS method, we prove optimal convergence rates under a CFL condition independent of the mesh size inside the locally refined region. Numerical experiments illustrate these results and verify that the method also preserves the optimal rates on meshes suitably graded towards a reentrant corner.
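On the coarse region the LTS scheme reduces to the classical leapfrog method, whose three-term recurrence and discrete energy conservation can be verified on a small model problem. The following sketch (a plain 1D finite-difference Laplacian, not the thesis' FE discretization or the LTS correction) checks that the half-step leapfrog energy is conserved to round-off under the CFL condition:

```python
import numpy as np

# 1D FD Laplacian with homogeneous Dirichlet BCs on (0, 1)
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

dt = 0.5 * h                               # well below the CFL limit dt <= h
x = np.linspace(h, 1.0 - h, n)
u_prev = np.sin(np.pi * x)                 # first eigenmode, zero initial velocity
u = u_prev - 0.5 * dt**2 * (A @ u_prev)    # second-order Taylor start-up step

def energy(up, uc):
    """Discrete leapfrog energy at the half step, exactly conserved."""
    v = (uc - up) / dt
    return 0.5 * v @ v + 0.5 * uc @ (A @ up)

E0 = energy(u_prev, u)
for _ in range(400):                       # three-term recurrence
    u, u_prev = 2.0 * u - u_prev - dt**2 * (A @ u), u
E1 = energy(u_prev, u)
```

For `dt` slightly above the stability limit, the same loop blows up, which is precisely the behavior the CFL condition excludes.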

The effectiveness of these LF-LTS methods is further displayed in an application to uncertainty quantification. A very popular approach to estimating statistics of quantities of interest is the robust and non-intrusive multilevel Monte Carlo (MLMC) method, which efficiently distributes the computation of samples across a hierarchy of discretizations. However, inside a given domain, some elements may be forced to remain small across all levels, thereby inducing, for time-dependent problems, a tiny time-step on every level if explicit time-stepping is used. By adapting the time-step to the locally refined elements on each level, LTS methods restore the efficiency of MLMC methods even in the presence of complex geometry, without sacrificing the explicitness and inherent parallelism.
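The telescoping idea behind MLMC can be sketched independently of any PDE. The level sampler below is purely hypothetical (a quadratic functional of a Gaussian input with an artificial level-$\ell$ bias of $2^{-\ell}$); in the application above each level would instead be a wave simulation on a finer mesh:

```python
import numpy as np

def mlmc(sample, L, N, rng):
    """Multilevel Monte Carlo estimator of E[Q_L] via the telescoping sum
    E[Q_L] = E[Q_0] + sum_{l=1}^{L} E[Q_l - Q_{l-1}],
    with N[l] independent samples per level; the same random input omega
    couples levels l and l-1, which keeps the correction variances small."""
    est = 0.0
    for l in range(L + 1):
        acc = 0.0
        for _ in range(N[l]):
            omega = rng.standard_normal()
            acc += sample(l, omega) - (sample(l - 1, omega) if l > 0 else 0.0)
        est += acc / N[l]
    return est

# Toy level hierarchy (hypothetical): Q_l(omega) = omega**2 + 2**(-l),
# so the level-l bias is 2**(-l) and E[Q_l] = 1 + 2**(-l).
rng = np.random.default_rng(42)
est = mlmc(lambda l, w: w**2 + 2.0**(-l), L=5,
           N=[4000, 2000, 1000, 500, 250, 125], rng=rng)
```

Note how the sample counts `N[l]` decrease geometrically with the level: most of the work is done on the cheap coarse levels, which is the source of the MLMC cost savings.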

schemes of high accuracy based either on classical or low-storage Runge-Kutta schemes for time-dependent Maxwell's equations. By using smaller time steps precisely where smaller elements in the mesh are located, these methods overcome the bottleneck caused by local mesh refinement in explicit time integrators.

To infer the characteristics of the medium from (boundary) measurements, one typically formulates inverse scattering problems in the frequency domain as PDE-constrained optimization problems. To find the medium for which the simulated wave field matches the measured (real) wave field, the inverse problem requires repeated solutions of forward (Helmholtz) problems. Typically, standard numerical methods, e.g. direct or iterative solvers, are used to solve the forward problem. For large-scale (or high-frequency) scattering problems, however, standard methods are known to become demanding in computation and storage. Moreover, since the optimization problem is severely ill-posed and has a large number of local minima, the inverse problem requires additional regularization, akin to minimizing the total variation. Finding a suitable regularization for the inverse problem is critical to tackle the ill-posedness and to reduce the computational cost and storage requirements.

In this thesis, we first apply standard methods to forward problems. Then, we consider the controllability method (CM) for solving the forward problem: it reformulates the problem in the time domain and seeks the time-harmonic solution of the corresponding wave equation. By iteratively reducing the mismatch between the solution at the initial time and after one period with the conjugate gradient (CG) method, the CMCG method greatly speeds up the convergence to the time-harmonic asymptotic limit. Moreover, each conjugate gradient iteration solely relies on standard numerical algorithms, which are inherently parallel and robust with respect to higher frequencies. Based on the original CM, introduced in 1994 by Bristeau et al. for sound-soft scattering problems, we extend the CMCG method to general boundary-value problems governed by the Helmholtz equation. Numerical results not only show the usefulness, robustness, and efficiency of the CMCG method for solving the forward problem, but also demonstrate remarkably accurate solutions.
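The controllability principle can be illustrated on an ODE analogue: for a periodically forced linear system, the time-harmonic solution is exactly the initial state whose trajectory returns to itself after one period. The sketch below is only an illustration under assumed data (a damped oscillator, RK4 time stepping); the tiny least-squares problem is solved directly, where the CMCG method would instead apply CG to the wave equation:

```python
import numpy as np
from scipy.linalg import expm

# Damped oscillator driven at frequency w: y' = A y + [0, cos(w t)].
w, T = 3.0, 2.0 * np.pi / 3.0
A = np.array([[0.0, 1.0], [-4.0, -0.1]])

def propagate(y0, n=2000):
    """Integrate one period of y' = A y + f(t) with classical RK4."""
    h, y, t = T / n, y0.astype(float), 0.0
    f = lambda t, y: A @ y + np.array([0.0, np.cos(w * t)])
    for _ in range(n):
        k1 = f(t, y); k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2); k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# After one period, y(T) = E y0 + g is affine in the initial state y0.
g = propagate(np.zeros(2))                  # response starting from rest
E = expm(A * T)                             # one-period propagator
# The time-harmonic solution is the T-periodic one: (I - E) y0 = g.
# CMCG minimizes |y(T; y0) - y0|^2 iteratively with CG; here the tiny
# normal equations are solved directly instead.
M = np.eye(2) - E
y0 = np.linalg.solve(M.T @ M, M.T @ g)
mismatch = np.linalg.norm(propagate(y0) - y0)
```

Time-marching to the asymptotic limit would need many periods for weak damping; enforcing periodicity directly reaches the time-harmonic state at once, which is the essence of the speed-up.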

Second, we formulate the inverse scattering problem as a PDE-constrained optimization problem to reconstruct the unknown medium. Instead of a grid-based discrete representation combined with standard Tikhonov-type regularization, the unknown medium is projected onto a small finite-dimensional subspace, which is iteratively adapted using dynamic thresholding. The adaptive (spectral) space is obtained by solving several Poisson-type eigenvalue problems. To mitigate the ill-posedness, which may cause the Newton-type optimization method to converge to a false local minimum, we combine the adaptive spectral inversion (ASI) method with a frequency stepping strategy. Numerical examples illustrate the usefulness of the ASI approach, which not only remarkably reduces the dimension of the solution space, but also yields an accurate, efficient, and robust method.


norm and convergence to the homogenized solution are proved when both the macro and the micro scales are refined simultaneously. Numerical experiments corroborate the theoretical convergence rates and illustrate the behavior of the numerical method for periodic and heterogeneous media.

exploration and medical imaging. The goal is to recover unknown media using wave propagation. The inverse problem is designed to minimize the misfit between simulated and observed data, using partial differential equations (PDE) as constraints. The resulting minimization problem is often severely ill-posed and contains a large number of local minima. To tackle the ill-posedness, several optimization and regularization techniques have been explored. However, applications still call for improvements in accuracy and stability.

In this thesis, a nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, where the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type method. Instead of a grid-based discrete representation, the unknown wave speed is projected onto a particular finite-dimensional basis, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. In fact, we show how to build an AE from the gradients of Tikhonov-regularization functionals. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency of the resulting adaptive eigenspace inversion (AEI) method and its robustness to missing or noisy data. We also consider missing frequency data and apply the AEI to the multi-parameter inverse scattering problem.

Model problems describing wave propagation include the wave equation and Maxwell's equations, which we study in this work. Both models are partial differential equations in space and time. Following the method-of-lines approach, we first discretize the two model problems in space using finite element methods (FEM) in their continuous or discontinuous form. FEM are increasingly popular in the presence of heterogeneous media or complex geometry due to their inherent flexibility: elements can be small precisely where small features are located, and larger elsewhere. Such local mesh refinement, however, also imposes severe stability constraints on explicit time integration, as the maximal time-step is dictated by the smallest elements in the mesh. When mesh refinement is restricted to a small region, the use of implicit methods, or of a very small time-step in the entire computational domain, is generally too high a price to pay.

Local time-stepping (LTS) methods alleviate that geometry induced stability restriction by dividing the elements into two distinct regions: the "coarse region" which contains the larger elements and is integrated in time using an explicit method, and the "fine region" which contains the smaller elements and is integrated in time using either smaller time-steps or an implicit scheme.

Here we first present LTS schemes based on explicit Runge-Kutta (RK) methods. Starting from classical or low-storage explicit RK methods, we derive explicit LTS methods of arbitrarily high accuracy.

We prove that the LTS-RKs(p) methods yield the same rate of convergence as the underlying RKs scheme. Numerical experiments with continuous and discontinuous Galerkin finite element discretizations corroborate the expected rates of convergence and illustrate the usefulness of these LTS-RK methods.

As a second method we propose local exponential Adams-Bashforth (LexpAB) schemes. Unlike LTS schemes, LexpAB methods overcome the severe stability restrictions caused by local mesh refinement not by integrating with a smaller time-step but by using the exact matrix exponential in the fine region. Thus, they present an interesting alternative to the LTS schemes. Numerical experiments in 1D and 2D confirm the expected order of convergence and demonstrate the versatility of the approach in cases of extreme refinement.
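The stability gain from treating the stiff part exactly can be seen already on a diagonal toy system. In the sketch below (an assumed test problem, not the LexpAB scheme itself), the stiff mode mimics the locally refined fine region: the matrix exponential propagates it exactly with a time-step far beyond the explicit stability limit, while forward Euler at the same step blows up:

```python
import numpy as np
from scipy.linalg import expm

# Stiff diagonal test system: the third mode mimics the "fine region"
# whose small elements would force a tiny explicit time-step.
A = np.diag([-1.0, -10.0, -1.0e4])
h = 0.1                                    # far beyond the explicit limit 2e-4
E = expm(h * A)                            # exact propagator for y' = A y

y_exp = np.ones(3)                         # exponential integrator
y_fe = np.ones(3)                          # forward Euler, same step
for _ in range(50):
    y_exp = E @ y_exp                      # stays bounded and decays
    y_fe = y_fe + h * (A @ y_fe)           # stiff mode amplified by -999/step
```

In practice the exponential is of course never formed for a large FE matrix; it is applied to vectors, and only on the small fine-region block, which is what keeps the approach competitive.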

In this thesis, interior-point (IP) methods are considered to solve nonconvex large-scale PDE-constrained optimization problems with inequality constraints. To cope with the enormous fill-in of direct linear solvers, inexact search directions are allowed in an inexact interior-point (IIP) method. This thesis builds upon the IIP method proposed in [Curtis, Schenk, Wächter, SIAM Journal on Scientific Computing, 2010]. To cope with the lack of inertia information, SMART tests control the Hessian modification and also specify termination tests for the iterative linear solver.

The original IIP method needs to solve two sparse large-scale linear systems in each optimization step. This is improved to only a single linear system solution in most optimization steps. Within this improved IIP framework, two iterative linear solvers are evaluated: A general-purpose algebraic multilevel incomplete LDL^T preconditioned SQMR method is applied to PDE-constrained optimization problems for optimal server room cooling in three space dimensions and to compute an ambient temperature for optimal cooling. The results show the robustness and efficiency of the IIP method when compared with the exact IP method.

These advantages are even more evident for a reduced-space preconditioned (RSP) GMRES solver which takes advantage of the linear system's structure. This RSP-IIP method is studied on the basis of distributed and boundary control problems originating from superconductivity and from two-dimensional and three-dimensional parameter estimation problems in groundwater modeling. The numerical results exhibit the improved efficiency especially for multiple PDE constraints.

An inverse medium problem for the Helmholtz equation with pointwise box constraints is solved by IP methods. The ill-posedness of the problem is explored numerically and different regularization strategies are compared. The impact of box constraints and the importance of Hessian modification for the optimization algorithm are demonstrated. A real-world seismic imaging problem is solved successfully by the RSP-IIP method.

It can be solved by standard numerical methods such as, e.g., the finite element (FE) or the finite difference method. However, if the wave propagation speed varies on a microscopic length scale denoted by epsilon, the computational cost becomes prohibitive, since the medium must be resolved down to its finest scale. In this thesis we propose multiscale numerical methods which approximate the overall macroscopic behavior of the wave propagation with a substantially lower computational effort. We follow the design principles of the heterogeneous multiscale method (HMM), introduced in 2003 by E and Engquist. This method relies on a coarse discretization of an a priori unknown effective equation. The missing data, usually the parameters of the effective equation, are estimated on demand by solving microscale problems on small sampling domains. Hence, no precomputation of these effective parameters is needed. We choose FE methods to solve both the macroscopic and the microscopic problems.
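In one space dimension the effective coefficient of the homogenized equation is known in closed form: it is the harmonic mean of the periodic coefficient. The sketch below (with an assumed, illustrative microstructure) recovers it from a single sampling cell, which is in essence the data a micro problem supplies to the macro solver:

```python
import numpy as np

# 1-periodic micro coefficient a(y) = 2 + cos(2*pi*y); in 1D the
# homogenized coefficient is the harmonic mean (integral of 1/a)^(-1),
# which equals sqrt(3) for this choice.
a = lambda y: 2.0 + np.cos(2.0 * np.pi * y)
M = 100000
y = (np.arange(M) + 0.5) / M          # midpoint quadrature over one cell
a_hom = 1.0 / np.mean(1.0 / a(y))     # harmonic mean, approx. 1.7320508
```

The harmonic mean lies strictly below the arithmetic mean (here 2), which reflects the well-known fact that fine oscillations slow the macroscopic wave.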

For limited times the overall behavior of the wave is well described by the homogenized wave equation. We prove that the FE-HMM method converges to the solution of the homogenized wave equation. With increasing time, however, the true solution deviates from the classical homogenization limit, as a large secondary wave train develops. Neither the homogenized solution nor the FE-HMM captures these dispersive effects. To capture them we need to modify the FE-HMM. Inspired by higher-order homogenization techniques, we additionally compute a correction term of order epsilon^2. Since its computation also relies on the solution of the same microscale problems as the original FE-HMM, the computational effort remains essentially unchanged. For this modified version we also prove convergence to the homogenized wave equation, but in contrast to the original FE-HMM the long-time dispersive behavior is recovered.

The convergence proofs for the FE-HMM follow from new Strang-type results for the wave equation. The results are general enough that the FE-HMM with and without the long-time correction fits into the setting, even if numerical quadrature is used to evaluate the arising L^2 inner product.

In addition to these results we give alternative formulations of the FE-HMM, where the elliptic micro problems are replaced by hyperbolic ones. All the results are supported by numerical tests. The versatility of the method is demonstrated by various numerical examples.

cultures were analyzed to investigate how densely packed cells organize. A mathematical model was introduced in Edelstein-Keshet and Ermentrout (1990) to show that pattern formation can be caused by the mere interactions of individual cells, although it is a population phenomenon. Until then, the formation of structures was attributed only to other mechanisms, such as chemical gradients (chemotaxis) or mechanical stresses. In this regard we refer in particular to Oster, where a mathematical analysis was proposed to understand how these mechanisms conspire to generate organized spatial aggregations. In Edelstein-Keshet and Ermentrout (1990), the authors indeed showed that the self-organization of cells can be explained by the contact responses of the cells alone. Their integro-differential equations considered the distribution of the cells as a function of time and of the angle of orientation. They presented two equations, one for bound cells and one for free cells. Furthermore, Mogilner extended the model to also take into account the spatial distribution of the cells. We could not find in the literature any similar work applied to the cells considered in this dissertation. Because of the similarities between these cells and fibroblasts, we decided to start from this last model.

We worked in tight collaboration with the Tissue Engineering Group (TEG) at the University Hospital in Basel. Cartilage tissue engineering is a novel and promising approach to repair articular cartilage defects. This procedure requires that cartilage cells (chondrocytes) are isolated from a small biopsy and expanded in vitro, generally on two-dimensional culture plates (monolayer), to augment their original number. The expanded cells are then cultured on specific biosynthetic materials and grafted into the cartilage defects. One of the challenges of this procedure is that the chondrocytes undergo only a limited number of divisions in vitro. A possible way to overcome this limit consists in the supplementation of specific bioactive molecules (growth factors) during the culture of the chondrocytes. To investigate how these growth factors influence the cell expansion, we were asked to develop an appropriate mathematical model.

In a first step, we developed a model combining time-lag (delay) and logistic equations to capture the kinetic parameters and to describe the complete growth process of the cell culture.

However, this model only describes how the number of cells changes in time, without considering the spatial evolution of the cells on a two-dimensional substrate. In previous experiments we had observed that chondrocytes cultured with growth factors change not only their shape, but also their main characteristics, becoming very similar to fibroblasts. This suggested that we start from the model developed by Mogilner, which, however, does not consider cell duplication. We extended this model in an innovative way, adding a logistic term to follow the cell dynamics during the entire culture time. In particular, we used this model to analyze the formation of patterns at confluence. Indeed, it was observed in experiments that when the density of the cells reaches a critical level, there is a spontaneous tendency to align along some common axis of orientation. The selection of a preferred axis of orientation can be explained by the fact that the uniform steady state (in which cells are uniformly distributed in orientation and space) may be unstable under particular conditions. We used linear stability theory to test for the presence of such an instability. Indeed, bifurcations can lead to loss of stability of a uniform steady state in favor of patterned states, where cells are aligned in parallel arrays or aggregated in clusters. We remark that we always tried not to lose the link with the biological context by constantly discussing our results with the TEG. In particular, to compare our results with biological experiments, it was essential to use sophisticated image analysis tools that also allow the orientation of the cells to be analyzed.

One of the main problems the TEG is confronting is the variability in the behavior of chondrocytes isolated from different donors. In a study performed to investigate age-related changes in proliferation and post-expansion tissue-forming capacity \cite{Barbero04}, an extreme variability in these properties was unexpectedly observed among chondrocytes derived from donors within the same age range. In this regard, the model we present could help biologists either in defining conditions that improve chondrocyte properties or in identifying donor cells that have adequate characteristics for clinical application.

thus require an artificial boundary B, which truncates the unbounded exterior domain and restricts the region of interest to a finite computational domain. It then becomes necessary to impose a boundary condition at B which ensures that the solution in the computational domain coincides with the restriction of the solution in the unbounded region. If we exhibit a boundary condition such that the fictitious boundary appears perfectly transparent, we shall call it exact. Otherwise it will correspond to an approximate boundary condition and generate some spurious reflection, which travels back and spoils the solution everywhere in the computational domain. In addition to the transparency property, we require the computational effort involved with such a boundary condition to be comparable to that of the numerical method used in the interior. Otherwise the boundary condition will quickly be dismissed as prohibitively expensive and impractical. The constant demand for increasingly accurate, efficient, and robust numerical methods, which can handle a wide variety of physical phenomena, spurs the search for improvements in artificial boundary conditions.

In the last decade, the perfectly matched layer (PML) approach [16] has proved a flexible and accurate method for the simulation of waves in unbounded media. Standard PML formulations, however, usually require wave equations stated in their standard second-order form to be reformulated as first-order systems, thereby introducing many additional unknowns. To circumvent this cumbersome and somewhat expensive step, we propose instead a simple PML formulation directly for the wave equation in its second-order form. Our formulation requires fewer auxiliary unknowns than previous formulations [23, 94].

Starting from a high-order local nonreflecting boundary condition (NRBC) for single scattering [55], we derive a local NRBC for time-dependent multiple scattering problems, which is completely local both in space and time. To do so, we first develop a high-order exterior evaluation formula for a purely outgoing wave field, given its values and those of certain auxiliary functions needed for the local NRBC on the artificial boundary. By combining that evaluation formula with the decomposition of the total scattered field into purely outgoing contributions, we obtain the first exact, completely local NRBC for time-dependent multiple scattering. Remarkably, the information transfer (of time-retarded values) between sub-domains only occurs across those parts of the artificial boundary where outgoing rays intersect neighboring sub-domains, i.e. typically only across a fraction of the artificial boundary. The accuracy, stability, and efficiency of this new local NRBC are evaluated by coupling it to standard finite element or finite difference methods.

order electromagnetic and acoustic wave equations by the interior penalty (IP) discontinuous Galerkin (DG) finite element method (FEM). In Part I we focus on time-harmonic Maxwell source problems in the high-frequency regime. Part II is devoted to the study of the IP DG FEM for time-dependent acoustic and electromagnetic wave equations.

We begin by stating Maxwell's equations in the time and frequency domains. We proceed with a variational formulation of Maxwell's equations and describe the key challenges faced in the analysis of the Maxwell operator. Then, we review conforming finite element methods to discretize the second-order Maxwell operator. We end this general introduction with some numerical results that highlight the performance and feasibility of conforming FEM for Maxwell's equations.

Chapter 2: In this chapter, we introduce and analyze the interior penalty discontinuous Galerkin method for the numerical discretization of the indefinite time-harmonic Maxwell equations in the high-frequency regime. Based on suitable duality arguments, we derive a-priori error bounds in the energy norm and the $L^2$-norm. In particular, the error in the energy norm is shown to converge with the optimal order $O(h^{\min\{s,\ell\}})$ with respect to the mesh size $h$, the polynomial degree $\ell$, and the regularity exponent $s$ of the analytical solution. Under additional regularity assumptions, the $L^2$-error is shown to converge with the optimal order $O(h^{\ell+1})$. The theoretical results are confirmed in a series of numerical experiments on triangular meshes.

The thesis author's principal contributions are the proof of the $L^2$-error bound in Section 2.6 and the proof of Lemma 2.4.1.

Chapter 3: We present and analyze an interior penalty method for the numerical discretization of the indefinite time-harmonic Maxwell equations in mixed form. The method is based on the mixed discretization of the curl-curl operator developed in [44] and can be understood as a non-stabilized variant of the approach proposed in [63]. We show the well-posedness of this approach and derive optimal a-priori error estimates in the energy norm as well as the $L^2$-norm. The theoretical results are confirmed in a series of numerical experiments.

The thesis author's principal contribution is the proof of the $L^2$-error bound in Section 3.6.

Chapter 4: The symmetric interior penalty discontinuous Galerkin finite element method is presented for the numerical discretization of the second-order scalar wave equation. The resulting stiffness matrix is symmetric positive definite and the mass matrix is essentially diagonal; hence, the method is inherently parallel and leads to fully explicit time integration when coupled with an explicit time-stepping scheme. Optimal a-priori error bounds are derived in the energy norm and the $L^2$-norm for the semi-discrete formulation. In particular, the error in the energy norm is shown to converge with the optimal order $O(h^{\min\{s,\ell\}})$ with respect to the mesh size $h$, the polynomial degree $\ell$, and the regularity exponent $s$ of the continuous solution. Under additional regularity assumptions, the $L^2$-error is shown to converge with the optimal order $O(h^{\ell+1})$. Numerical results confirm the expected convergence rates and illustrate the versatility of the method.

Chapter 5: We develop the symmetric interior penalty discontinuous Galerkin (DG) method for the spatial discretization, in the method-of-lines approach, of the time-dependent Maxwell equations in second-order form. We derive optimal a-priori estimates for the semi-discrete error in the energy norm. For smooth solutions, these estimates hold for DG discretizations on general finite element meshes. For low-regularity solutions that have singularities in space, the theoretical estimates hold on conforming, affine meshes. Moreover, on conforming triangular meshes, we derive optimal error estimates in the $L^2$-norm. Finally, we validate our theoretical results by a series of numerical experiments.