Research Group of Prof. Dr. Ira Neitzel
Research Seminar: Mathematics of Computation
Prof. Carsten Burstedde and Prof. Ira Neitzel (previously with Prof. Markus Bachmayr, Prof. Michael Feischl, Dr. Tino Ullrich, and Prof. André Uschmajew)
Upcoming talks
Thursday, 23.05.2019 13.00 (s.t), EA19b 2.035, Alexander Zimmerman (RWTH Aachen)
Automatic implementation and verification of accurate and efficient finite element simulations using UFL and Firedrake
Often when developing simulations of physical systems, application researchers spend much of their time writing and debugging code which implements long-established general methods. Rather than reinventing the wheel, many researchers learn to use BLAS for matrix multiplication, LAPACK for dense linear algebra, and PETSc for solving sparse linear and nonlinear systems, often based on unstructured geometric meshes. These software projects have been tremendously successful. Many other important methods in computational science and engineering have yet to be made so accessible.
This talk explores a ubiquitous technology which is not sufficiently accessible: finite element methods (FEM). Many good FEM libraries (e.g. deal.II, Dune, Moose, FEniCS, Firedrake) exist; but they all fill specific niches, provide specific features, and require substantial user expertise. Underpinning FEniCS and Firedrake is perhaps the most promising finite element software technology, the Unified Form Language (UFL). The key idea of UFL is that, by fully exploiting the mathematical rigor of FEM, a concise and abstract domain-specific language can be provided to users, and a library can automatically write and compile the optimized low-level code. I will demonstrate UFL by applying Firedrake to the multiphysics problem of convection-coupled phase change. Furthermore, I will show how I used UFL to automate the previously daunting task of formally verifying FEM codes via the method of manufactured solutions.
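As a flavor of what verification by manufactured solutions means, here is a stdlib-only sketch (deliberately not using Firedrake or UFL; the 1D finite-difference solver and the chosen exact solution are illustrative assumptions): pick an exact solution, derive the matching forcing term by hand, solve, and check that the observed convergence order matches the scheme.

```python
import math

def solve_poisson(n):
    """Solve -u'' = f on (0,1), u(0)=u(1)=0, with n interior points.
    Manufactured solution u(x) = sin(pi x)  =>  f(x) = pi^2 sin(pi x)."""
    h = 1.0 / (n + 1)
    f = [math.pi**2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
    # Thomas algorithm for the tridiagonal system with stencil (-1, 2, -1)/h^2.
    a, b, c = -1.0 / h**2, 2.0 / h**2, -1.0 / h**2
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, f[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (f[i] - a * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u, h

def max_error(n):
    """Maximum nodal error against the manufactured exact solution."""
    u, h = solve_poisson(n)
    return max(abs(u[i] - math.sin(math.pi * (i + 1) * h)) for i in range(n))

# Halving h should reduce the error by about a factor 4 (second-order scheme).
rate = max_error(32) / max_error(64)
```

In a UFL-based code the same test is automated: the forcing term is derived symbolically from the manufactured solution instead of by hand.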
Talks in the past
Thursday, 13.12.2018 14.30 (s.t), EA19b 2.041, Herman Haverkort (Bonn)
Space-filling curve properties for mesh traversals
Space-filling curves based on recursive tessellations can be used to define the order in which to process elements of computational meshes, the order in which to lay out mesh elements in memory, or the order in which to distribute them over different processors: we simply use the order in which the mesh elements appear along the curve. In this talk I will present definitions of properties of space-filling curves that may influence parameters such as the amount of inter-processor communication or the number of page faults incurred in accessing mesh elements. In particular:

Stack-and-stream methods can be very efficient with memory access through caches, but they require space-filling curves that have specific symmetry properties (palindromicity). I will explain what we know about the existence of such curves in 2D and 3D, for cubic and tetrahedral tessellations, and what open questions remain.

In various ways, memory access and communication efficiency may benefit from curves of which every subsection (i.e. set of points that are consecutive along the curve) is relatively well-shaped, for example having small diameter and surface area compared to its volume. Not all space-filling curves are equally good in this respect. I will explain what we know about what can be achieved (lower and upper bounds) according to various quality measures.
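To make the "order along the curve" concrete, here is a small stdlib-only sketch: the classic bit-manipulation map from a Hilbert-curve index to 2D grid coordinates (the grid size and the adjacency check below are illustrative assumptions, not part of the talk).

```python
def hilbert_d2xy(order, d):
    """Map index d along the 2D Hilbert curve to (x, y) on a 2^order x 2^order grid
    (classic bit-twiddling formulation)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        # Rotate/reflect the quadrant so the sub-curve has the right orientation.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Traversal order for a 4x4 mesh: process the cells in curve order.
order2 = [hilbert_d2xy(2, d) for d in range(16)]
```

Consecutive indices map to face-neighboring cells, which is exactly the locality that mesh traversals and partitioning schemes exploit.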
Thursday, 29.11.2018 15.00 (s.t), EA19b 2.035/2.041, Dr. Markus Faustmann (TU Wien) (Joint work with Jens Markus Melenk and Dirk Praetorius)
Optimal adaptivity for the fractional Laplacian
We study the operator $(-\Delta)^s$ for $s \in (0,1)$, the so-called (integral) fractional Laplacian, defined as the principal value integral $(-\Delta)^s u(x) := C(d,s) \; \text{P.V.} \int_{\mathbb{R}^d}\frac{u(x)-u(y)}{|x-y|^{d+2s}}\,dy.$ In this talk, we present novel inverse estimates for the fractional Laplacian. More precisely, we show that a weighted $L^2$-norm of the fractional Laplacian, where the weight is a power of the local mesh width, can be bounded by the energy norm. Generalizing the arguments used in the boundary element method, [1], the nonlocal integral operator is split into a localized near-field part and a smoother far-field part, which is treated using the so-called Caffarelli-Silvestre extension problem and interior regularity estimates.
Weighted $L^2$-norms appear naturally in the context of a-posteriori error estimation in adaptive methods. With the help of our inverse estimate, we prove optimal convergence rates of an adaptive algorithm steered by a weighted residual error estimator using the axiomatic approach of [2]. Moreover, we propose a different, reliable weighted error estimator to cover the open case of fractional powers larger than $3/4$.
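For orientation, such adaptive algorithms follow the usual solve-estimate-mark-refine loop; the marking step (Dörfler, or "bulk chasing", marking) can be sketched in a few lines of stdlib Python. The function name and the sample indicator values are hypothetical.

```python
def doerfler_mark(indicators, theta=0.5):
    """Return indices of a minimal set M of elements with
    sum_{i in M} eta_i^2 >= theta * sum_i eta_i^2 (Doerfler marking).
    Greedily taking the largest indicators yields a set of minimal cardinality."""
    total = sum(e * e for e in indicators)
    ranked = sorted(range(len(indicators)), key=lambda i: -indicators[i])
    marked, acc = [], 0.0
    for i in ranked:
        if acc >= theta * total:
            break
        marked.append(i)
        acc += indicators[i] ** 2
    return marked

# Hypothetical local error indicators on six elements:
eta = [0.1, 0.8, 0.05, 0.3, 0.02, 0.4]
```

Only the marked elements are refined in the next step; the convergence-rate analysis cited above concerns exactly this kind of loop.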
References:
[1] Aurada, M., Feischl, M., Führer, T., Karkulik, M., Melenk, J.M., and Praetorius, D. Local inverse estimates for nonlocal boundary integral operators. Mathematics of Computation, 86(308):2651–2686, 2017.
[2] Carstensen, C., Feischl, M., Page, M., and Praetorius, D. Axioms of adaptivity. Computers & Mathematics with Applications, 67(6):1195–1253, 2014.
Thursday, 25.10.2018 13:30 (s.t), EA19b 2.035, Paula Castro (MODEMAT, Escuela Politécnica Nacional, Quito) (Joint work with Juan Carlos De los Reyes, Escuela Politécnica Nacional, Quito)
A bilevel learning approach for optimal observation placement in variational data assimilation
In this talk we propose a bilevel optimization approach for the placement of observations in variational data assimilation problems. Within the framework of supervised learning, we consider a bilevel problem where the lower-level task is the variational reconstruction of the initial condition of the system, and the upper-level problem solves the optimal placement with the help of a sparsity-inducing norm. Due to the pointwise nature of the observations, an optimality system with regular Borel measures on the right-hand side is obtained as a necessary and sufficient optimality condition for the lower-level problem. The latter is then considered as a constraint for the upper-level instance, yielding an optimization problem constrained by time-dependent PDEs with measures. After proving some extra regularity results, we demonstrate the existence of Lagrange multipliers and derive a necessary optimality system characterizing the optimal solution of the bilevel problem. The numerical solution is carried out on two levels as well. The lower-level problem is solved using a standard BFGS method, while the upper-level one is solved by means of a projected BFGS algorithm based on the estimation of ε-active sets. A penalty function is also considered for enhancing sparsity of the location weights. Finally, some numerical experiments are presented to illustrate the main features of our approach.
Thursday, 13.09.2018 14:00 (s.t), EA19b 2.035, Johannes Pfefferer (TU München)
hp-Finite Elements for Fractional Diffusion
Thursday, 21.06.2018 13:00 (s.t), We6 6.020, Kathrin Smetana (Universiteit Twente)
Randomized model order reduction
During the last decades, (numerical) simulations based on partial differential equations have considerably gained importance in engineering applications, life sciences, environmental issues, and finance. However, especially when multiple simulation requests or a real-time simulation response are desired, standard methods such as finite elements are prohibitive. Model order reduction approaches have been developed to tackle such situations. Here, the key concept is to prepare a problem-adapted low-dimensional subspace of the high-dimensional discretization space in a possibly expensive offline stage, in order to realize a fast simulation response by Galerkin projection onto that low-dimensional space in the subsequent online stage.
In this talk we show how randomization, as used for instance in randomized linear algebra or compressed sensing, can be exploited both for constructing reduced order models and for deriving bounds for the approximation error. We also demonstrate those techniques for the generation of local reduced approximation spaces that can be used within domain decomposition or multiscale methods.
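A minimal stdlib-only sketch of one such randomization technique, the randomized range finder in the spirit of Halko, Martinsson and Tropp (the test matrix, rank and sample count are illustrative assumptions; a production code would use optimized linear algebra):

```python
import math
import random

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def randomized_range(A, k, seed=0):
    """Randomized range finder: sample Y = A @ Omega with a Gaussian test
    matrix Omega, then orthonormalize the samples by modified Gram-Schmidt.
    Numerically dependent directions are discarded."""
    rng = random.Random(seed)
    m = len(A[0])
    cols = [matvec(A, [rng.gauss(0.0, 1.0) for _ in range(m)]) for _ in range(k)]
    q = []
    for y in cols:
        for u in q:
            c = sum(a * b for a, b in zip(u, y))
            y = [a - c * b for a, b in zip(y, u)]
        nrm = math.sqrt(sum(a * a for a in y))
        if nrm > 1e-8:
            q.append([a / nrm for a in y])
    return q  # list of orthonormal basis vectors of the sampled range

# Hypothetical rank-2 test matrix A[i][j] = u1[i]*v1[j] + u2[i]*v2[j].
n = 8
u1 = [1.0] * n
u2 = [float(i) for i in range(n)]
A = [[u1[i] * 1.0 + u2[i] * (0.5 ** j) for j in range(n)] for i in range(n)]
Q = randomized_range(A, k=4)
```

For a matrix of exact rank 2, four Gaussian samples capture the range almost surely, so the Gram-Schmidt step keeps exactly two directions; the projection error of A onto span(Q) is then at rounding level.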
Thursday, 26.04.2018 13:00 (s.t), We6 6.020, Matthias Schlottbom (Universiteit Twente) (Joint work with Herbert Egger, TU Darmstadt)
A perfectly matched layer approach for radiative transfer in highly scattering regimes
We consider the numerical approximation of boundary conditions in radiative transfer problems by a perfectly matched layer approach. The main idea is to extend the computational domain by an absorbing layer and to use an appropriate reflection boundary condition at the boundary of the extended domain. A careful analysis shows that the consistency error introduced by this approach can be made arbitrarily small by increasing the size of the extension domain or the magnitude of the artificial absorption in the surrounding layer. A particular choice of the reflection boundary condition allows us to circumvent the half-space integrals that arise in the variational treatment of the original vacuum boundary conditions and which destroy the sparse coupling observed in numerical approximation schemes based on truncated spherical harmonics expansions. A combination of the perfectly matched layer approach with a mixed variational formulation and a PN-finite element approximation leads to discretization schemes with optimal sparsity pattern and provable quasi-optimal convergence properties. As demonstrated in numerical tests these methods are accurate and very efficient for radiative transfer in the scattering regime.
Thursday, 07.12.2017 13:00 (s.t), We6 6.020, Kristin Kirchner (Chalmers)
Variational Methods for Moments of Solutions to Stochastic Differential Equations
We consider parabolic stochastic partial differential equations with multiplicative Wiener noise. For the second moment of the mild solution, a deterministic space-time variational problem is derived. Well-posedness is proven on projective and injective tensor product spaces as trial and test spaces. From these results, a deterministic equation for the covariance function is deduced. These deterministic equations in variational form are used to derive numerical methods for approximating the second moment of solutions to stochastic ordinary and partial differential equations. For the canonical example of a stochastic ordinary differential equation with multiplicative noise, the geometric Brownian motion, we introduce and analyze Petrov-Galerkin discretizations based on tensor product piecewise polynomials. For approximating the second moment of solutions to stochastic partial differential equations, we then propose conforming space-time Petrov-Galerkin discretizations. In both cases, we derive stability bounds in the natural tensor product spaces. Numerical experiments validate the theoretical results.
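As an illustration of the closed-form second moment in the scalar case, a stdlib-only sketch (the parameter values are illustrative assumptions, and plain Monte Carlo stands in for the Petrov-Galerkin discretizations of the talk): for geometric Brownian motion $dX = \mu X\,dt + \sigma X\,dW$ one has $\mathbb{E}[X_t^2] = X_0^2 e^{(2\mu+\sigma^2)t}$.

```python
import math
import random

def second_moment_exact(x0, mu, sigma, t):
    """Closed-form second moment of geometric Brownian motion."""
    return x0 ** 2 * math.exp((2.0 * mu + sigma ** 2) * t)

def second_moment_mc(x0, mu, sigma, t, n=200_000, seed=1):
    """Monte Carlo estimate using the exact solution
    X_t = X0 * exp((mu - sigma^2/2) t + sigma W_t)."""
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n):
        w = rng.gauss(0.0, math.sqrt(t))
        x = x0 * math.exp((mu - 0.5 * sigma ** 2) * t + sigma * w)
        s += x * x
    return s / n

exact = second_moment_exact(1.0, 0.05, 0.2, 1.0)
estimate = second_moment_mc(1.0, 0.05, 0.2, 1.0)
```

The deterministic variational methods of the talk target exactly such moment quantities, but without sampling.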
Thursday, 14.12.2017 10:15 (s.t), We6 6.020, Peter Oswald (Bonn) (Joint work with M. Griebel, Bonn)
Schwarz iterative methods and RKHS theory
Schwarz iterative methods in Hilbert spaces are a recurring theme in my collaboration with M. Griebel. Lately, we became interested in greedy and stochastic versions of this class of iterative methods, where a variational problem in an infinite-dimensional Hilbert space $H$ is decomposed into infinitely many auxiliary subproblems. Convergence of particular instances of the Schwarz iteration (called relaxed greedy and averaged stochastic descent, respectively) can be proved, and algebraic rates of convergence can be obtained for large subsets of $H$ (this is in contrast to finite decompositions, where rates are exponential in the number of iteration steps). When working on the stochastic versions, we realized a certain connection to the theory of reproducing kernel Hilbert spaces and online learning algorithms. It turns out that the iterative solution of variational problems in a Hilbert space via subproblem solves based on space splittings can always be viewed as online learning of a variable Hilbert space valued function using kernel methods. Even though this connection does not help in performing the algorithm, it sheds new light on the convergence proofs. The introduction of RKHS of variable Hilbert space valued functions may have some value for statistical learning as well; this hope is, however, not yet substantiated.
Thursday, 30.11.2017 13:00 (s.t), We6 6.020, Virginie Ehrlacher (École des Ponts ParisTech) (joint work with Tony Lelièvre and Pierre Monmarché)
Tensorized ABF method for molecular dynamics
The Adaptive Biasing Force (ABF) method is one of the most popular methods in molecular dynamics simulation to avoid problems linked to the metastability of the Langevin processes used to compute macroscopic thermodynamic quantities. The method relies on the use of coarse-grained descriptors of the atomic system, called reaction coordinates, and on the computation of an appropriate function depending on these reaction coordinates, called the free energy. However, when the number of relevant reaction coordinates is large, the standard ABF method suffers from the well-known curse of dimensionality. The aim of this talk is to present theoretical results on the convergence of a tensor method to circumvent this difficulty and enable the use of ABF methods in cases where the number of reaction coordinates is significant.
Thursday, 23.11.2017 13:00 (s.t), We6 6.020, Felix Voigtlaender (TU Berlin)
Optimal approximation of piecewise smooth functions using deep ReLU neural networks
Recently, machine learning techniques based on deep neural networks have significantly improved the state of the art in tasks like image classification and speech recognition. Nevertheless, a solid theoretical explanation of this success story is still missing. In this talk, we will present recent results concerning the approximation theoretic properties of deep neural networks which help to explain some of the characteristics of such networks; in particular we will see that deeper networks can approximate certain classification functions much more efficiently than shallow networks. We emphasize though that these approximation theoretic properties do not explain why simple algorithms like stochastic gradient descent work so well in practice, or why deep neural networks tend to generalize so well; we purely focus on the expressive power of such networks. Precisely, as a model class for classifier functions we consider the class of (possibly discontinuous) piecewise smooth functions for which the different “smooth regions” are separated by smooth hypersurfaces. Given such a function, and a desired approximation accuracy, we construct a neural network which achieves the desired approximation accuracy, where the error is measured in $L^2$. We give precise bounds on the required size (in terms of the number of weights) and depth of the network, depending on the approximation accuracy, on the smoothness parameters of the given function, and on the dimension of its domain of definition. Finally, we show that this size of the networks is optimal, and that networks of smaller depth would need significantly more weights than the deep networks that we construct, in order to achieve the desired approximation accuracy.
Thursday, 16.11.2017 10:15 (s.t), We6 6.020, Thomas Hangelbroek
Nonlinear and anisotropic approximation with Gaussians
For approximation with most positive definite kernels – perhaps most obviously for Gaussian radial basis functions – the selection of a scale (by tuning of shape parameters) is challenging, yet exigent in the following sense: for constant scale approximation, high approximation rates are possible, but at the cost of conditioning, while for stationary scaling (fixing the scale to fit a grid) the approximation problem can be made very stable (e.g., with interpolation matrices nearing the identity) but approximation power is lost. For this reason, multiscale strategies, which use kernels at different dilation levels, have been considered. But this aspect of kernel approximation is relatively new, and its theoretical understanding lags far behind that of traditional multiscale tools like wavelets and tensor product splines. In this talk we discuss a pair of nonlinear, multiscale Gaussian approximation problems. The first, considered with Amos Ron, features approximation by Gaussians at multiple, spatially varying scales, and provides correct rates for functions in standard smoothness spaces for nonlinear approximation. The second, recently considered with Amos Ron and Wolfgang Erb, treats $N$-term Gaussian approximation of cartoon class functions with rates comparable to those of curvelets and shearlets.
Thursday, 09.11.2017 13:00 (s.t), We6 6.020, Bastian Bohn (Universität Bonn)
On the best approximation error of energy sparse grids in $H^1$
In this talk, we present a specific variant of a sparse grid, which turns out to be optimal for the $H^1$-approximation of functions from the Sobolev space of bounded mixed smoothness $H^2_{\text{mix},0}([0,1]^d)$ which vanish on the boundary. We also discuss tractability and relations to other results in similar settings.
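To give a sense of the savings such constructions aim for, here is a stdlib-only count of sparse-grid versus full-grid points. The level convention used (interior hierarchical levels $l$ with $|l|_1 \le n + d - 1$, level $l_i$ contributing $2^{l_i-1}$ points per direction) is one common choice and an assumption here, not necessarily the variant of the talk.

```python
from itertools import product

def sparse_grid_size(d, n):
    """Number of interior points of a regular sparse grid of level n on [0,1]^d
    for functions vanishing on the boundary."""
    count = 0
    for levels in product(range(1, n + 1), repeat=d):
        if sum(levels) <= n + d - 1:
            points = 1
            for l in levels:
                points *= 2 ** (l - 1)
            count += points
    return count

def full_grid_size(d, n):
    """Interior points of the full tensor product grid of level n."""
    return (2 ** n - 1) ** d
```

For d = 2, n = 3 this gives 17 sparse-grid points against 49 full-grid points, and the gap widens rapidly with the dimension d.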
Thursday, 26.10.2017 13:00 (s.t), We6 6.020, Albert Cohen (Université Pierre et Marie Curie, Paris)
Data assimilation, parameter estimation and reduced modeling
This talk will address some recent mathematical advances on two problems which occur in parameter estimation in PDEs from observed data. The first one concerns the identifiability of the scalar non-constant diffusion coefficient $a$ in a second-order elliptic PDE from the observation of the full solution $u$ for a given right-hand side $f$. Our main results show that for a strictly positive right-hand side, and $a$ belonging to certain smoothness classes, identification is possible with Hölder dependence of $a$ on $u$, and that Lipschitz dependence does not generally hold. The second problem concerns the recovery of the full solution $u$ from a finite number of linear measurements representing the observed data. Motivated by reduced modeling, the a priori additional information about $u$ is in the form of how well it can be approximated by a certain known subspace of given dimension (reduced bases, POD). Algorithms that yield near-optimal recovery bounds are proposed. These results were obtained in collaborations with Peter Binev, Andrea Bonito, Wolfgang Dahmen, Ronald DeVore, Guergana Petrova, Gerrit Welper and Przemyslaw Wojtaszczyk.
16.02.2017 13:00 (s.t), We6 6.020, Florian Kruse (Graz)
Optimal control of semilinear parabolic equations by BV-functions
This talk addresses parabolic optimal control problems in which the objective promotes controls that are piecewise constant in time. That is, optimal controls for these problems have a particularly simple structure. To achieve this, we consider controls that are BV-functions in time and use the total variation norm of their derivatives in the objective. This implies that the objective is nonsmooth. We prove existence of optimal controls, derive first- and second-order optimality conditions, provide error estimates, and develop a semismooth Newton method for their solution. Numerical examples are presented as well.
09.02.2017 13:00 (s.t), We6 6.020, David Krieg (Jena)
On the approximation of tensor product operators
Let $T$ be a bounded linear operator between two Hilbert spaces. We want to approximate $T$ by an algorithm that evaluates less than a given number $n$ of linear functionals. The minimal worst-case error of such an algorithm is given by the $n$th approximation number of $T$. If $T$ is a tensor product operator, this is the $n$th largest number in the tensor product of the sequences of approximation numbers of its factors. I will talk about the asymptotic and preasymptotic behavior of tensor products of sequences of polynomial decay. The results will be applied to the $L_2$-approximation of mixed-order Sobolev functions on the $d$-cube. It turns out that this problem is much harder for non-periodic functions than for periodic functions if $n\leq 2^d$. Asymptotically, however, there is no difference at all. This investigation is inspired by [1] and can be found in [2].
[1] T. Kühn, W. Sickel, T. Ullrich: Approximation of mixed order Sobolev functions on the d-torus – asymptotics, preasymptotics and d-dependence. Constructive Approximation 42, 353–398, 2015.
[2] D. Krieg: Tensor power sequences and the approximation of tensor product operators. ArXiv e-prints, 2016. arXiv:1612.07680 [math.NA]
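The statement that the approximation numbers of a tensor product operator form the non-increasing rearrangement of the products of the factors' approximation numbers can be explored with a small stdlib-only sketch (the heap-based merge and the example sequence $a_k = k^{-2}$ are illustrative assumptions):

```python
import heapq

def tensor_sequence(a, b, n):
    """First n entries of the non-increasing rearrangement of the tensor
    product (a_i * b_j) of two non-increasing positive sequences.
    Frontier search with a max-heap: (i,j) is dominated by (i-1,j), (i,j-1)."""
    heap = [(-a[0] * b[0], 0, 0)]
    seen = {(0, 0)}
    out = []
    while len(out) < n:
        v, i, j = heapq.heappop(heap)
        out.append(-v)
        for ii, jj in ((i + 1, j), (i, j + 1)):
            if ii < len(a) and jj < len(b) and (ii, jj) not in seen:
                seen.add((ii, jj))
                heapq.heappush(heap, (-a[ii] * b[jj], ii, jj))
    return out

# Hypothetical polynomially decaying approximation numbers a_k = b_k = k^(-2):
a = [k ** -2.0 for k in range(1, 50)]
prod10 = tensor_sequence(a, a, 10)
```

The resulting sequence 1, 1/4, 1/4, 1/9, 1/9, 1/16, 1/16, 1/16, ... decays markedly slower than either factor, which is the preasymptotic effect studied in the talk.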
02.02.2017 13:00 (s.t), We6 6.020, Joachim Rehberg (WIAS Berlin)
On optimal elliptic Sobolev regularity
In recent years it was anticipated by the community, in particular in optimal control, that real-world problems are ‘nonsmooth’, i.e. the resulting elliptic/parabolic equations have to be considered on nonsmooth domains, often exhibit discontinuous coefficients, or/and are to be complemented by mixed boundary conditions. In this generality, many of the classical elliptic regularity results fail. The lecture presents substitutes for these in many of the cases outlined above – and these are sharp.
19.01.2017 13:00 (s.t), We6 6.020, Glenn Byrenheid (Bonn)
Nonlinear approximation with respect to the Faber-Schauder dictionary
We apply the theory of unconditional Faber-Schauder characterizations of non-periodic function spaces provided in [1] and extend it to Sobolev spaces $S^r_{p}W([0,1]^d)$ with dominating mixed smoothness. These characterizations yield intrinsic discretizations of $S^r_{p}W([0,1]^d)$ and Besov spaces $S^r_{p,\theta}B([0,1]^d)$ that allow us to study best $m$-term approximation with respect to the Faber-Schauder dictionary, where the error is measured in $L_q([0,1]^d)$. Compared to linear (sampling) approximation, in the case $p < q$ the exact values of $p$ and $q$ do not affect the asymptotic rate. We always obtain $\sigma_m(S^r_pW([0,1]^d),\mathbb{F})_{L_q([0,1]^d)}\lesssim m^{-r} (\log m)^{(d-1)(r+\frac{1}{2})},$ especially in the case $q=\infty$. For mixed Besov spaces $S^r_{p,\theta}B([0,1]^d)$ with $\theta<1$ we prove, in the smoothness range $\frac{1}{p} < r < \min\{\frac{1}{\theta}-1,2\}$, the purely polynomial rate $\sigma_m(S^r_{p,\theta}B([0,1]^d),\mathbb{F})_{L_q([0,1]^d)}\lesssim m^{-r}.$ Note that all our results are obtained by constructive algorithms using coefficients that are generated by a finite number of function evaluations. This is joint work with Tino Ullrich.
[1] Triebel, H.: Bases in function spaces, sampling, discrepancy, numerical integration. EMS Tracts in Mathematics, 11. European Mathematical Society (EMS), Zürich, 2010. x+296 pp. ISBN: 978-3-03719-085-2
12.01.2017 13:00 (s.t), We6 6.020, Constantin Diez (Adam Opel AG, Rüsselsheim)
Knowledge Extraction from Crash Simulations
In order to understand the crash behavior of cars before expensive tests take place, virtual robustness investigations are becoming more important in the development process. The evaluation is most commonly done by considering only system responses, such as intrusion or dummy criteria, with simple statistical measures such as mean or variance.
In order to extract more information in an even more compressed way, machine learning methods can be used to help engineers understand their simulations much faster and also give hints for improvement. To this end, a dimension reduction scheme was specially designed for crash simulations, which enables cheap storage of results as well as fast comparison. Since this reduction enables the computation of simulation similarity, clustering methods in combination with low-dimensional space embedding can be used to find groups of simulations with similar buckling behavior. Finally, to avoid or reproduce certain deformation patterns, solutions in terms of variable ranges can be computed with decision tree learning, so that an engineer knows in which areas he may choose a design.
15.12.2016 13:00 (s.t), We6 6.020, Christian Pfeiffer (Bonn)
Space-Time discontinuous Galerkin methods for parabolic optimal control problems
This presentation applies the space-time DG method of Neumüller to parabolic optimal control problems. The talk includes theoretical results (error estimates for distributed and boundary control problems) as well as numerical experiments.
08.12.2016 13:00 (s.t), We6 6.020, Veronika Karl (Würzburg)
An Augmented Lagrange Method for elliptic state-constrained optimal control problems
We want to solve a convex optimal control problem with an elliptic state equation and a pointwise state constraint. Because of the low regularity of the Lagrange multipliers, pointwise state constraints cause several difficulties in their numerical treatment and are therefore still challenging. In this talk we apply an adapted Augmented Lagrange Method in order to eliminate the explicitly given state constraints. Using a special update rule, our algorithm yields not only strong convergence of the solutions and weak convergence of the adjoint states, but also boundedness of the multiplier associated with the state constraint and its weak* convergence.
01.12.2016 13:00 (s.t), We6 6.020, Bart Vandereycken (Geneva)
Robust methods for computing extremal points of real pseudospectra
We introduce a criss-cross type algorithm to compute the rightmost point in the real pseudospectrum of a given matrix, that is, the set of eigenvalues of all real, norm-bounded, additive perturbations of this matrix. Existing methods, which are based on eigenvalue optimization over the rank-2 matrix manifold using steepest descent schemes, may converge to only locally rightmost solutions, and their convergence can be slow. To avoid these difficulties, we formally extend the criss-cross algorithm originally developed for unstructured pseudospectra, and, to reduce its computational cost, we exploit a superset characterisation of the real pseudospectrum. Each criss and cross search is performed on carefully selected supersets and involves solving only linear eigenvalue problems and singular value optimization problems. Furthermore, we also propose a subspace projection framework, which combines the criss-cross algorithm with subspace projection techniques to make the algorithm applicable to large-scale matrices. The developed algorithms are proven to be globally convergent, and their robustness and efficiency are demonstrated by a number of numerical examples.
Joint work with Ding Lu, University of Geneva.
17.11.2016 13:00 (s.t), We6 6.020, Lucas Bonifacius (TU München)
Strong stability of linear parabolic time-optimal control problems
The goal of this talk is to derive qualified optimality conditions for a class of linear time-optimal control problems with a general convex target set. To this end, we state sufficient conditions for strong stability that in turn guarantee the existence of Lagrange multipliers for this state-constrained optimal control problem. The theory is based on a characterization of weak invariance of the target set under the controlled equation. An appropriate strengthening of the resulting Hamiltonian condition guarantees strong stability and yields a priori bounds on the size of the multipliers, independent of, e.g., the initial point, the running cost or the a priori unknown optimal solution.
03.11.2016 13:00 (s.t), We6 6.020, Constantin Weiser (Düsseldorf)
Numerical Quadrature in Econometrics – Two Examples
Integration is the key operation in stochastic calculus. It is used whenever a probability for a continuous random variable has to be computed. In practical applications, these integrals are very often multidimensional and no closed-form solution exists. Therefore they have to be approximated numerically. Due to historical developments, Monte Carlo methods have extended their dominance over other numerical methods like numerical quadrature.
We present two highly relevant econometric models where we use numerical quadrature for computing integrals within an optimization process: the probit model and the random coefficient logit model. For the first we propose a highly customized sparse grid approach, for which we find an impressive improvement in terms of efficiency (accuracy & computational cost) relative to state-of-the-art quasi-Monte Carlo methods. The second model suffers from a strong interrelationship between the integrals. Therefore, an adaptive quadrature algorithm leads to high computational costs, and the standard sparse grid approach does not work better than the standard Monte Carlo approach.
In general, we find that quadrature methods need to be customized in order to be competitive with Monte Carlo methods.
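The basic tradeoff can be seen on a toy smooth integrand, where a tensor-product quadrature beats plain Monte Carlo by several digits (a stdlib-only sketch; the integrand, the Simpson rule and the sample sizes are illustrative assumptions, not the customized rules of the talk):

```python
import math
import random

def simpson_weights(n):
    """Nodes and weights of the composite Simpson rule on [0,1], n subintervals (n even)."""
    h = 1.0 / n
    w = [h / 3.0] * (n + 1)
    for i in range(1, n):
        w[i] *= 4.0 if i % 2 else 2.0
    return [i * h for i in range(n + 1)], w

def quad_2d(f, n=16):
    """Tensor-product Simpson rule on [0,1]^2."""
    x, w = simpson_weights(n)
    return sum(wi * wj * f(xi, xj) for xi, wi in zip(x, w) for xj, wj in zip(x, w))

def mc_2d(f, n=10_000, seed=0):
    """Plain Monte Carlo on [0,1]^2."""
    rng = random.Random(seed)
    return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

f = lambda x, y: math.exp(-(x * x + y * y))
exact = (math.sqrt(math.pi) / 2.0 * math.erf(1.0)) ** 2  # closed form via erf
```

For this smooth 2D integrand the Simpson rule is accurate to about six digits with 289 evaluations, while 10^4 Monte Carlo samples give roughly three digits; in high dimensions or for rough integrands the picture can reverse, which is exactly the regime the talk addresses.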
20.10.2016 13:00 (s.t), We6 6.020, Markus Bachmayr (Bonn)
Representations of Gaussian random fields and sparse Hermite approximation of lognormal diffusion problems
We consider the convergence of sparse Hermite expansions of solutions of elliptic PDEs with lognormally distributed diffusion coefficients. An important ingredient is the choice of an expansion of the Gaussian random field in terms of independent scalar Gaussian random variables, which can be interpreted as a choice of stochastic coordinates. Here it turns out that multilevel-type expansions can lead to better results than Karhunen-Loève decompositions. We establish convergence rates and moreover consider the construction of such multilevel expansions for the important case of Matérn covariances. This talk is based on joint works with Albert Cohen, Ronald DeVore and Giovanni Migliorati.
14.07.16 13:00 (s.t), We6 6.020, Jens Henrik Göbbert (FZ Jülich)
Lowering the Barriers to In Situ Visualization
Extracting scientific insight from large simulations is of crucial importance for science. Scientific advances are made only once the data produced by the simulations is processed into a meaningful analysis. But as simulations increase in size for more detail, post-processing is becoming the bottleneck on the way to the desired scientific insight. In situ techniques, like in situ visualization, are known to play an important part in tackling this problem. Sparing some supercomputing time to process, structure, reduce and visualize the data in real time during the simulation offers several benefits. In particular, when data reduction becomes inevitable, it is only during the simulation that all relevant data about the simulated fields and any embedded geometry is readily available at the highest resolution and fidelity for critical decision making. But even though in situ visualization has been successfully deployed over more than two decades, its use still has not gone “mainstream”. Scientists today often have to abstain from meaningful in situ visualizations of their simulation data. This talk will give an introduction to in situ visualization and discuss state-of-the-art implementation, optimization and coupling techniques to integrate the needed functionality into each simulation on the basis of VisIt/Libsim and ParaView/Catalyst.
21.07.16 13:00 (s.t), We6 6.020, Bashar Ibrahim
Mathematical Modeling and Simulation of DNA segregation
Correct DNA segregation is a fundamental process that ensures the faithful inheritance of genomic information for the propagation of cell life. Segregation failures underlie many human health problems, most notably aneuploidy and cancer. DNA segregation is inherently complex and highly cross-linked. It cannot be understood from reflections on the individual components (proteins, complexes, etc.) alone, but should be understood through considerations involving many components at the same time. The current experimental techniques are not sufficient to make quantitative predictions. Hence, the integration of experimental and mathematical approaches is employed for the understanding of biological systems. In this talk, examples will be presented and mathematical challenges will be discussed. The first challenge is how to distinguish between active transport and diffusion in PDE models of cellular processes, given various data such as live-cell imaging. Additionally, how can we build a whole-cell computational model of DNA segregation?
30.06.16 13:00 (s.t), We6 6.020, Friederike Hellwig (Humboldt-Universität zu Berlin)
Low-Order Discontinuous Petrov-Galerkin Finite Element Methods for Linear Elasticity
Since the design of pointwise symmetric stress approximations in H(div, Ω; S) for linear elasticity is cumbersome, especially in 3D, the discontinuous Petrov-Galerkin methodology promises a low-order symmetric stress approximation. In this talk, we use the ultraweak formulation to introduce three new methods with piecewise constant ansatz functions for the displacement and the stress variable, piecewise affine and continuous interface displacements, and piecewise constant interface tractions. The three methods for linear elasticity differ from each other in the choice of norms and the occurrence of some constraint. The minimal test space of piecewise (and, in general, discontinuous) symmetric parts of lowest-order Raviart-Thomas functions and piecewise affine functions allows for a direct proof of the discrete inf-sup condition and a complete a priori and a posteriori error analysis which is robust in the incompressible limit as λ → ∞. Numerical experiments with uniform and adaptive mesh refinements investigate λ-robustness and confirm that the third scheme is locking-free.
23.06.16 13:00 (s.t.), We6 6.020, Reinhold Schneider (TU Berlin)
Hierarchical Johnson-Lindenstrauß transform as a randomized hierarchical singular value decomposition
The hierarchical Tucker tensor format and tensor trains (TT) have been introduced to offer stable and robust approximation at low cost. The hierarchical singular value decomposition (HSVD), introduced independently by Hackbusch & Kühn and by Oseledets and analyzed by Grasedyck, provides a reliable tool for quasi-best tensor product approximation. It can be used in hard/soft thresholding iterations (Bachmayr et al.) providing guaranteed convergence. To reduce the operational cost of the SVDs, we consider the randomized SVD in the sense of Tropp et al., which is based on the Johnson-Lindenstrauß transform. We introduce a randomized HSVD computation by extending these ideas to hierarchical tensors.
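For orientation, the randomized SVD in the sense of Tropp et al. (the Halko-Martinsson-Tropp range finder) can be sketched in a few lines; the function name and the oversampling parameter below are illustrative, not taken from the talk:

```python
import numpy as np

def randomized_svd(A, r, oversample=10, rng=None):
    """Approximate the top-r SVD of A by projecting onto the range of
    A applied to a Gaussian test matrix (randomized range finder)."""
    rng = rng or np.random.default_rng(0)
    Omega = rng.standard_normal((A.shape[1], r + oversample))
    Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for range(A @ Omega)
    U_b, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U_b[:, :r], s[:r], Vt[:r]
```

The talk's contribution lies in replacing the dense Gaussian test matrix by a hierarchically structured one so that the same idea applies tree-level by tree-level inside the HSVD.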
09.06.16 13:00 (s.t), We6 6.020, Peter Oswald (Bonn)
Subspace Correction Methods: Randomization and Acceleration
Continuing on a topic which I already reported on in 2014 in the same seminar, I will survey some more recent developments in the theory of subspace correction methods. The main issues are new randomization strategies (order of subspace access and grouping of subspaces) and acceleration (similar to what we gain if we go from gradient to conjugate gradient methods). I will spend some time on an interesting problem connecting quantitative estimates for matrix paving (the recently proved Anderson Paving Conjecture) with choosing the best equation ordering for the SOR (Gauss-Seidel) iteration.
02.06.16 13:00 (s.t), We6 6.020, Thomas Wick (RICAM)
Goal function evaluations and adaptive finite elements for phase-field fracture problems
Currently, fracture propagation and multiphysics problems are major topics in applied mathematics and engineering. One of the most promising methods is based on a variational setting (proposed by Francfort and Marigo) and, more specifically, on a thermodynamically consistent phase-field model. Here a smoothed indicator function determines the crack location and is characterized through a model regularization parameter. In addition, the model assumes that the fracture can never heal, which is imposed through a temporal constraint, leading to a variational inequality system. In this talk, this basic fracture model is augmented with several hints and discussions of serious challenges in developing state-of-the-art numerical methods. Key aspects are robust and efficient algorithms for imposing the previously mentioned crack irreversibility constraint, treatment of the indefinite Jacobian matrix in terms of a monolithic error-oriented Newton approach, computational analysis of the interplay of model and discretization parameters, and finally coupling to other multiphysics problems such as fluid-filled fractures in porous media, fluid-structure interaction, and steps towards high performance computing for tackling practical field problems. Our focus in these developments will be on goal functional evaluations, i.e., certain quantities of interest. This aim is accomplished with adaptive finite elements and, more specifically, with predictor-corrector mesh adaptivity on the one hand and partition-of-unity dual-weighted residual a posteriori error estimation on the other. Together with high performance computing, mesh adaptivity allows for systematic investigations of the previously mentioned problem settings.
19.05.16 13:00 (s.t), We6 6.020, Paul Sinz
Applications of a Spectral Problem: Maximum Energy Penetration Inside Composite Structures and MS-GFEM
A systematic method for identifying the worst-case load among all boundary loads of a fixed energy is introduced. Here the worst-case load delivers the largest fraction of input energy into a prescribed subdomain of interest. This leads to an eigenvalue problem, for which the largest eigenvalue is the maximum fraction of energy which can penetrate the subdomain. The associated eigenfunctions are the worst-case solutions. The properties of these eigenfunctions motivate a particular Generalized Finite Element Method (GFEM) called the Multiscale Spectral GFEM (MS-GFEM), developed by Babuška and Lipton (2011). The MS-GFEM method and some computations are discussed.
19.05.16 14:00 (s.t), We6 6.020, Gabriel Barrenechea
Stabilised finite element methods in anisotropic quadrilateral meshes
In this talk I will review some recent results on the stabilisation of finite element methods in anisotropic quadrilateral meshes. Our first attempt is the family $Q_{k+1}\times P_{k-1}$. This pair is inf-sup stable, but its stability constant depends on the aspect ratio of the triangulation. We then identify the minimal set of pressure modes responsible for this behaviour, and we propose a method that penalises them in such a way that the resulting scheme is stable independently of the aspect ratio. Next, we move to the optimal, but not inf-sup stable, $Q_1\times P_0$ pair, and the Oseen equation.
28.04.16 13:00 (s.t), We6 6.020, Christopher Kacwin (Bonn)
On orthogonality of the Chebychev-Frolov lattice and applications
As a representative of hyperbolic cross methods, the Frolov cubature formula is currently taking center stage in research on numerical integration, since it has optimal convergence rates for a wide range of function spaces, especially the Besov and Triebel-Lizorkin spaces with dominating mixed smoothness. However, finding the integration nodes for this cubature formula has proven to be quite difficult with increasing dimension. We introduce algorithms for this problem and deal with the special case of Frolov-Chebychev lattices, which surprisingly turn out to be orthogonal. Finally, we will compare the worst-case error of this method to other up-to-date integration schemes. This is joint work with J. Oettershagen and T. Ullrich.
21.03.16 13:00 (s.t), We6 6.020, Ilyass Tabiai (Montreal)
Damage initiation and propagation in fiber reinforced composites: observation and modeling
An overview of Fiber Reinforced Composites’ (FRCs) manufacturing and its consequences on the properties of the cured material will be presented. FRCs are increasingly used for primary structures in the aerospace industry. FRC damage starts at the fiber level and then grows through the heterogeneous material. Quantitative data of the deformation fields in the fiber’s vicinity during damage are essential to calibrate simulation models able to handle simultaneous initiation and growth of several damage mechanisms. After analysis of the results, an in-plane displacement resolution of ±0.1 μm can be reached. Results from such experiments will be presented. A 1D Peridynamics-based Prony series viscoelastic constitutive model has been developed. Such a model can be implemented in the existing Peridigm open-source software for Peridynamics simulations. Full-field measurements have been prepared for a 1D fiber, a 2D film and a 3D dogbone specimen under quasi-static tensile loading. A Peridynamics inverse method based on the quantification of energy available for each material point can be developed to implement and calibrate an energetic damage model.
31.03.16 13:00 (s.t), We6 6.020, Mario Ullrich (JKU Linz, Austria)
On a Monte Carlo method for smooth functions
We present error bounds for a lattice rule for numerical integration equipped with a random shift and a random dilation of the point set. In particular, we show that the error of this method can be bounded by the $L_2$-norm of the Fourier transform away from the origin. This implies, e.g., that the worst-case error for mixed Sobolev spaces $H^s_p$ with $p>2$ and $s>0$ is of the order $n^{-s-1/2}$ without additional log factors. Hence, the order is independent of the dimension. This seems to be the first nontrivial example where randomization can improve the performance of an algorithm by more than $n^{-1/2}$.
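To make the construction concrete, the following sketch implements a rank-1 lattice rule with a uniform random shift (the random dilation from the talk is omitted); the Fibonacci generating vector used below is a standard textbook choice for $d=2$, not one from the talk:

```python
import numpy as np

def shifted_lattice_rule(f, z, n, rng):
    """Approximate the integral of f over [0,1]^d by the rank-1 lattice
    rule with generating vector z, randomized by a uniform random shift."""
    z = np.asarray(z)
    shift = rng.random(len(z))
    nodes = (np.outer(np.arange(n), z) / n + shift) % 1.0  # n x d node matrix
    return float(np.mean(f(nodes)))

# Smooth test integrand with exact integral 1 over the unit square.
f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)
rng = np.random.default_rng(0)
est = shifted_lattice_rule(f, z=(1, 6765), n=10946, rng=rng)  # Fibonacci lattice
```

Averaging such estimates over independent shifts (and dilations) yields the randomized error bounds discussed in the talk.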
17.03.16 13:00 (s.t), We6 6.020, Yurii Kolomoitsev (NAS, Kiev)
Sharp Ulyanov inequalities in the spaces $L_p$, $p>0$
In the talk we will discuss recent progress on a problem posed in the 1960s by Ulyanov: to find sharp inequalities between moduli of smoothness in different Lebesgue spaces. We consider general sharp Ulyanov inequalities for moduli of smoothness, generalized $K$-functionals, and their realizations in the spaces $L_p(\mathbb{R}^d)$. The main feature of our work is that we deal with the Ulyanov inequalities in the case $0 < p < 1$ and in the limit cases $p = 1$ and $p=\infty$.
17.03.16 14:00 (s.t), We6 6.020, Bangti Jin (UCR)
Variational formulation of problems involving fractional order differential operators
In this talk, we consider boundary value problems involving either Caputo or Riemann-Liouville fractional derivatives of order $\alpha\in (1,2)$ on the unit interval (0,1). The variational problem for the Riemann-Liouville case is coercive on the space $H^{\alpha/2}_0$ but the solutions are less regular, whereas that for the Caputo case involves different test and trial spaces. We establish the regularity pickup of the solutions of the variational problem, which enables one to establish convergence rates of the finite element approximations. Finally, numerical results are presented to illustrate the error estimates.
10.03.16 13:00 (s.t), We6 6.020, Sergey Tikhonov (CRM, Barcelona)
Polynomial inequalities with weights
In this talk we will discuss recent developments in the theory of polynomial inequalities with weights. An overview of the problem will be given. The talk does not require any prior knowledge of this topic.
10.03.16 14:00 (s.t), We6 6.020, Fabian Franzelin (Stuttgart)
On probabilistic transformations and optimal sampling rules for emulator-based inverse uncertainty quantification problems
The generalized polynomial chaos expansion (gPCE) is a very attractive technique for uncertainty quantification. However, it assumes that the model inputs are independent random variables. For non-independent random variables the basis polynomials of the gPCE are no longer orthogonal with respect to the probability density, which decreases the efficiency of all PCE-based methods. Moreover, they are not even applicable to data-driven problems, which arise in the context of data assimilation. A common way to overcome these issues, at least theoretically, is isoprobabilistic transformations. This talk will discuss isoprobabilistic transformations such as the Nataf and the Rosenblatt transformation, which decorrelate random variables, for improving the efficiency of emulator-based Bayesian inverse problems. We will present numerical studies that demonstrate the effectiveness of these transformations, and present some alternatives such as Leja sequences in combination with arbitrary polynomial chaos expansion.
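For intuition, the Rosenblatt transformation maps a dependent random vector to independent uniforms via successive conditional CDFs. A minimal sketch for a bivariate standard Gaussian with correlation rho, an illustrative special case rather than the talk's data-driven setting:

```python
import math

def rosenblatt_gauss2d(x1, x2, rho):
    """Map a correlated standard-Gaussian pair (x1, x2) to independent
    uniforms: u1 = Phi(x1), u2 = Phi applied to x2 conditioned on x1,
    i.e. (x2 - rho*x1) / sqrt(1 - rho^2) is again standard Gaussian."""
    phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF
    u1 = phi(x1)
    u2 = phi((x2 - rho * x1) / math.sqrt(1.0 - rho ** 2))
    return u1, u2
```

In the data-driven setting of the talk, the conditional CDFs are not available in closed form and must themselves be estimated, which is where the sparse-grid density estimation of the second part comes in.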
18.02.16 13:00 (s.t), We6 6.020, Winnifried Wollner (TU Darmstadt)
Optimization of partial differential equations subject to pointwise constraints on the gradient of the state
In many processes modeled by partial differential equations, the size of derivatives plays a decisive role. Prototypical examples include problems of damage or plasticity. When optimizing such processes, additional constraints on the size of the derivative need to be included. The analysis of such optimization problems is complicated by the mismatch between the natural topology for the derivative constraint and the weaker natural topology for solutions of the partial differential equation. The talk will be concerned with the existence of solutions to such problems as well as their approximation by numerical methods.
28.01.16 13:00 (s.t), We6 6.020, Ira Neitzel (Bonn)
Finite element error estimates for elliptic Dirichlet boundary control problems with pointwise state constraints
PDE-constrained optimal control problems with pointwise state constraints are known to cause certain theoretical and numerical difficulties. In this talk, we are concerned with an elliptic Dirichlet boundary control problem with low regularity of the state variable. After discussing first-order optimality conditions in KKT form, we will focus on presenting a priori error estimates for the finite element discretization of such linear-quadratic problems with pointwise state constraints in the interior of the domain.
This is joint work with M. Mateos.
21.01.16 13:00 (s.t), We6 6.020, Thomas Heller (Erlangen)
C++ on its way to exascale and beyond - The HPX Parallel Runtime System
The High Performance Computing (HPC) community is facing a technology shift which will result in a performance boost of three orders of magnitude within the next 5 years. This rise in performance will mainly be achieved by increasing the level of concurrency, to the point that a user of these systems needs to accommodate billion-way parallelism. The main problems to solve are programmability, portability, energy saving and resiliency. The author believes that leveraging modern C++ will lead to a possible solution to these problems from a software perspective. This talk will discuss the use of C++ in such a massively parallel environment, using the HPX Parallel Runtime System - a futures-based API - to present a lightweight and efficient mechanism to support massively parallel, multi-way parallelism.
14.01.16 13:00 (s.t), We6 6.020, Johannes Holke (Bonn)
Dynamic load-balancing on adaptive tetrahedral meshes
Load-balancing is essential to ensure scalability of parallel numerical applications. On adaptive meshes that are refined and coarsened frequently during the computation (i.e., in every time step), space-filling curves have been established as a fast and reliable tool for this task. We introduce a space-filling curve for adaptive triangular and tetrahedral meshes that can be computed using bitwise interleaving operations similar to the well-known Morton curve for cubical meshes. Using this space-filling curve, we establish constant-time algorithms to compute the parent, children and face-neighbors of a given mesh element, as well as the next and previous element in the space-filling curve.
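For reference, the bitwise interleaving underlying the Morton curve for cubical meshes (which the tetrahedral curve of the talk generalizes) fits in a few lines; the function name is illustrative:

```python
def morton_index(x, y, bits=16):
    """Interleave the bits of the integer coordinates (x, y):
    bit b of x lands at position 2b, bit b of y at position 2b + 1."""
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (2 * b)
        code |= ((y >> b) & 1) << (2 * b + 1)
    return code
```

Sorting the elements of a 2^bits x 2^bits grid by this index traverses them along the Z-shaped Morton curve, and partitioning the sorted sequence into equal pieces is the load-balancing step.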
17.12.15 13:00 (s.t), We6 6.020, Carsten Carstensen (HU Berlin)
Axioms of Adaptivity: Rate optimality of adaptive algorithms with separate marking
03.12.15 13:00 (s.t), We6 6.020, Dietrich Braess (Bochum)
A posteriori error estimates for discontinuous Galerkin methods by the two-energies principle
The two-energies principle makes it possible to achieve reliable a posteriori error bounds without unknown constants. They are also efficient. The error bounds are p-robust, while residual error estimates are not. We establish these estimates for discontinuous Galerkin methods (DG), which are now very popular. We use the uniform treatment by Arnold et al. with a mixed method to construct equilibrated fluxes. The construction by a local procedure is simpler here than in the case of conforming Lagrange elements. This is a hint to other advantages of DG. We focus on the Poisson equation and comment on the more involved treatment of the biharmonic equation.
26.11.15 13:00 (s.t), We6 6.020, V. N. Temlyakov (U South Carolina, Columbia)
Incoherent systems and covering in finite dimensional Banach spaces
We discuss the construction of coverings of the unit ball of a finite-dimensional Banach space. The well-known technique of comparing volumes gives upper and lower bounds on covering numbers. This technique does not provide a construction of good coverings. Here we study incoherent systems and apply them to the construction of good coverings. We use the following strategy. First, we build a good covering by balls with a radius close to one. Second, we iterate this construction to obtain a good covering for any radius. We mostly concentrate on the first step of this strategy.
19.11.15 13:00 (s.t), We6 6.020, Emil Kieri (Uppsala)
Dynamical low-rank approximation in the presence of small singular values
We consider low-rank approximations to large time-dependent matrices and tensors. When singular values in the low-rank approximation tend to zero, the curvature of the approximation manifold grows, forcing standard time-stepping schemes to take very small time steps. This is a situation one has to expect. Novel time integrators, based on splitting the projection onto the tangent space of the low-rank manifold, were recently proposed. Standard error estimates for splitting methods break down when the singular values are small, since the Lipschitz constant of the projection is then large. On the other hand, the integrators are exact when the given time-dependent matrices or tensors already are of the prescribed rank. We provide an error analysis which unifies these properties. We show that in cases where the exact solution is an epsilon-perturbation of a low-rank matrix or tensor train, the error of the integrator is favourably bounded in terms of epsilon and the step size, independently of the smallness of the singular values. Such a result does not hold for any standard integrator.
12.11.15 13:00 (s.t), We6 6.020, Thomas Kühn (Leipzig)
Approximation in multivariate periodic Gevrey spaces
The classical Gevrey classes, already introduced in 1916, play an important role in analysis, in particular in the context of PDEs. They consist of $C^\infty$-functions on $\mathbb{R}^d$ whose derivatives satisfy certain growth conditions. For periodic functions, these growth conditions can be expressed in terms of Fourier coefficients. Using this approach, I will introduce periodic Gevrey spaces $G^{s,c}(\mathbb{T}^d)$ on the $d$-dimensional torus $\mathbb{T}^d$, where $s\in(0,1)$ is a smoothness parameter and $c>0$ a fine index. All these spaces contain non-analytic functions, so they lie between the analytic functions and functions of finite smoothness. The talk is devoted to the study of the approximation numbers $a_n$ of the embeddings $G^{s,c}(\mathbb{T}^d)\hookrightarrow L_2(\mathbb{T}^d)$, with special emphasis on the dependence of the “hidden” constants on the dimension $d$. In particular, the exact asymptotic rate of $a_n$ as $n\to \infty$ is determined. Not surprisingly, this rate is subexponential and faster than polynomial. We also find preasymptotic two-sided estimates for small $n$; in high dimensions this is the most relevant case for numerical purposes. These results are based on joint work with Martin Petersen (Leipzig).
05.11.15 13:00 (s.t), We6 6.020, Glenn Byrenheid (Bonn)
Non-optimality of rank-1 lattice sampling in spaces of mixed smoothness
We consider the approximation of functions with hybrid mixed smoothness based on rank-1 lattice sampling. We prove upper and lower bounds for the sampling rates with respect to the number of lattice points in various situations, and achieve improvements in the main error rate compared to earlier contributions to the subject. This main rate (without logarithmic factors) is half the optimal main rate coming, for instance, from sparse grid sampling, and turns out to be the best possible among all algorithms taking samples on rank-1 lattices.
29.10.15 13:00 (s.t), We6 6.020, Christian Kuske (Bonn)
Multi-Frequency Analysis (Combining $\mathcal{H}$-Matrices and MACA)
It is hard to find an approximation of the Helmholtz operator for an arbitrary frequency range. $\mathcal{H}$-matrices are feasible for fixed frequencies only. The aim of our approach is a method which is capable of generating an approximation for a frequency range out of a few chosen frequencies. The basic idea originates in the so-called multivariate adaptive cross approximation (MACA). This method is a generalization of ACA to multivariate functions, preserving the logarithmic-linear complexity in both runtime and storage. MACA constructs a low-rank tensor approximation for the entire frequency range by approximating the chosen frequencies with $\mathcal{H}$-matrices. The resulting system can be solved with an adjusted tensor-valued Krylov subspace method and an available preconditioner.
18.06.15 13:00 (s.t), We6 6.020, David Molitor (Bonn)
Randomized dimensionality reduction for binary classification
We want to compare the success rate of a classification algorithm on data in $\mathbb{R}^d$ with the success rate on the same data, which has previously been projected onto $\mathbb{R}^k$, $k\leq d$. To do this, we will use Johnson-Lindenstrauß embeddings and Support Vector Machines. Finally, we want to compare our numerical results with the theoretical bounds proved in the recent paper “Compressive classification and the rare eclipse problem” by Bandeira, Mixon and Recht.
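A Johnson-Lindenstrauß embedding of the kind mentioned can be sketched with a Gaussian random matrix; the function name and parameters are illustrative, and the projected data would then be fed to any off-the-shelf SVM:

```python
import numpy as np

def jl_embed(X, k, rng=None):
    """Project the rows of X from R^d down to R^k with a scaled Gaussian
    matrix; pairwise distances are preserved up to a factor 1 +- eps with
    high probability once k = O(log(n) / eps^2)."""
    rng = rng or np.random.default_rng(0)
    P = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ P

rng = np.random.default_rng(42)
X = rng.standard_normal((100, 1000))   # 100 data points in R^1000
Y = jl_embed(X, k=200, rng=rng)        # the same points in R^200
```

Training a classifier on Y instead of X is the experiment of the talk: the distance preservation suggests that class separation, and hence the success rate, should survive the projection.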
11.06.15 13:00 (s.t), We6 6.020, Dr. Andreas Mueller (Naval Postgraduate School, Monterey, CA, USA)
Parallel performance of the atmospheric model NUMA using element-based Galerkin methods
Even with today’s supercomputers it is still impossible to explicitly represent all scales involved in weather prediction. A large number of different numerical methods have been developed for improving the efficiency of numerical simulations. Among these methods are the element-based Galerkin methods, which allow arbitrarily high order for the spatial discretization and promise excellent scalability on large supercomputers.
The author is one of the developers of the dynamical core NUMA, which is used within the next-generation weather prediction model NEPTUNE of the US Navy. NUMA incorporates many recent developments like element-based Galerkin methods and adaptive mesh refinement. The presentation gives an introduction to the solution of the Euler equations of gas dynamics with continuous (spectral element) and discontinuous Galerkin methods.
Recently, the author has shown strong scaling of NUMA on more than 2.7 million threads using the continuous Galerkin method. Two test cases are considered: 1) a baroclinic instability test case by Jablonowski and Williamson (2006) on the sphere using a cubed sphere mesh and 2) a 3D rising thermal bubble test case in a cube. Details of this scalability study are presented and the sensitivity of the results with respect to the polynomial order of the spatial discretization is discussed.
28.05.15 13:00, We6 6.020, Sebastian Mayer (Bonn)
The effect of sparsity and relaxations thereof in certain function approximation problems
Sparsity has proved to be a highly useful property in modern numerical analysis, for instance in the context of compressive sensing or sparse grid methods. In this talk, I consider what effects sparsity assumptions, and relaxations thereof, have in connection with the reconstruction of multivariate functions from partial information. Here, I am particularly interested in understanding these effects from the viewpoint of information-based complexity. Two model function classes will be the focus of the talk. On the one hand, we study classes of ridge functions $f(x) = g(a \cdot x)$, where the ridge direction $a$ is assumed to be compressible. On the other hand, we consider periodic Sobolev classes, where the Fourier frequency vectors are assumed to be compressible or to lie in a hyperbolic cross.
21.05.15 13:00, We6 6.020, Alfonso Caiazzo (WIAS Berlin)
Multiscale modeling of weakly compressible elastic materials in harmonic regime
This talk focuses on the modeling of elastic materials composed of an incompressible elastic matrix and small compressible gaseous inclusions, under a time-harmonic excitation. In a biomedical context, this model describes the dynamics of a biological tissue (e.g. liver) when wave analysis methods (such as Magnetic Resonance Elastography) are used to estimate tissue properties. Due to the multiscale nature of the problem, direct numerical simulations are prohibitive. First, we describe a homogenized model for the time-harmonic regime which describes the solid-gas mixture as a compressible material in terms of an effective elasticity tensor. Next, we derive and validate numerically analytical approximations for the effective elastic coefficients in terms of macroscopic parameters only. This simplified description is used to set up an inverse problem for the estimation of the tissue porosity, using the mechanical response to external harmonic excitations.
07.05.15 13:00, We6 6.020, Daniel Elfverson (University of Uppsala)
A multilevel subset simulation for estimation of rare events
We want to estimate the probability that a functional is below or above some critical threshold. Standard Monte Carlo is infeasible due to the huge number of samples needed to produce even a single rare event. A remedy is to use variance reduction techniques, e.g., subset simulation. In this work we consider a multilevel subset simulation approach with a varying number of samples on the different levels. The key idea in our new method is to use a posteriori error estimators to guarantee the critical subset property, which may be violated when changing the model resolution from one failure set to the next.
07.05.15 13:30, We6 6.020, Fredrik Hellman (University of Uppsala)
A multilevel Monte Carlo method for failure probabilities
We analyze the multilevel Monte Carlo (MLMC) method for the irregular failure probability functional. The failure probability is the value of the cumulative distribution function of the quantity of interest at a given point. The low regularity makes MLMC perform suboptimally. The present work uses a posteriori error estimates for the quantity of interest to regain optimal performance. The issue of estimating the numerical bias of the estimator (an essential but difficult part of the MLMC algorithm) is also discussed.
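The plain MLMC estimator underlying both of the multilevel talks above is a telescoping sum of level differences; a minimal sketch, where the toy sampler and all names are illustrative rather than taken from the talks:

```python
import numpy as np

def mlmc_estimate(diff_sampler, n_samples):
    """Multilevel Monte Carlo: sum over levels l of the sample mean of
    Y_l = P_l - P_{l-1} (with P_{-1} = 0). diff_sampler(l) must couple
    the two levels by using the same random input for both."""
    return sum(np.mean([diff_sampler(l) for _ in range(n)])
               for l, n in enumerate(n_samples))

# Toy problem: P_l(omega) = omega + 2**-l with omega ~ U(0,1), so the
# exact expectation on the finest level l = 2 is 0.5 + 0.25 = 0.75.
rng = np.random.default_rng(0)

def diff_sampler(l):
    u = rng.random()                                  # shared random input
    coarse = (u + 2.0 ** -(l - 1)) if l > 0 else 0.0  # P_{l-1}, or 0 at l = 0
    return (u + 2.0 ** -l) - coarse

est = mlmc_estimate(diff_sampler, n_samples=[4000, 10, 10])
```

The decreasing sample counts per level reflect the usual MLMC cost balance: the level differences have small variance, so few expensive fine-level samples are needed.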
23.04.15 13:00, We6 6.020, Lutz Gross (University of Queensland)
On the Solution of Large-scale Geophysical Inversion using Finite Element Methods on Parallel Computers
The inversion of geophysical data is the attempt to build a three-dimensional model of the Earth’s subsurface from data collected on or near the surface. The problem can be formulated as an optimization problem minimizing the defect between measurements and corresponding predictions subject to constraints from physical models in the form of partial differential equations. It is common practice to solve the problem using a ‘first discretize, then optimize’ approach. Although very efficient for small to medium-size problems, this approach has strong limitations for large-scale data sets and non-quadratic inversion problems, for instance for electromagnetic data sets and joint inversion problems. In the talk we will present an alternative framework tailored to the finite element method, following the ‘first optimize, then discretize’ approach. As this approach maintains the problem’s sparsity throughout the inversion, the calculations can easily be applied to large spatial grids in parallel across thousands of processor cores with good scalability. We will discuss mathematical and computational aspects of the approach in the context of the inversion of magnetic and gravity anomaly data. As an application scenario we will present the inversion of satellite gravity data over the entire Australian continent.
12.03.15 14:00, We6 6.020, Vladimir N. Temlyakov (South Carolina)
Greedy algorithms in compressed sensing: Lebesgue-type inequalities for the Chebyshev Greedy Algorithm
We study sparse representations and sparse approximations with respect to redundant dictionaries and bases. We address the problem of designing and analyzing greedy methods of approximation. A key question in this regard is: how do we measure the efficiency of a specific algorithm? To answer this question, we prove Lebesgue-type inequalities for the algorithms under consideration. A very important new ingredient of the talk is that we perform our analysis in a Banach space instead of a Hilbert space. It is known that in many numerical problems users are satisfied with a Hilbert space setting and do not consider the more general setting of a Banach space. There are known arguments that justify interest in Banach spaces. In this talk we give one more argument in favor of considering greedy approximation in Banach spaces. We introduce a condition on a dictionary in a Banach space which is a generalization of the RIP concept in a Hilbert space. We analyze the Weak Chebyshev Greedy Algorithm, which is a generalization of the Orthogonal Greedy Algorithm (Orthogonal Matching Pursuit) to Banach spaces. We demonstrate the power of this method on several examples, including the trigonometric system.
12.03.15 15:00, We6 6.020, Dinh Dung (Hanoi)
Hyperbolic cross approximation in infinite dimensions
We give tight upper and lower bounds on the cardinality of the index sets of certain hyperbolic crosses which reflect mixed Sobolev-Korobov-type smoothness and mixed Sobolev-analytic-type smoothness in the infinite-dimensional case, where specific summability properties of the smoothness indices are fulfilled. These estimates are then applied to the linear approximation of functions from the associated spaces in terms of the $\varepsilon$-dimension of their unit balls. Here, the approximation is based on linear information. Such function spaces appear, for example, in the solution of parametric and stochastic PDEs. The obtained upper and lower bounds of the approximation error, as well as of the associated $\varepsilon$-complexities, are completely independent of any dimension. Moreover, the rates are independent of the parameters which define the smoothness properties of the infinite-variate parametric or stochastic part of the solution. These parameters are only contained in the order constants. In this way, linear approximation theory becomes possible in the infinite-dimensional case, and corresponding infinite-dimensional problems become tractable. This is joint work with M. Griebel (Bonn).
02.03.15 14:00, We6 6.020, Wolfgang Hackbusch (MPI Leipzig and CAU Kiel)
Recursive Low-Rank Truncation of Block-Structured Matrices
The best approximation of a matrix by a low-rank matrix can be obtained by the singular value decomposition. For large matrices this approach is too costly. Instead, one may use a block decomposition: approximating the small submatrices by low-rank matrices and agglomerating them into a new, coarser block decomposition yields a recursive method. The required computational work is $O(rnm)$, where $r$ is the desired rank and $n\times m$ is the size of the matrix. The paper discusses new error estimates for $A-B$ and $M-A$, where $A$ is the result of the recursive truncation applied to $M$, while $B$ is the best approximation.
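The SVD-based best approximation that the recursive method accelerates can be stated in a few lines (the Eckart-Young theorem); this sketch is the expensive global baseline, not the block-recursive algorithm of the talk:

```python
import numpy as np

def truncate_rank(M, r):
    """Best rank-r approximation of M in the spectral and Frobenius
    norms, obtained by keeping the r largest singular triplets."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```

The recursive scheme applies such truncations to small sub-blocks and to the agglomerated results, which is what brings the cost down to $O(rnm)$ while the error estimates above quantify the loss against this best approximation.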
22.01.15 13:00, We6 6.020, Dirk Pflüger and Fabian Franzelin (Stuttgart)
Sparse Grids: challenges from applications
In the first part of the presentation we will give an overview of typical challenges for sparse grid methods that are posed by applications. Think of black-box simulation codes on future exascale machines where one has limited access to code and numerics; of surrogate modeling for optimization or uncertainty quantification, where every function evaluation is costly; or of data-driven problems, where the amount of data is limited. These problems impose new requirements on sparse grids.
From data to uncertainty: an integrated sparse grid approach
In the second part we present a novel data-driven approach to propagate uncertainty. We use an integrated sparse grid approach for the uncertainty quantification pipeline from data to the stochastic analysis of the quantity of interest of some physical model. The integrated sparse grid approach gives us two main advantages: First, we can easily link the density estimation to the adaptive sparse grid collocation method to propagate uncertainty. Second, we can efficiently estimate the moments of the sparse grid surrogate without any additional approximation errors. We evaluate the new approach using an expensive subsurface flow simulation.
15.01.15, We6 6.020, Philipp Morgenstern (Bonn)
Analysis-suitable adaptive T-mesh refinement with linear complexity
At present, the main interest in IGA is in finding discrete function spaces that integrate well into CAD applications and, at the same time, can be used for finite element analysis. One of the most promising approaches are T-splines, introduced as a free-form geometric technology in 2003. We present an efficient adaptive refinement procedure that preserves analysis-suitability of the T-mesh, that is, the linear independence of the T-spline blending functions. We prove analysis-suitability of the overlays and boundedness of their cardinalities, nestedness of the generated T-spline spaces, and linear computational complexity of the refinement procedure in terms of the number of marked and generated mesh elements.
08.01.15, We6 6.020, André Uschmajew (Bonn)
On low-rank approximability of solutions to Kronecker-structured operator equations
Let $A$ and $B$ be invertible, and let $C$ have low rank. Then it is trivial that the rank of the solution to the matrix equation $A X B^T = C$ is not larger than the rank of $C$. Formally speaking, this matrix equation admits a low-rank solution. There is some numerical evidence that the solution of frequently occurring linear equations like $A_1 X B_1^T + \dots + A_R X B_R^T = C$, where the number of terms is still considerably smaller than the matrix dimension (which might be infinite in a Hilbert space setting), has good low-rank approximability, that is, fast decaying singular values. Exponential decay, as sometimes observed in practice, seems very challenging to prove. It is, however, rather easy to obtain a non-trivial algebraic decay rate by relating the convergence speed of some polynomial approximation to the Kronecker rank of the corresponding operator polynomial. For an eigenvalue problem $AXB^T = \lambda X$ a similar argumentation is possible, although with considerable restrictions. This simple method of estimating the low-rank approximation error for matrices has important consequences for the low-rank approximability of solutions to Kronecker-structured operator equations in higher-order tensor product spaces, as it provides estimates on the singular value decay of certain matrix unfoldings of the solution tensor, which in turn govern the error in higher-order SVD truncations. The talk is based on joint work with Daniel Kressner.
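The singular value decay can be observed numerically (a NumPy sketch under illustrative assumptions: a Lyapunov-type equation $AX + XA^T = C$ with diagonal $A$ and a rank-one right-hand side, solved via its Kronecker form; none of these choices are taken from the talk):

```python
import numpy as np

n = 40
A = np.diag(np.arange(1.0, n + 1))   # SPD operator with well-separated spectrum
C = np.ones((n, n))                  # rank-1 right-hand side

# Kronecker form of the operator X -> A X + X A^T:
# vec(A X + X A^T) = (I (x) A + A (x) I) vec(X), with column-major vec
I = np.eye(n)
L = np.kron(I, A) + np.kron(A, I)
X = np.linalg.solve(L, C.flatten(order='F')).reshape((n, n), order='F')

s = np.linalg.svd(X, compute_uv=False)
decay = s[12] / s[0]   # tiny: X is numerically of very low rank
```

Here the solution is the Cauchy-type matrix $X_{ij} = 1/(i+j)$, whose singular values are known to decay exponentially, in line with the phenomenon discussed in the talk.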
18.12.14, We6 6.020, Beatrice Vedel (BretagneSud)
Hyperbolic multifractal analysis and application to texture classification
Multifractal analysis has been proposed to describe the complexity of roughness in numerous phenomena, such as turbulent flows or financial data. In the first part of the talk, I will introduce classical multifractal analysis and show how it can be applied to real data. In the second part, I will present our recent work on a hyperbolic multifractal analysis able to describe roughness in the presence of anisotropy.
04.12.14, We6 6.020, Van Kien Nguyen (Jena)
Weyl numbers of embeddings of Besov spaces with dominating mixed smoothness
Let $X$ be a Banach space and $T\colon X\to X$ a compact linear operator. By $(\lambda_n(T))_{n=1}^\infty$ we denote the sequence of all nonzero eigenvalues of $T$. The particular interest in Weyl numbers comes from the fact that they are the smallest known $s$-numbers satisfying the famous Weyl-type inequalities, i.e.,
$\Big( \prod_{k=1}^{2n-1} |\lambda_k (T)|\Big)^{1/(2n-1)} \le \sqrt{2e} \, \Big( \prod_{k=1}^{n} x_k (T)\Big)^{1/n}$ for all $n \in \mathbb{N}$.
The above inequality shows that Weyl numbers can be used to control the distribution of the eigenvalues of $T$. In this talk we shall give sharp estimates of Weyl numbers of embeddings of Besov spaces of dominating mixed smoothness into Lebesgue spaces.
27.11.14, Seyedehsomayeh Hosseini (ETH Zürich)
Subgradient algorithms for locally Lipschitz functions on Riemannian manifolds
This talk is concerned with the numerical solution of optimization problems defined on Riemannian manifolds where the objective function may be nonsmooth. Such problems arise in a variety of applications, e.g., in computer vision, signal processing, motion and structure estimation, or numerical linear algebra. We present a descent direction method for finding extrema of locally Lipschitz functions defined on Riemannian manifolds. Then, we establish the convergence of our algorithm to a stationary point.
20.11.14, Anna Persson (Göteborg)
Multiscale techniques for parabolic equations
We use local orthogonal decomposition techniques to derive a generalized finite element method for a parabolic equation with multiscale coefficients. A review of existing results for elliptic multiscale problems is given and we discuss how to extend these to the parabolic case. We conclude by outlining the proof of optimal order convergence rate of the error in the $L_2$-norm (in space).
13.11.14, Mira Schedensack (HU Berlin)
Comparison results for first-order FEMs
This talk establishes the equivalence of the conforming $P_1$ finite element method, the nonconforming Crouzeix-Raviart finite element method, and several first-order discontinuous Galerkin finite element methods in the sense that the respective energy error norms are equivalent up to generic constants and higher-order data oscillations in a Poisson model problem. The Raviart-Thomas mixed finite element method is better than the previous methods, whereas the conjectured converse relation is proved to be false.
Similar results hold for linear elasticity. The error of two nonconforming first-order methods is bounded by the error of the conforming first-order method up to a multiplicative constant which is robust with respect to the incompressible limit, while the converse estimates deteriorate for the Lamé parameter $\lambda\to\infty$. This reflects the well-known locking behaviour of the conforming $P_1$ finite element method.
06.11.14, Henning Kempka (TU Chemnitz)
Function spaces with variable exponents
Lebesgue spaces with variable exponent $L_{p(\cdot)}$ were introduced by Orlicz as early as 1931. The recent interest in these spaces stems from emerging applications in the theory of non-Newtonian fluids and image processing. Surprisingly, Diening showed in 2004 that the Hardy-Littlewood maximal operator acts boundedly on $L_{p(\cdot)}$, which was the functional analytic breakthrough and started a growing interest in variable exponent Lebesgue spaces. Nowadays, many scientists are interested in function spaces with variable exponents, and a lot of other function spaces (Sobolev, Hölder, Besov, Triebel-Lizorkin, Hardy, Morrey, …) with variable integrability and variable smoothness are under consideration. We focus on the basic definitions of $L_{p(\cdot)}$ and the Besov and Triebel-Lizorkin spaces $B^{s(\cdot)}_{p(\cdot),q(\cdot)}$ and $F^{s(\cdot)}_{p(\cdot),q(\cdot)}$ with all three indices variable, and show some important properties in these scales.
30.10.14, Dietmar Gallistl (Bonn)
Adaptive finite element approximation of eigenvalue clusters
The adaptive finite element approximation of multiple eigenvalues leads to the situation of eigenvalue clusters because the eigenvalues of interest and their multiplicities may not be resolved by the initial mesh. In practice, even small perturbations of the coefficients or the geometry immediately lead to an eigenvalue cluster. The first part of this talk presents an adaptive algorithm for eigenvalue clusters and shows optimal decay rates of the error in terms of degrees of freedom. Numerical tests suggest that the adaptive cluster approximation is superior to the use of an adaptive scheme for each eigenvalue separately, even if the multiplicities are known.
The optimality in terms of the concept of nonlinear approximation classes is concerned with the approximation of invariant subspaces spanned by eigenfunctions of an eigenvalue cluster. In order to obtain eigenvalue error estimates in the case of nonconforming finite element approximations, the second part of the talk presents new error estimates for nonconforming finite elements which relate the error of the eigenvalue approximation to the error of the approximation of the invariant subspace.
23.10.14, Holger Rauhut (RWTH Aachen)
Numerical solution of high-dimensional parametric operator equations via compressive sensing
Compressive sensing enables accurate recovery of approximately sparse vectors from incomplete information. We apply this principle to the numerical solution of parametric operator equations where the parameter domain is high-dimensional. In fact, one can show that the solution of certain parametric operator equations (parametric PDEs) is analytic in the parameters, which can be exploited to show convergence rates for nonlinear (sparse) approximation. Building on this fact, we show that methods from compressive sensing can be used to compute approximations from samples (snapshots) of the parametric operator equation for randomly chosen parameters, which in turn can be computed by standard techniques including Petrov-Galerkin methods. We provide theoretical approximation rates for this scheme. This is joint work with Christoph Schwab.
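The sparse-recovery principle underlying this approach can be illustrated on a toy problem (a NumPy sketch in which orthogonal matching pursuit stands in for the recovery algorithm; the Gaussian matrix, dimensions, and sparsity level are arbitrary choices, not those of the talk):

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily recover an s-sparse x with y = A x."""
    r, support = y.copy(), []
    x_s = np.zeros(0)
    for _ in range(s):
        # pick the column most correlated with the residual
        support.append(int(np.argmax(np.abs(A.T @ r))))
        # least-squares fit on the current support, then update the residual
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(7)
m, N, s = 80, 120, 4
A = rng.standard_normal((m, N)) / np.sqrt(m)   # random measurement matrix
x_true = np.zeros(N)
idx = rng.choice(N, size=s, replace=False)
x_true[idx] = rng.uniform(1.0, 2.0, size=s) * rng.choice([-1.0, 1.0], size=s)
x_rec = omp(A, A @ x_true, s)                  # recover from m << N samples
```

With high probability over the random matrix, the $s$-sparse vector is recovered exactly from far fewer measurements than unknowns.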
14.07.14, Toni Volkmer (TU Chemnitz)
Approximation of high-dimensional multivariate periodic functions based on rank-1 lattice sampling
We consider the approximation of multivariate periodic functions $f$ belonging to subspaces of the Wiener algebra. For the approximation of $f$, a trigonometric polynomial $p$ with frequencies supported on an index set $I\subset\mathbb{Z}^d$ is used, and the Fourier coefficients of $p$ are computed by sampling the function $f$ along a rank-1 lattice. For this computation, a fast algorithm can be used, which is based mainly on a single one-dimensional fast Fourier transform, and the arithmetic complexity of the algorithm is $\mathcal{O}(|I|^2 \log |I| + d\,|I|)$. We show upper bounds for the approximation error and achieve an optimal order of convergence when using suitable frequency index sets $I$. Numerical results are presented up to dimension $d=10$, which confirm the theoretical findings.
This is joint work with Lutz Kämmerer and Daniel Potts.
14.07.14, Lutz Kämmerer (TU Chemnitz)
Sampling multivariate trigonometric polynomials and smooth periodic functions along rank-1 lattices
The approximation of functions $f$ in $d$ spatial dimensions by sparse multivariate trigonometric polynomials supported on known frequency index sets $I\subset\mathbb{Z}^d$ is an important task with a variety of applications. We are interested in computing such an approximation from function values in a fast way. Rank-1 lattices as sampling scheme allow for the fast approximation of the Fourier coefficients $\hat{f}_{\mathbf{k}}$, $\mathbf{k}\in I$, of $f$ using a single one-dimensional fast Fourier transform.
For fixed arbitrary frequency index sets $I$ and suitable rank-1 lattice size $M$, we apply a simple component-by-component construction in order to determine a generating vector $\mathbf{z}\in\mathbb{Z}^d$, such that sampling along the rank-1 lattice $\Lambda(\mathbf{z},M)$ allows for the exact reconstruction of all trigonometric polynomials with frequencies supported on the index set $I$. We discuss necessary and sufficient conditions on the rank-1 lattice size $M$ guaranteeing that our component-by-component strategy succeeds.
Once we have determined a reconstructing rank-1 lattice $\Lambda(\mathbf{z},M)$ for the frequency index set $I$, we can use it as sampling scheme in order to approximate smooth periodic functions $f=\sum_{\mathbf k\in\mathbb{Z}^d}\hat{f}_{\mathbf{k}}\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}$ by trigonometric polynomials $p=\sum_{\mathbf k\in I}\hat{p}_{\mathbf{k}}\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}$ with frequencies supported on $I$. Assuming the exact Fourier partial sum $S_If:=\sum_{\mathbf k\in I}\hat{f}_{\mathbf{k}}\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}$ of $f$ is a good approximation of $f$, the approximation $\tilde{S}_If:=\sum_{\mathbf k\in I}\hat{\tilde{f}}_{\mathbf{k}}\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ}$ of $f$, with $\hat{f}_{\mathbf{k}}\approx\hat{\tilde{f}}_{\mathbf{k}}=\frac{1}{M}\sum_{j=0}^{M-1}f(\tfrac{j}{M}\mathbf{z})\mathrm{e}^{-2\pi\mathrm{i}jM^{-1}\mathbf{k}\cdot\mathbf{z}}$, $\mathbf{k}\in I$, is in some sense (almost) as good as the exact Fourier partial sum $S_If$.
Furthermore, we discuss the relation between the reconstruction of trigonometric polynomials and the approximation properties of the presented sampling method in detail. We show that the reconstruction property of the rank-1 lattice is necessary in order to achieve approximations $\tilde{S}_If$ of $f$ that allow for upper bounds on $\|f-\tilde{S}_If\|$ of the same order as the error bounds on $\|f-S_If\|$ in general.
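The reconstruction step can be sketched in a few lines (an illustrative NumPy sketch; the generating vector, lattice size, and index set below are ad-hoc choices that happen to satisfy the reconstruction property, not examples from the talk):

```python
import numpy as np

d, M = 2, 31
z = np.array([1, 4])                       # generating vector (ad-hoc choice)
I = [(0, 0), (1, 0), (0, 1), (2, -1)]      # frequency index set

# a trigonometric polynomial with frequencies supported on I
coef = dict(zip(I, [1.0, 0.5 - 0.25j, -0.75j, 0.3]))
def p(x):
    return sum(c * np.exp(2j * np.pi * np.dot(k, x)) for k, c in coef.items())

# sample along the rank-1 lattice and apply a single 1D FFT
samples = [p((j * z / M) % 1.0) for j in range(M)]
fhat = np.fft.fft(samples) / M

# each coefficient sits at address k.z mod M
recon = {k: fhat[np.dot(k, z) % M] for k in I}
```

Because the addresses $\mathbf{k}\cdot\mathbf{z} \bmod M$ are pairwise distinct on $I$, all coefficients of the polynomial are reconstructed exactly from the $M$ lattice samples.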
11.07.14, Peter Oswald (Jacobs University, Bremen)
Stable space splittings: New developments
I review the Hilbert space concept of stable space splittings and address its connection with some current research topics such as randomized POCS algorithms for least-squares problems.
11.07.14, Weiqi Zhou (Jacobs University, Bremen)
Convergence rate of the classical Kaczmarz method
The Kaczmarz method (also referred to as “ART”) is an easy-to-implement and convergent iterative method for solving rectangular linear systems. The method carries out alternating orthogonal projections and can be analyzed within the framework of POCS methods, or via its interpretation as a Gauss-Seidel method on an equivalent system.
We review the development and main results of this method and present an upper bound for the convergence rate of its classical version, which, contrary to common perception, turns out to be not much worse than the convergence rate estimates of its recently proposed randomized versions. Numerical examples on the impact of the ordering on the convergence rate will also be presented if time permits.
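The classical sweep is only a few lines (a minimal NumPy sketch of the cyclic Kaczmarz iteration for a consistent system; the random test problem is illustrative):

```python
import numpy as np

def kaczmarz(A, b, sweeps):
    """Cyclic Kaczmarz: project the iterate onto the hyperplane
    {x : a_i . x = b_i} of each equation in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a, beta in zip(A, b):
            x += (beta - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 10))      # consistent overdetermined system
x_true = rng.standard_normal(10)
x = kaczmarz(A, A @ x_true, sweeps=500)
```

For this well-conditioned random system the cyclic sweep converges linearly to the solution; the talk's point is how such rates compare with the randomized variants.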
07.07.14, Jan Kleinert (Fraunhofer ITWM, Kaiserslautern)
Nonsmooth Rigid Body Dynamics – Simulating Granular Material
The field of Nonsmooth Contact Dynamics (NSCD) provides an increasingly popular simulation framework for granular material. In contrast to penalty-based Discrete Element Methods (DEM), this approach is stable for arbitrary time steps and produces visually acceptable results in very short computing time. Yet when it comes to the prediction of draft forces, NSCD relies on powerful numerical solvers.
This talk sets out to discuss NSCD as an alternative to DEM. The physical model behind NSCD is laid out, starting from Hamilton's principle of least action and leading to the discretized equations of motion. To integrate these equations in time, it is necessary to solve a large Cone Complementarity Problem (CCP) in each time step. Two numerical methods to solve the CCP are introduced. The first is the Projected Gauß-Jacobi solver, which is widely used for the simulation of granular material but suffers from slow convergence when applied to large systems. The second technique, an Interior Point Method based on Jordan algebras, exhibits better convergence properties whenever the accuracy requirements are high.
30.06.14, Lei Zhang (Shanghai Jiao Tong University)
Construction and analysis of consistent energy-based atomistic/continuum coupling methods
We discuss the construction and numerical analysis of energy-based atomistic/continuum coupling methods (A/C coupling) for modeling crystalline solids with defects, in particular the issues of consistency (so-called ‘ghost force removal’) and stability of the coupling method. For general multi-body interactions on the 2D triangular lattice, we show that ghost-force-removing (patch test consistent) a/c methods can be constructed for arbitrary interface geometries [1]. Moreover, we prove that all methods within this class are first-order consistent at the atomistic/continuum interface and second-order consistent in the interior of the continuum region. The convergence and stability of the method is analyzed and justified with numerical experiments [2,3]. The development of optimal implementations of consistent methods is discussed [3].
References:
[1] Construction and sharp consistency estimates for atomistic/continuum coupling methods with general interfaces: a 2D model problem, C. Ortner, L. Zhang, SIAM J. Numer. Anal., Vol. 50, No. 6, pp. 2940–2965 (2012).
[2] (In)Stability and Stabilisation of QNL-Type Atomistic-to-Continuum Coupling Methods, C. Ortner, A. Shapeev, L. Zhang, 2013, accepted by SIAM Multiscale Model. Simul.
[3] Implementation of Geometric Reconstruction Based Atomistic-to-Continuum Coupling, C. Ortner, L. Zhang, 2013, accepted by Computer Methods in Applied Mechanics and Engineering.
23.06.14, Volodya Temlyakov (University of South Carolina)
Bilinear and multilinear approximation of functions
We are interested in approximation of a multivariate function $f(x_1, \ldots, x_d)$ by linear combinations of products $u_1 (x_1 ) \cdots u_d(x_d )$ of univariate functions $u_j (x_j )$, $j = 1, \ldots , d$. In the case $d = 2$ it is a classical problem of bilinear approximation. In the case of approximation in the $L_2$ space the bilinear approximation problem is closely related to the problem of singular value decomposition (also called Schmidt expansion) of the corresponding integral operator with the kernel $f(x_1 , x_2 )$. We will discuss some known results on the rate of decay of errors of best bilinear approximation in $L_p$ under different smoothness assumptions on $f$. The problem of multilinear approximation in the case $d \geq 3$ is more difficult and much less studied than the bilinear approximation problem. We will present some known and recent results on best multilinear approximation in $L_p$ under mixed smoothness assumption on $f$.
16.06.14, Benjamin Doerr (Ecole Polytechnique, Paris)
The discrepancy of random points
In this talk, we discuss the star discrepancy of a random point set. We show that there are constants $k, K > 0$ such that for all $N, s \in \mathbb{N}$ with $s \le N$, the point set consisting of $N$ points chosen uniformly at random in the $s$-dimensional unit cube $[0,1]^s$ with probability at least $1 - \mathrm{e}^{-ks}$ admits an axis-parallel box $[0,x] \subseteq [0,1]^s$ containing $K \sqrt{sN}$ points more than expected. Together with the famous upper bound by Heinrich, Novak, Wasilkowski, and Woźniakowski (2001), our result shows that the expected star discrepancy of a random point set is of order $\sqrt{s/N}$. Towards the end of the talk, we shall discuss to what extent these results also hold for spherical cap discrepancies.
26.05.14, Aviv Gibali (Fraunhofer ITWM, Kaiserslautern)
Projection methods – a powerful tool for real-world problems and combinatorial games
Projection methods are iterative algorithms that use projections onto sets while relying on the general principle that when a family of sets is present, then projections onto the given individual sets are easier to perform than projections onto other sets (intersections, image sets under some transformation, etc.) that are derived from the given individual sets.
Their robustness, low computational effort, and their ability to handle huge-size problems make them very useful for many convex real-world problems such as Image Reconstruction (IR) and Intensity Modulated Radiation Therapy (IMRT) treatment planning. Over the past decade, some projection methods have proven very effective also in some nonconvex and discrete settings such as combinatorial games.
In this talk we describe several types of projection methods and illustrate their performances both for convex problems such as IR and IMRT, and also for nonconvex problems such as the 8 queens puzzle and Sudoku.
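The alternating-projection idea can be sketched on a minimal convex feasibility problem (a NumPy sketch; the disk/half-plane pair is an arbitrary toy example, far simpler than IR, IMRT, or Sudoku):

```python
import numpy as np

def proj_disk(x, c, r):
    """Project x onto the disk of radius r centered at c."""
    d = x - c
    nrm = np.linalg.norm(d)
    return x if nrm <= r else c + r * d / nrm

def proj_halfplane(x, a, beta):
    """Project x onto the half-plane {y : a . y <= beta}."""
    v = a @ x - beta
    return x if v <= 0 else x - v * a / (a @ a)

# alternating projections onto the unit disk and {x : x_1 <= 0.5}
center = np.array([0.0, 0.0])
a = np.array([1.0, 0.0])
x = np.array([5.0, 5.0])
for _ in range(50):
    x = proj_halfplane(proj_disk(x, center, 1.0), a, 0.5)
```

Each individual projection is cheap and explicit; the iterate converges to a point in the intersection of the two sets, which is the general principle described above.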
26.05.14, Jan Schwientek (Fraunhofer ITWM, Kaiserslautern)
Chebyshev approximation and semi-infinite programming
In applications one often seeks to approximate a given continuous function $F$ on a nonempty and compact set $Z \subset \mathbb R^n$ by a simpler function $f(\textbf p, \cdot)$ which can be chosen from a parametric family of continuous functions $\{f (\textbf p, \cdot) \mid \textbf p \in P \}$ with some parameter set $P \subset \mathbb R^m$. Depending on the application, different norms may be used to measure the deviation between $F$ and $f(\textbf p, \cdot)$ on $Z$. If one wishes to minimize the maximal deviation, the Chebyshev norm has to be used. This leads to the nonsmooth problem of Chebyshev approximation: $\mathrm{CA}\colon\ \min_{\textbf p \in P} \|F (\cdot) - f (\textbf p, \cdot)\|_{\infty,Z}$. Via the epigraph reformulation, CA can be rewritten as a smooth standard semi-infinite programming (SIP) problem.
In engineering applications a modification of CA, termed reversed Chebyshev approximation, has received interest. There, given an approximating family of functions $\{f(\textbf p, \cdot) \mid \textbf p \in P \}$ and a desired precision $e(p, q)$, the aim is to find parameter vectors $p \in P$ and $q \in Q$ such that the domain $Z(q)$ is as large as possible without exceeding the approximation error $e(p, q)$: $\mathrm{RCA}\colon\ \max_{(p,q) \in P \times Q} \mathrm{Vol}[Z(q)] \ \text{s.t.}\ \|F (\cdot) - f (p, \cdot)\|_{\infty,Z(q)} \leq e(p, q)$. This problem can be reformulated as a semi-infinite programming problem, too, but it results in a general(ized) semi-infinite programming (GSIP) problem. In this talk we give an overview of semi-infinite programming (theoretical foundations, solution methods, and further applications) and show how (reversed) Chebyshev approximation problems can be solved in this framework. Furthermore, we present a real-world application of Chebyshev approximation, namely the uniform coverage problem in spunbond production, and illustrate its solution as a semi-infinite programming problem by a numerical example.
19.05.14, Christiane Helzel (HHU Düsseldorf)
High-order WENO finite volume methods for Cartesian grids
High-order WENO (i.e., weighted essentially non-oscillatory) methods are widely used for the approximation of hyperbolic partial differential equations. A common approach to using WENO methods on multidimensional Cartesian grids consists in applying a one-dimensional WENO method in each direction. This spatial discretization is typically combined with a Runge-Kutta method in time, i.e., during each stage of a Runge-Kutta method one-dimensional WENO schemes are used in a dimension-by-dimension fashion.
However, it is known that finite volume WENO methods based on a dimension-by-dimension approach retain the full order of accuracy (of the one-dimensional method) for linear multidimensional problems, but they are only second-order accurate for the approximation of nonlinear multidimensional problems.
In my talk, I will present a simple modification of finite volume WENO methods which leads to the full spatial order of accuracy by using only one-dimensional polynomial reconstructions in a dimension-by-dimension approach. Furthermore, I will discuss the use of this method on adaptively refined grids as well as on Cartesian grids with embedded boundaries.
This is recent joint work with Pawel Buchmüller and Jürgen Dreher.
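The one-dimensional building block used in each direction can be sketched as follows (a NumPy sketch of the classical fifth-order WENO reconstruction of Jiang and Shu, not of the modification presented in the talk; the quadratic test data are an illustrative choice):

```python
import numpy as np

def weno5_right(v, eps=1e-6):
    """Right interface value v_{i+1/2} from cell averages v = (v_{i-2},...,v_{i+2})."""
    vm2, vm1, v0, vp1, vp2 = v
    # candidate reconstructions on the three 3-cell substencils
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6
    p1 = ( -vm1 + 5*v0  +  2*vp1) / 6
    p2 = (2*v0  + 5*vp1 -    vp2) / 6
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 1/4*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 1/4*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 1/4*(3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights built around the linear weights (1/10, 6/10, 3/10)
    a = np.array([0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2])
    w = a / a.sum()
    return w @ np.array([p0, p1, p2])

# each substencil reproduces quadratics exactly, hence so does any convex
# combination: cell averages of f(x) = x^2 over cells of width h = 0.1
# centered at 0.05,...,0.45 yield the exact interface value f(0.3) = 0.09
h = 0.1
centers = np.arange(5) * h + h/2
avgs = centers**2 + h**2/12          # exact cell averages of x^2
val = weno5_right(avgs)
```

On smooth data the nonlinear weights approach the linear ones and the scheme is fifth-order accurate in 1D; the accuracy loss discussed above arises only when this is applied dimension-by-dimension to nonlinear multidimensional problems.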
12.05.14, Hieu Nguyen (Edinburgh)
Combining domain decomposition and adaptivity: a study for additive Schwarz methods
Domain decomposition refers to the splitting of a PDE into coupled subproblems on smaller subdomains forming a partition of the original domain. It is the most popular approach to solving large-scale problems on parallel supercomputers, as the subproblems can be solved largely independently on different processors. When domain decomposition is combined with adaptivity, there are challenges in maintaining good load balancing and mesh conformity among subdomains. These challenges can be overcome using the Bank-Holst parallel adaptive meshing paradigm. In this talk, we will formulate and analyze new additive Schwarz methods for this paradigm. Our methods take advantage of the local adaptive meshes of the whole domain, which are fine in their associated subdomains but generally much coarser elsewhere. We will show that these methods are optimal, i.e., their rates of convergence do not depend on the mesh sizes or the number of subdomains. Numerical results will be provided to support our theoretical findings.
05.05.14, Aicke Hinrichs (Rostock)
Complexity of Banach space valued integration in the randomized setting
We study the complexity of Banach space valued integration problems in the randomized setting. We explain the interplay between the geometry of the underlying Banach space and the rate of the $n$th minimal errors. The relevant notion of Banach space geometry is the Rademacher type of the space. We also explain the motivation which comes from applications to the complexity of definite and indefinite integration and to the solution of ODEs. This is joint work with Stefan Heinrich.
28.04.14, Mario Ullrich (Jena)
On integration in Sobolev spaces using Frolov’s method
We present the method of Frolov for the integration of (periodic) functions in the Sobolev space $H^s_{\rm mix}$, $s\in\mathbb{N}$, of dominating mixed smoothness. This method is completely constructive and has the optimal order of convergence for functions with support (strictly) contained in the unit cube. A standard modification of the algorithm also leads to the optimal order for nonperiodic functions. Due to the simplicity of the method, we will present the whole proof.
03.04.14, Dinh Zung (Hanoi)
Sampling Recovery and Cubature on Sparse Grids Based on a B-spline Quasi-Interpolation
Let $X_n = \{x_j\}^n_{j=1}$ be a set of $n$ points in the $d$-cube $[0, 1]^d$, and $\Phi_n = \{\varphi_j \}^n_{j=1}$ a family of $n$ functions in the space $L_q([0, 1]^d)$, $0 < q \leq \infty$. We consider the approximate recovery in $L_q([0, 1]^d)$ of functions $f$ on $[0, 1]^d$ from the sampled values $f (x_1), \dots, f(x_n)$ by the linear sampling algorithm $L_n (X_n , \Phi_n , f ) := \sum_{j=1}^n f(x_j) \varphi_j$. The error of sampling recovery is measured in the $L_q([0, 1]^d)$-norm or the energy norm of the isotropic Sobolev space $W_q^{\gamma} ([0, 1]^d)$ for $0 < q \leq \infty$ and $\gamma > 0$. Functions $f$ to be recovered are from the unit ball in Besov-type spaces of anisotropic smoothness, in particular, spaces $B^a_{p,\theta}$ of nonuniform mixed smoothness $a \in \mathbb R^d$, and spaces $B^{\alpha, \beta}_{p,\theta}$ of a “hybrid” of mixed smoothness $\alpha > 0$ and isotropic smoothness $\beta \in \mathbb R$. We construct optimal linear sampling algorithms $L_n(X_n^* , \Phi_n^* , \cdot)$ on special sparse grids $X_n^*$ with a family $\Phi_n^*$ of linear combinations of integer-translated dilations of tensor products of B-splines, and compute the asymptotics of the error of the optimal recovery. As consequences we obtain the asymptotics of optimal cubature formulas for the integration of functions from the unit ball of these Besov-type spaces.
20.01.14, Jens Oettershagen (Bonn)
Integration of bivariate periodic functions with bounded mixed derivatives
We investigate integration in certain reproducing kernel Hilbert spaces of bivariate functions that possess bounded mixed derivatives up to order $r$. We compare worst-case errors of full tensor products of the rectangle rule, sparse grids, Hammersley QMC points, and the Fibonacci lattice rule, with the latter being superior to all other approaches.
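The Fibonacci lattice rule itself takes only a few lines (an illustrative NumPy sketch; the smooth 1-periodic test integrand with exact integral 1 is an arbitrary choice):

```python
import numpy as np

def fib_lattice(k):
    """Fibonacci lattice: N = F_k points x_j = (j/N, {j F_{k-1} / N})."""
    F = [1, 1]
    while len(F) <= k:
        F.append(F[-1] + F[-2])
    N, z = F[k], F[k - 1]
    j = np.arange(N)
    return np.stack([j / N, (j * z / N) % 1.0], axis=1)

# a smooth 1-periodic integrand with exact integral 1 over [0,1]^2
f = lambda x: (1 + 0.5*np.sin(2*np.pi*x[:, 0])) * (1 + 0.25*np.cos(2*np.pi*x[:, 1]))
pts = fib_lattice(20)        # 10946 lattice points
q = f(pts).mean()            # equal-weight lattice rule
```

For this low-degree trigonometric integrand no nonzero frequency lies in the dual lattice, so the rule integrates it exactly up to rounding; the talk's worst-case analysis quantifies the error for general functions of mixed smoothness.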
03.02.14, Glenn Byrenheid (Bonn)
Optimal sampling of $d$-variate periodic functions with respect to energy-norm sparse grids
Motivated by Yserentant's results on the regularity of eigenfunctions of the electronic Schrödinger operator, it seems useful to study the approximation behavior of functions belonging to a space of dominating mixed smoothness. We are interested in finding good approximations of functions with this kind of regularity from finite-dimensional subspaces, where the error is measured in the so-called energy norm $H^1$. We propose subspaces (energy-norm sparse grids) and corresponding linear mappings which yield the optimal asymptotic error rate (approximation numbers) in terms of the number of degrees of freedom for periodic $d$-variate functions with mixed smoothness $s>1$. Moreover, the mentioned linear mappings only use function evaluations of the function $f$ to be approximated. In particular, we obtain sharp bounds for sampling numbers in this context, which coincide with the corresponding estimates of the approximation numbers of the respective embedding. The rate is polynomial of order $s-1$ without the logarithmic perturbation which would occur for classical sparse grid methods. Our proofs rely on classical Bernstein and discrete Hardy-type inequalities to obtain dyadic sampling representations, i.e., equivalent norms in the source space that involve only function values. The results are part of joint work with W. Sickel (Jena), Dinh Dung (Hanoi) and Tino Ullrich (Bonn).
27.01.14, Karoline Köhler (HU Berlin)
Efficient and Reliable Error Control for the Obstacle Problem
This talk presents some results on the error analysis of the obstacle problem, which is the prototypical example of a variational inequality. The first part of the talk deals with a general reliable and efficient error estimate, independent of any specific finite element method. Therein, the efficiency is understood in terms of the error $\|u-v\|$ in $H^1_0(\Omega)$ for the exact solution $u$ and an approximation $v$ in $H^1_0(\Omega)$ in the primal variable, and the error $\|\lambda-\mu\|_*$ for the exact Lagrange multiplier $\lambda$ and an approximation $\mu$ in the dual space $H^{-1}(\Omega)$. This leads to the equivalence $\|u-v\| + \|\lambda-\mu\|_* \approx \text{computable terms}$, possibly up to multiplicative generic constants. This general result is applied to the conforming Courant finite element method (FEM) in the second part of the talk, where the efficiency is analysed beyond the aforementioned equivalence and leads to $\|u-v\| \approx \text{computable terms}$. Numerical experiments which demonstrate and highlight the results for various finite element methods conclude the talk.
13.01.14, Fabian Franzelin (Stuttgart)
Spatially adaptive sparse grid collocation for multivariate peridynamic simulations
Peridynamics is an accepted method in engineering for modeling crack propagation on a macroscopic scale. However, the sensitivity of the method to two important model parameters, the elasticity $\alpha$ and the particle density, has not yet been studied. To make an efficient computation of sensitivity values possible, we used the adaptive sparse grid collocation method (ASGC). Motivated by the works of Silling and Owhadi, we applied it to a simulation of a high-speed projectile impacting a plate, using peridynamics. We introduced the force of magnitude $K$ exerted by the indenter as one additional uncertain parameter. The scenario requires custom-tailored criteria for adaptive refinement to reduce the number of sampling points as far as possible. The results obtained by the stochastic collocation method using spatially adaptive sparse grids are evaluated and compared to previous methods in this context.
17.12.13, Philipp Morgenstern (Bonn)
Local refinement of regular quadrilateral meshes
Adaptive Finite Element Methods (AFEM) are commonly used for partial differential equations that require high resolution in small regions of the domain (due to local singularities such as re-entrant corners) while keeping a coarse mesh on other parts of the domain. This requires an algorithm for adaptive mesh refinement which preserves the shape of the elements as well as the regularity of the mesh. Moreover, theoretical purposes such as optimality of the convergence rates require an algorithm that allows for overlays and has linear memory complexity. We introduce a novel algorithm for the local refinement of regular quadrilateral meshes. This algorithm preserves the shape regularity of the initial mesh, allows the construction of overlays (that is, the coarsest common refinement of two meshes), and comes with linear complexity in terms of memory. The latter means that the number of new quadrilaterals in the fine mesh is essentially bounded by the number of atomic markings in the preceding meshes.
03.12.13, Lev Markhasin (Stuttgart)
Discrepancy and integration in function spaces with dominating mixed smoothness
Optimal lower bounds for discrepancy in Besov spaces with dominating mixed smoothness are known from the work of Triebel. Hinrichs proved upper bounds in the plane. Chen-Skriganov point sets are known to be optimal in the sense of $L_p$-discrepancy, but they also satisfy the optimal upper bounds for discrepancy in Besov spaces with dominating mixed smoothness for certain parameters. Results for Triebel-Lizorkin and Sobolev spaces with dominating mixed smoothness and for the integration error can be concluded.
26.11.13, Sebastian Mayer (Bonn)
Dimensionality reduction in machine learning
In this talk, I survey the usage of (randomized) dimensionality reduction in machine learning, with special emphasis on Johnson-Lindenstrauss embeddings (JLE). First, I outline a number of applications, including approximate nearest neighbor search and classification tasks. There, dimensionality reduction has played a major role in making computations possible in feasible time. With that said, I delve into the theoretical foundations of Johnson-Lindenstrauss embeddings. This will put us in the position to understand why randomized dimensionality reduction has been a success in several situations. But at the same time, it will also help us understand why it has little chance to work in others.
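The Gaussian Johnson-Lindenstrauss map is easy to demonstrate numerically (a NumPy sketch; the dimensions and the distortion bound checked below are illustrative choices, not the theorem's sharp constants):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, d, k = 50, 10000, 1000            # n points in R^d, target dimension k
X = rng.standard_normal((n, d))

# random Gaussian projection, scaled so squared lengths are preserved
# in expectation
P = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ P.T

# pairwise distances survive the reduction up to a small distortion
ratios = np.array([np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
                   for i, j in combinations(range(n), 2)])
```

Concentration of measure keeps all pairwise distance ratios close to 1 with overwhelming probability, which is exactly what makes nearest-neighbor computations in the reduced space meaningful.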