Research Group of Prof. Dr. Ira Neitzel

Research Seminar: Mathematics of Computation

Prof. Carsten Burstedde, Prof. Joscha Gedicke, Prof. Ira Neitzel and Prof. Barbara Verfürth

(previously with Prof. Markus Bachmayr, Prof. Michael Feischl, Dr. Tino Ullrich, and Prof. André Uschmajew)

Upcoming talks

Thursday, 16.05.2024 at 4pm (c.t.) in room 2.035, FHA7, Asha K. Dond (Indian Institute of Science Education and Research Thiruvananthapuram, India)

Quasi-optimality of Adaptive Nonconforming FEM for Distributed Elliptic Optimal Control Problems

In this talk, we will discuss the quasi-optimality of adaptive nonconforming finite element methods for distributed optimal control problems governed by m-harmonic operators for m = 1, 2. A variational discretization approach is employed and the state and adjoint variables are discretized using nonconforming Crouzeix-Raviart and Morley finite elements. The general axiomatic framework that includes stability, reduction, discrete reliability, and quasi-orthogonality establishes the quasi-optimality. Numerical results demonstrate the theoretically predicted orders of convergence.

Past Talks

Wednesday, 17.01.2024 at 4pm (s.t.) in room 2.035, FHA7, Jakub Šístek (Institute of Mathematics of the Czech Academy of Sciences, Prague)

Towards a scalable domain decomposition solver for immersed boundary finite element method

The immersed boundary finite element method (FEM) presents an attractive approach to simulations by avoiding the generation of large body-fitted meshes, which can be tedious and challenging for complex geometries as well as for very fine adaptive meshes distributed over a parallel supercomputer. However, the price to pay is more complicated formulations for the weak enforcement of Dirichlet boundary conditions, poor conditioning of stiffness matrices, and nonstandard numerical integration at the boundary. We develop a multilevel balancing domain decomposition based on constraints (BDDC) method tailored to the solution of the linear systems arising in the context of immersed boundary FEM with parallel adaptive grid refinement using Z-curves (based on the p4est library). One crucial challenge is load balancing the solver, since the elements cut by the domain boundary require much more time for integration, whereas during the solution of the linear system these elements have the same cost as those fully inside the domain. Another challenge is the fragmentation of subdomains, which has two sources: i) the partitioning strategy based on space-filling curves, and ii) the extraction of the elements contributing to the stiffness matrix. We present these concepts, the challenges, our implementation, and numerical results for large-scale Poisson and linear elasticity problems on complex geometries from engineering. This is joint work with Fehmi Cirak, Eky Febrianto, Pavel Kůs, and Josef Musil.

Wednesday, 22.11.2023 16.00 (s.t.), 2.041, Moritz Hauck (Chalmers, Gothenburg)

A Super-Localized Generalized Finite Element Method

In this talk, we present a multiscale method for elliptic PDEs with arbitrarily rough coefficients. The method constructs operator-adapted solution spaces with uniform algebraic approximation rates by combining techniques from numerical homogenization and partition of unity methods. Localized basis functions with the same super-exponential localization properties as Super-Localized Orthogonal Decomposition (SLOD) allow for an efficient implementation of the method. We derive higher-order versions of the method and demonstrate its application to high-contrast channeled coefficients. We also present an extension of the method to heterogeneous Helmholtz problems.

Monday, 10.07.2023 11.00 (s.t.), FHA7 2.041, José Carlos Garay Fernández (Augsburg)

DD-LOD: A Localized Orthogonal Decomposition Method for Elliptic Problems with Rough Coefficients Based on Domain Decomposition Techniques

The solution of multi-scale elliptic problems with non-separable scales and high contrast in the coefficients by standard Finite Element Methods (FEM) is typically prohibitively expensive since it requires the resolution of all characteristic lengths to produce an accurate solution. Numerical homogenization methods such as Localized Orthogonal Decomposition (LOD) methods provide access to feasible and reliable simulations of such multi-scale problems. These methods are based on the idea of a generalized finite element space whose basis functions are obtained by modifying standard coarse FEM basis functions to incorporate relevant microscopic information in a computationally feasible procedure. Using this enhanced basis one can solve a much smaller problem to produce an approximate solution whose accuracy is comparable to the solution obtained by the expensive standard FEM. We present a variant of the LOD method that utilizes domain decomposition techniques and its application in the solution of elliptic partial differential equations with rough coefficients as well as elliptic optimal control problems with rough coefficients with and without control constraints.

Thursday, 23.05.2019 13.00 (s.t.), EA19b 2.035, Alexander Zimmerman (RWTH Aachen)

Automatic implementation and verification of accurate and efficient finite element simulations using UFL and Firedrake

Often when developing simulations of physical systems, application researchers spend much of their time writing and debugging code which implements long-established general methods. Rather than re-inventing the wheel, many researchers learn to use BLAS for matrix multiplication, LAPACK for dense linear algebra, and PETSc for solving sparse linear and nonlinear systems often based on unstructured geometric meshes. These software projects have been tremendously successful. Many other important methods in computational science and engineering have yet to be made so accessible.

This talk explores a ubiquitous technology which is not sufficiently accessible: finite element methods (FEM). Many good FEM libraries (e.g. deal.II, Dune, Moose, FEniCS, Firedrake) exist; but they all fill specific niches, provide specific features, and require substantial user expertise. Underpinning FEniCS and Firedrake is perhaps the most promising finite element software technology, the Unified Form Language (UFL). The key idea of UFL is that, by fully exploiting the mathematical rigor of FEM, a concise and abstract domain specific language can be provided to users, and a library can automatically write and compile the optimized low-level code. I will demonstrate UFL by applying Firedrake to the multi-physics problem of convection-coupled phase-change. Furthermore, I will show how I used UFL to automate the previously daunting task of formally verifying FEM codes via the method of manufactured solutions.

Thursday, 13.12.2018 14.30 (s.t.), EA19b 2.041, Herman Haverkort (Bonn)

Space-filling curve properties for mesh traversals

Space-filling curves based on recursive tessellations can be used to define the order in which to process elements of computational meshes, the order in which to lay out mesh elements in memory, or the order in which to distribute them over different processors: we simply use the order in which the mesh elements appear along the curve. In this talk I will present definitions of properties of space-filling curves that may influence parameters such as the amount of inter-processor communication or the number of page faults incurred in accessing mesh elements. In particular:

  • Stack-and-stream methods can be very efficient with memory access through caches, but they require space-filling curves that have specific symmetry properties (palindromicity). I will explain what we know about the existence of such curves in 2D and 3D, for cubic and tetrahedral tessellations, and what open questions remain.

  • In various ways, memory access and communication efficiency may benefit from curves of which every subsection (i.e. set of points that are consecutive along the curve) is relatively well-shaped, for example having small diameter and surface area compared to its volume. Not all space-filling curves are equally good in this respect. I will explain what we know about what can be achieved (lower and upper bounds) according to various quality measures.
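The ordering idea underlying this talk can be made concrete with the simplest recursive curve in this family, the Morton (Z-order) curve, which also underlies the Z-curve partitioning used by p4est. A minimal Python sketch (my illustration, not taken from the talk):

```python
def morton_index(x: int, y: int, bits: int = 16) -> int:
    """Z-order (Morton) index of cell (x, y): interleave the bits,
    with x on even bit positions and y on odd bit positions."""
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (2 * b)
        idx |= ((y >> b) & 1) << (2 * b + 1)
    return idx

# Traverse the cells of a 4x4 grid along the Z-curve.
cells = [(x, y) for y in range(4) for x in range(4)]
ordered = sorted(cells, key=lambda c: morton_index(*c))
print(ordered[:4])  # → [(0, 0), (1, 0), (0, 1), (1, 1)]
```

Sorting mesh cells by this key reproduces the recursive Z-pattern: each 2x2 block is visited completely before the next one, which is what makes subsections of the curve reasonably compact.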

Thursday, 29.11.2018 15.00 (s.t.), EA19b 2.035/2.041, Dr. Markus Faustmann (TU Wien) (Joint work with Jens Markus Melenk and Dirk Praetorius)

Optimal adaptivity for the fractional Laplacian

We study the operator $(-\Delta)^s$ for $s \in (0,1)$, the so-called (integral) fractional Laplacian, defined as the principal value integral $$(-\Delta)^s u(x) := C(d,s)\,\mathrm{P.V.}\int_{\mathbb{R}^d}\frac{u(x)-u(y)}{|x-y|^{d+2s}}\,dy.$$ In this talk, we present novel inverse estimates for the fractional Laplacian. More precisely, we show that a weighted $L^2$-norm of the fractional Laplacian, where the weight is a power of the local mesh-width, can be bounded by the energy norm. Generalizing the arguments used in the boundary element method [1], the non-local integral operator is split into a localized near-field and a smoother far-field part, which is treated using the so-called Caffarelli-Silvestre extension problem and interior regularity estimates.

Weighted $L^2$-norms appear naturally in the context of a-posteriori error estimation in adaptive methods. With the help of our inverse estimate, we prove optimal convergence rates of an adaptive algorithm steered by a weighted residual error estimator, using the axiomatic approach of [2]. Moreover, we propose a different, reliable weighted error estimator to cover the open case of fractional powers larger than $3/4$.


[1] Aurada, M., Feischl, M., Führer, T., Karkulik, M., Melenk, J.M., and Praetorius, D. Local inverse estimates for non-local boundary integral operators. Mathematics of Computation, 86(308):2651-2686, 2017.

[2] Carstensen, C., Feischl, M., Page, M. and Praetorius, D. Axioms of adaptivity. Computers & Mathematics with Applications, 67(6):1195-1253, 2014.
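For readers who want to experiment with the operator defined above, a small numerical sketch (my illustration, not part of the talk): in the periodic setting the fractional Laplacian admits an equivalent spectral definition via the Fourier multiplier $|k|^{2s}$, which is straightforward to apply with the FFT.

```python
import numpy as np

def fractional_laplacian_periodic(u, s, L=2 * np.pi):
    """Apply (-Delta)^s to samples of a periodic function on [0, L)
    via the Fourier multiplier |k|^(2s) (spectral definition)."""
    n = len(u)
    k = np.fft.fftfreq(n, d=L / (2 * np.pi * n))  # wavenumbers 2*pi*m/L
    return np.fft.ifft(np.abs(k) ** (2 * s) * np.fft.fft(u)).real

# Sanity check: for u(x) = sin(x), the mode |k| = 1 gives
# (-Delta)^s u = 1^(2s) * sin(x) = sin(x) for every s in (0, 1].
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x)
print(np.max(np.abs(fractional_laplacian_periodic(u, s=0.5) - u)))  # ≈ 0
```

This spectral shortcut sidesteps the singular integral entirely, but of course says nothing about the mesh-dependent estimates that are the subject of the talk.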

Thursday, 25.10.2018 13:30 (s.t.), EA19b 2.035, Paula Castro (MODEMAT, Escuela Politécnica Nacional, Quito) (Joint work with Juan Carlos De los Reyes, Escuela Politécnica Nacional, Quito)

A bilevel learning approach for optimal observation placement in variational data assimilation

In this paper we propose a bilevel optimization approach for the placement of observations in variational data assimilation problems. Within the framework of supervised learning, we consider a bilevel problem where the lower level task is the variational reconstruction of the initial condition of the system, and the upper level problem solves the optimal placement with the help of a sparsity inducing norm. Due to the pointwise nature of the observations, an optimality system with regular Borel measures on the right-hand side is obtained as necessary and sufficient optimality condition for the lower level problem. The latter is then considered as a constraint for the upper level instance, yielding an optimization problem constrained by time-dependent PDEs with measures. After proving some extra regularity results, we demonstrate the existence of Lagrange multipliers and derive a necessary optimality system characterizing the optimal solution of the bilevel problem. The numerical solution is also carried out on two levels. The lower level problem is solved using a standard BFGS method, while the upper level one is solved by means of a projected BFGS algorithm based on the estimation of ε-active sets. A penalty function is also considered for enhancing sparsity of the location weights. Finally, some numerical experiments are presented to illustrate the main features of our approach.

Thursday, 13.09.2018 14:00 (s.t.), EA19b 2.035, Johannes Pfefferer (TU München)

hp-Finite Elements for Fractional Diffusion

Thursday, 21.06.2018 13:00 (s.t.), We6 6.020, Kathrin Smetana (Universiteit Twente)

Randomized model order reduction

During the last decades, (numerical) simulations based on partial differential equations have gained considerable importance in engineering applications, life sciences, environmental issues, and finance. However, especially when multiple simulation requests or a real-time simulation response are desired, standard methods such as finite elements are prohibitively expensive. Model order reduction approaches have been developed to tackle such situations. Here, the key concept is to prepare a problem-adapted low-dimensional subspace of the high-dimensional discretization space in a possibly expensive offline stage, in order to realize a fast simulation response by Galerkin projection onto that low-dimensional space in the subsequent online stage.

In this talk we show how randomization, as used for instance in randomized linear algebra or compressed sensing, can be exploited both for constructing reduced order models and for deriving bounds on the approximation error. We also demonstrate these techniques for the generation of local reduced approximation spaces that can be used within domain decomposition or multiscale methods.
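A minimal sketch of the basic randomized building block (my illustration, not the speaker's algorithm): the Gaussian range finder, which samples the range of an operator with random test vectors and orthonormalizes the result to obtain a reduced basis.

```python
import numpy as np

def randomized_range_finder(A, rank, oversample=10, seed=0):
    """Randomized range finder: probe the range of A with a Gaussian
    test matrix, then orthonormalize the sampled columns via QR."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ omega)  # orthonormal basis for the sampled range
    return Q

# Test on a matrix of exact rank 5: the sampled basis captures it fully.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
Q = randomized_range_finder(A, rank=5)
err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
print(err)  # tiny: projection onto span(Q) reproduces A
```

The oversampling parameter trades a slightly larger basis for a much better probability that the sampled subspace captures the dominant range, which is the mechanism behind the probabilistic error bounds mentioned in the abstract.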

Thursday, 26.04.2018 13:00 (s.t.), We6 6.020, Matthias Schlottbom (Universiteit Twente) (Joint work with Herbert Egger, TU Darmstadt)

A perfectly matched layer approach for radiative transfer in highly scattering regimes

We consider the numerical approximation of boundary conditions in radiative transfer problems by a perfectly matched layer approach. The main idea is to extend the computational domain by an absorbing layer and to use an appropriate reflection boundary condition at the boundary of the extended domain. A careful analysis shows that the consistency error introduced by this approach can be made arbitrarily small by increasing the size of the extension domain or the magnitude of the artificial absorption in the surrounding layer. A particular choice of the reflection boundary condition allows us to circumvent the half-space integrals that arise in the variational treatment of the original vacuum boundary conditions and which destroy the sparse coupling observed in numerical approximation schemes based on truncated spherical harmonics expansions. A combination of the perfectly matched layer approach with a mixed variational formulation and a $P_N$-finite element approximation leads to discretization schemes with optimal sparsity pattern and provable quasi-optimal convergence properties. As demonstrated in numerical tests, these methods are accurate and very efficient for radiative transfer in the scattering regime.

Thursday, 07.12.2017 13:00 (s.t.), We6 6.020, Kristin Kirchner (Chalmers)

Variational Methods for Moments of Solutions to Stochastic Differential Equations

We consider parabolic stochastic partial differential equations with multiplicative Wiener noise. For the second moment of the mild solution, a deterministic space-time variational problem is derived. Well-posedness is proven on projective and injective tensor product spaces as trial and test spaces. From these results, a deterministic equation for the covariance function is deduced. These deterministic equations in variational form are used to derive numerical methods for approximating the second moment of solutions to stochastic ordinary and partial differential equations. For the canonical example of a stochastic ordinary differential equation with multiplicative noise, the geometric Brownian motion, we introduce and analyze Petrov-Galerkin discretizations based on tensor product piecewise polynomials. For approximating the second moment of solutions to stochastic partial differential equations, we then propose conforming space-time Petrov-Galerkin discretizations. In both cases, we derive stability bounds in the natural tensor product spaces. Numerical experiments validate the theoretical results.
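For the canonical example named in the abstract, geometric Brownian motion, the second moment is available in closed form, which makes it a convenient sanity check for any numerical scheme. A quick Monte Carlo comparison (my illustration, not the Petrov-Galerkin method of the talk):

```python
import numpy as np

# Geometric Brownian motion dX = mu*X dt + sigma*X dW, X_0 = x0,
# has E[X_t^2] = x0^2 * exp((2*mu + sigma^2) * t) in closed form.
mu, sigma, x0, t = 0.05, 0.2, 1.0, 1.0
exact = x0**2 * np.exp((2 * mu + sigma**2) * t)

rng = np.random.default_rng(0)
w = np.sqrt(t) * rng.standard_normal(200_000)            # W_t ~ N(0, t)
xt = x0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * w)  # exact solution samples
mc = np.mean(xt**2)                                      # Monte Carlo estimate
print(exact, mc)
```

The deterministic variational approach of the talk computes such moments without sampling at all; the Monte Carlo estimate above merely illustrates the quantity being approximated.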

Thursday, 14.12.2017 10:15 (s.t.), We6 6.020, Peter Oswald (Bonn) (Joint work with M. Griebel, Bonn)

Schwarz iterative methods and RKHS theory

Schwarz iterative methods in Hilbert spaces are a recurring theme in my collaboration with M. Griebel. Lately, we became interested in greedy and stochastic versions of this class of iterative methods, where a variational problem in an infinite-dimensional Hilbert space $H$ is decomposed into infinitely many auxiliary subproblems. Convergence of particular instances of the Schwarz iteration (called relaxed greedy and averaged stochastic descent, respectively) can be proved, and algebraic rates of convergence can be obtained for large subsets of $H$ (this is in contrast to finite decompositions, where rates are exponential in the number of iteration steps). When working on the stochastic versions, we realized a certain connection to the theory of reproducing kernel Hilbert spaces and online learning algorithms. It turns out that the iterative solution of variational problems in a Hilbert space via subproblem solves based on space splittings can always be viewed as online learning of a variable Hilbert space valued function using kernel methods. Even though this connection does not help in performing the algorithm, it sheds new light on the convergence proofs. The introduction of RKHS of variable Hilbert space valued functions may have some value for statistical learning as well; this hope is, however, not yet substantiated.

Thursday, 30.11.2017 13:00 (s.t.), We6 6.020, Virginie Ehrlacher (École des Ponts ParisTech) (joint work with Tony Lelièvre and Pierre Monmarché)

Tensorized ABF method for molecular dynamics

The Adaptive Biasing Force (ABF) method is one of the most popular methods in molecular dynamics simulation to avoid problems linked to the metastability of the Langevin processes used to compute macroscopic thermodynamic quantities. The method relies on the use of coarse-grained descriptors of the atomic system, called reaction coordinates, and on the computation of an appropriate function of these reaction coordinates, called the free energy. However, when the number of relevant reaction coordinates is large, the standard ABF method suffers from the well-known curse of dimensionality. The aim of this talk is to present theoretical results on the convergence of a tensor method that circumvents this difficulty and enables the use of ABF methods in cases where the number of reaction coordinates is significant.

Thursday, 23.11.2017 13:00 (s.t.), We6 6.020, Felix Voigtlaender (TU Berlin)

Optimal approximation of piecewise smooth functions using deep ReLU neural networks

Recently, machine learning techniques based on deep neural networks have significantly improved the state of the art in tasks like image classification and speech recognition. Nevertheless, a solid theoretical explanation of this success story is still missing. In this talk, we will present recent results concerning the approximation theoretic properties of deep neural networks which help to explain some of the characteristics of such networks; in particular we will see that deeper networks can approximate certain classification functions much more efficiently than shallow networks. We emphasize though that these approximation theoretic properties do not explain why simple algorithms like stochastic gradient descent work so well in practice, or why deep neural networks tend to generalize so well; we purely focus on the expressive power of such networks. Precisely, as a model class for classifier functions we consider the class of (possibly discontinuous) piecewise smooth functions for which the different “smooth regions” are separated by smooth hypersurfaces. Given such a function, and a desired approximation accuracy, we construct a neural network which achieves the desired approximation accuracy, where the error is measured in $L^2$. We give precise bounds on the required size (in terms of the number of weights) and depth of the network, depending on the approximation accuracy, on the smoothness parameters of the given function, and on the dimension of its domain of definition. Finally, we show that this size of the networks is optimal, and that networks of smaller depth would need significantly more weights than the deep networks that we construct, in order to achieve the desired approximation accuracy.
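To make the notion of expressive power concrete, here is a classical toy construction from the approximation literature (a generic illustration, not the specific construction of the talk): a width-3 ReLU network that represents the piecewise linear hat function exactly. Compositions of such hats are the standard building block for deeper, more efficient piecewise approximations.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    """A one-hidden-layer, width-3 ReLU network realizing the hat
    function on [0, 1]: 2x on [0, 1/2] and 2(1 - x) on [1/2, 1]."""
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

x = np.linspace(0.0, 1.0, 5)
print(hat(x))  # exact hat values at 0, 1/4, 1/2, 3/4, 1
```

Each ReLU unit contributes one kink, so three units suffice for the two kinks of the hat plus the correct slopes; counting kinks per weight is exactly the kind of bookkeeping behind the size bounds discussed in the talk.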

Thursday, 16.11.2017 10:15 (s.t.), We6 6.020, Thomas Hangelbroek

Nonlinear and anisotropic approximation with Gaussians

For approximation with most positive definite kernels – perhaps most obviously for Gaussian radial basis functions – the selection of a scale (by tuning of shape parameters) is challenging, yet exigent in the following sense: for constant scale approximation, high approximation rates are possible, but at the cost of conditioning, while for stationary scaling (fixing the scale to fit a grid) the approximation problem can be made very stable (e.g., with interpolation matrices nearing the identity) but approximation power is lost. For this reason, multi-scale strategies, which use kernels at different dilation levels, have been considered. But this aspect of kernel approximation is relatively new, and its theoretical understanding lags far behind that of traditional multi-scale tools like wavelets and tensor product splines. In this talk we discuss a pair of nonlinear, multi-scale Gaussian approximation problems. The first, considered with Amos Ron, features approximation by Gaussians at multiple, spatially varying scales, and provides correct rates for functions in standard smoothness spaces for nonlinear approximation. The second, recently considered with Amos Ron and Wolfgang Erb, treats $N$-term Gaussian approximation of cartoon class functions with rates comparable to those of curvelets and shearlets.

Thursday, 09.11.2017 13:00 (s.t.), We6 6.020, Bastian Bohn (Universität Bonn)

On the best approximation error of energy sparse grids in $H^1$

In this talk, we present a specific variant of a sparse grid, which turns out to be optimal for the $H^1$-approximation of functions from the Sobolev space $H^2_{\mathrm{mix},0}([0,1]^d)$ of bounded mixed smoothness which vanish on the boundary. We also discuss tractability and relations to other results in similar settings.
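To see why sparse grids matter here, a small counting sketch (my illustration; index-set conventions vary, this one counts interior points of the standard boundaryless construction): the sparse grid keeps only hierarchical subspaces with small total level, cutting the point count from exponential to nearly linear in the fine-grid resolution.

```python
from itertools import product

def full_grid_size(n, d):
    """Interior points of the full tensor grid at level n in d dimensions."""
    return (2**n - 1) ** d

def sparse_grid_size(n, d):
    """Points of the standard sparse grid: hierarchical subspace W_l has
    2^(|l|_1 - d) points; sum over levels with |l|_1 <= n + d - 1."""
    total = 0
    for l in product(range(1, n + 1), repeat=d):
        if sum(l) <= n + d - 1:
            total += 2 ** (sum(l) - d)
    return total

for d in (2, 3):
    print(d, full_grid_size(5, d), sparse_grid_size(5, d))
```

In 2D at level 5 this gives 129 sparse grid points versus 961 full grid points, and the gap widens rapidly with the dimension.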

Thursday, 26.10.2017 13:00 (s.t.), We6 6.020, Albert Cohen (Université Pierre et Marie Curie, Paris)

Data assimilation, parameter estimation and reduced modeling

This talk will address some recent mathematical advances on two problems which occur in parameter estimation in PDEs from observed data. The first one concerns the identifiability of the scalar non-constant diffusion coefficient $a$ in a second order elliptic PDE from the observation of the full solution $u$ for a given right-hand side $f$. Our main results show that for a strictly positive right-hand side, and $a$ belonging to certain smoothness classes, identification is possible with Hölder dependence of $a$ on $u$, and that Lipschitz dependence does not generally hold. The second problem concerns the recovery of the full solution $u$ from a finite number of linear measurements representing the observed data. Motivated by reduced modeling, the a priori additional information about $u$ is in the form of how well it can be approximated by a certain known subspace of given dimension (reduced bases, POD). Algorithms that yield near optimal recovery bounds are proposed. These results were obtained in collaborations with Peter Binev, Andrea Bonito, Wolfgang Dahmen, Ronald DeVore, Guergana Petrova, Gerrit Welper and Przemyslaw Wojtaszczyk.

16.02.2017 13:00 (s.t.), We6 6.020, Florian Kruse (Graz)

Optimal control of semilinear parabolic equations by BV-functions

This talk addresses parabolic optimal control problems in which the objective promotes controls that are piecewise constant in time. That is, optimal controls for these problems have a particularly simple structure. To achieve this we consider controls that are BV-functions in time and use the total variation norm of their derivatives in the objective. This implies that the objective is nonsmooth. We prove existence of optimal controls, derive first- and second-order optimality conditions, provide error estimates, and develop a semismooth Newton method for their solution. Numerical examples are presented as well.

09.02.2017 13:00 (s.t.), We6 6.020, David Krieg (Jena)

On the approximation of tensor product operators

Let $T$ be a bounded linear operator between two Hilbert spaces. We want to approximate $T$ by an algorithm that evaluates fewer than a given number $n$ of linear functionals. The minimal worst case error of such an algorithm is given by the $n$th approximation number of $T$. If $T$ is a tensor product operator, this is the $n$th largest number in the tensor product of the sequences of approximation numbers of its factors. I will talk about the asymptotic and preasymptotic behavior of tensor products of sequences of polynomial decay. The results will be applied to the $L_2$-approximation of mixed order Sobolev functions on the $d$-cube. It turns out that this problem is much harder for nonperiodic functions than for periodic functions if $n \leq 2^d$. Asymptotically, however, there is no difference at all. This investigation is inspired by [1] and can be found in [2].

[1] T. Kühn, W. Sickel, T. Ullrich: Approximation of mixed order Sobolev functions on the d-torus – asymptotics, preasymptotics and d-dependence. Constructive Approximation 42, 353–398, 2015.

[2] D. Krieg: Tensor power sequences and the approximation of tensor product operators. ArXiv e-prints, 2016. arXiv:1612.07680 [math.NA]

Presentation slides (pdf)
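The rearrangement described in the abstract is easy to carry out numerically; a small sketch (my illustration): the leading approximation numbers of a tensor product operator are obtained by sorting all pairwise products of the factors' approximation numbers in nonincreasing order.

```python
import numpy as np

def tensor_approx_numbers(a, b, n):
    """First n approximation numbers of a tensor product operator:
    the nonincreasing rearrangement of the products a_i * b_j."""
    products = np.outer(a, b).ravel()
    return np.sort(products)[::-1][:n]

# Two factors with polynomial decay a_k = b_k = k^(-2).
k = np.arange(1.0, 200.0)
a = k**-2.0
print(tensor_approx_numbers(a, a, 5))  # 1, 1/4, 1/4, 1/9, 1/9, ...
```

Already in this tiny example one sees the preasymptotic effect: multiplicities pile up, so the tensor product sequence decays noticeably more slowly at the start than either factor.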

02.02.2017 13:00 (s.t.), We6 6.020, Joachim Rehberg (WIAS Berlin)

On optimal elliptic Sobolev regularity

In recent years it has been recognized by the community, in particular in optimal control, that real world problems are ‘non-smooth’, i.e. the resulting elliptic/parabolic equations have to be considered on non-smooth domains, often exhibit discontinuous coefficients, or/and are to be complemented by mixed boundary conditions. In this generality, this leads to the failure of many of the classical elliptic regularity results. The lecture presents substitutes for these results in many of the cases outlined above – and these substitutes are sharp.

19.01.2017 13:00 (s.t.), We6 6.020, Glenn Byrenheid (Bonn)

Non-linear approximation with respect to the Faber-Schauder dictionary

We apply the theory of unconditional Faber-Schauder characterizations of non-periodic function spaces provided in [1] and extend it to Sobolev spaces $S^r_p W([0,1]^d)$ with dominating mixed smoothness. These characterizations yield intrinsic discretizations of $S^r_p W([0,1]^d)$ and Besov spaces $S^r_{p,\theta}B([0,1]^d)$ that allow us to study best $m$-term approximation with respect to the Faber-Schauder dictionary, where the error is measured in $L_q([0,1]^d)$. Compared to linear (sampling) approximation, in case $p < q$ the exact values of $p$ and $q$ do not affect the asymptotic rate. We always obtain $$\sigma_m(S^r_p W([0,1]^d),\mathbb{F})_{L_q([0,1]^d)} \lesssim m^{-r} (\log m)^{(d-1)(r+\frac{1}{2})},$$ especially in the case $q=\infty$. For mixed Besov spaces $S^r_{p,\theta}B([0,1]^d)$ with $\theta<1$ we prove, in the smoothness range $\frac{1}{p} < r < \min\{\frac{1}{\theta}-1,\,2\}$, the purely polynomial rate $$\sigma_m(S^r_{p,\theta}B([0,1]^d),\mathbb{F})_{L_q([0,1]^d)} \lesssim m^{-r}.$$ Note that all our results are obtained by constructive algorithms using coefficients that are generated by a finite number of function evaluations. This is joint work with Tino Ullrich.

[1] Triebel, Hans. Bases in function spaces, sampling, discrepancy, numerical integration. EMS Tracts in Mathematics, 11. European Mathematical Society (EMS), Zürich, 2010. x+296 pp. ISBN: 978-3-03719-085-2

12.01.2017 13:00 (s.t.), We6 6.020, Constantin Diez (Adam Opel AG, Rüsselsheim)

Knowledge Extraction from Crash Simulations

In order to understand the crash behavior of cars before expensive tests take place, virtual robustness investigations are becoming more important in the development process. The evaluation is most commonly done solely by monitoring system responses, such as intrusion or dummy criteria, with simple statistical measures such as mean or variance.

In order to extract more information in an even more compressed way, machine learning methods can be used to help engineers understand their simulations much faster and also give hints for improvement. To this end, a dimension reduction scheme was specially designed for crash simulations; it enables cheap storage of results as well as fast comparison. Since this reduction enables the computation of simulation similarity, clustering methods in combination with low-dimensional space embedding can be used to find groups of simulations with similar buckling behavior. Finally, to avoid or reproduce certain deformation patterns, solutions in terms of variable ranges can be computed with decision tree learning, so that an engineer knows in which regions of the design space to choose a design.

15.12.2016 13:00 (s.t.), We6 6.020, Christian Pfeiffer (Bonn)

Space-Time discontinuous Galerkin methods for parabolic optimal control problems

This presentation applies the space-time DG-Method of Neumueller to parabolic optimal control problems. The talk includes theoretical results (error estimates for distributed and boundary control problems) as well as numerical experiments.

08.12.2016 13:00 (s.t.), We6 6.020, Veronika Karl (Würzburg)

An Augmented Lagrange Method for elliptic state constrained optimal control problems

We want to solve a convex optimal control problem with an elliptic state equation and a pointwise state constraint. Because of the low regularity of the Lagrange multipliers, pointwise state constraints cause several difficulties in their numerical treatment and are therefore still challenging. In this talk we apply an adapted augmented Lagrange method in order to eliminate the explicitly given state constraints. Using a special update rule, our algorithm yields not only strong convergence of the solutions and weak convergence of the adjoint states, but also boundedness of the multiplier associated with the state constraint and its weak-* convergence.

01.12.2016 13:00 (s.t.), We6 6.020, Bart Vandereycken (Geneva)

Robust methods for computing extremal points of real pseudospectra

We introduce a criss-cross type algorithm to compute the rightmost point in the real pseudospectrum of a given matrix, that is, the set of eigenvalues of all real, norm-bounded, additive perturbations of this matrix. Existing methods, which are based on eigenvalue optimization over the rank-2 matrix manifold using steepest descent schemes, may converge to only locally rightmost solutions, and their convergence can be slow. To avoid these difficulties, we formally extend the criss-cross algorithm originally developed for unstructured pseudospectra, and to reduce its computational cost we exploit a superset characterisation of the real pseudospectrum. Each criss and cross search is performed on carefully selected supersets and involves solving only linear eigenvalue problems and singular value optimization problems. Furthermore, we also propose a subspace projection framework, which combines the criss-cross algorithm with subspace projection techniques to make the algorithm applicable to large-scale matrices. The developed algorithms are proven to be globally convergent, and their robustness and efficiency are demonstrated by a number of numerical examples.

Joint work with Ding Lu, University of Geneva.

17.11.2016 13:00 (s.t.), We6 6.020, Lucas Bonifacius (TU München)

Strong stability of linear parabolic time-optimal control problems

The goal of this talk is to derive qualified optimality conditions for a class of linear time-optimal control problems with general convex target set. To this end, we state sufficient conditions for strong stability, which in turn guarantees the existence of Lagrange multipliers for this state constrained optimal control problem. The theory is based on a characterization of weak invariance of the target set under the controlled equation. An appropriate strengthening of the resulting Hamiltonian condition guarantees strong stability and yields a priori bounds on the size of the multipliers, independent of, e.g., the initial point, the running cost or the a priori unknown optimal solution.

03.11.2016 13:00 (s.t), We6 6.020, Constantin Weiser (Düsseldorf)

Numerical Quadrature in Econometrics – Two Examples

Integration is the key operation in stochastic calculus. It is used whenever a probability for a continuous random variable has to be computed. In practical applications these integrals are very often multi-dimensional, and no closed-form solution exists; they therefore have to be approximated numerically. Due to historical developments, Monte Carlo methods have come to dominate other numerical methods such as numerical quadrature.

We present two highly relevant econometric models where we use numerical quadrature to compute integrals within an optimization process: the probit model and the random coefficient logit model. For the first we propose a highly customized sparse grid approach, for which we find an impressive improvement in efficiency (accuracy vs. computational cost) relative to state-of-the-art quasi-Monte Carlo methods. The second model suffers from a strong interrelationship between the integrals. Therefore, an adaptive quadrature algorithm leads to high computational costs, and the standard sparse grid approach does not work better than the standard Monte Carlo approach.

In general, we find that quadrature methods need to be customized to be competitive with Monte Carlo methods.
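As an illustration of the general point, here is a minimal sketch (a toy example, not one of the econometric models from the talk) comparing a deterministic quadrature rule with plain Monte Carlo for a one-dimensional Gaussian expectation:

```python
import math, random

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def quadrature(f, n, a=-8.0, b=8.0):
    """Midpoint rule for E[f(X)], X ~ N(0,1), truncated to [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) * phi(a + (i + 0.5) * h) for i in range(n))

def monte_carlo(f, n, rng):
    """Plain Monte Carlo estimate of E[f(X)] from n Gaussian samples."""
    return sum(f(rng.gauss(0.0, 1.0)) for _ in range(n)) / n

f = lambda x: 1.0 / (1.0 + math.exp(-x))   # smooth integrand; E[f(X)] = 0.5 by symmetry
q_err = abs(quadrature(f, 200) - 0.5)
mc_err = abs(monte_carlo(f, 200, random.Random(0)) - 0.5)
print(q_err, mc_err)   # quadrature is far more accurate for the same 200 evaluations
```

For smooth, rapidly decaying integrands the midpoint rule converges super-algebraically, while Monte Carlo stays at the rate n^{-1/2}; the customization discussed in the talk is what carries this advantage to higher dimensions.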

20.10.2016 13:00 (s.t), We6 6.020, Markus Bachmayr (Bonn)

Representations of Gaussian random fields and sparse Hermite approximation of lognormal diffusion problems

We consider the convergence of sparse Hermite expansions of solutions of elliptic PDEs with lognormally distributed diffusion coefficients. An important ingredient is the choice of an expansion of the Gaussian random field in terms of independent scalar Gaussian random variables, which can be interpreted as a choice of stochastic coordinates. Here it turns out that multilevel-type expansions can lead to better results than Karhunen-Loève decompositions. We establish convergence rates and moreover consider the construction of such multilevel expansions for the important case of Matérn covariances. This talk is based on joint works with Albert Cohen, Ronald DeVore and Giovanni Migliorati.

14.07.16 13:00 (s.t), We6 6.020, Jens Henrik Göbbert (FZ Jülich)

Lowering the Barriers to In-Situ Visualization

Extracting scientific insight from large simulations is of crucial importance for science. Scientific advances are made only once the data produced by the simulations is processed into a meaningful analysis. But as simulations increase in size for more detail, post-processing is becoming the bottleneck on the way to the desired scientific insight. In situ techniques, like in situ visualization, are known to play an important part in tackling this problem. Sparing some supercomputing time to process, structure, reduce and visualize the data in real time during the simulation offers several benefits. In particular, when data reduction becomes inevitable, only during the simulation are all relevant data about the simulated fields and any embedded geometry readily available at the highest resolution and fidelity for critical decision making. But even though in situ visualization has been successfully deployed for more than two decades, its use still has not gone “mainstream”. Scientists today often have to abstain from meaningful in situ visualizations of their simulation data. This talk will give an introduction to in situ visualization and discuss state-of-the-art implementation, optimization and coupling techniques to integrate the needed functionality into each simulation on the basis of VisIt/Libsim and ParaView/Catalyst.

21.07.16 13:00 (s.t), We6 6.020, Bashar Ibrahim

Mathematical Modeling and Simulation of DNA segregation

Correct DNA segregation is a fundamental process that ensures the faithful inheritance of genomic information for the propagation of cell life. Segregation failures underlie many human health problems, most notably aneuploidy and cancer. DNA segregation is inherently complex and highly cross-linked. It cannot be understood from reflections on the individual components (proteins, complexes, etc.) alone, but must be understood through considerations involving many components at the same time. Current experimental techniques are not sufficient to make quantitative predictions. Hence, the integration of experimental and mathematical approaches is employed for understanding biological systems. In this talk, examples will be presented and mathematical challenges will be discussed. The first challenge is how to distinguish between active transport and diffusion in PDE models of cellular processes, given various data such as live-cell imaging. The second is how to build a whole-cell computational model of DNA segregation.

30.06.16 13:00 (s.t), We6 6.020, Friederike Hellwig (Humboldt-Universität zu Berlin)

Low-Order Discontinuous Petrov-Galerkin Finite Element Methods for Linear Elasticity

Since the design of pointwise symmetric stress approximations in H(div, Ω; S) for linear elasticity is cumbersome, especially in 3D, the discontinuous Petrov-Galerkin methodology promises a low-order symmetric stress approximation. In this talk, we use the ultraweak formulation to introduce three new methods with piecewise constant ansatz functions for the displacement and the stress variable and piecewise affine and continuous interface displacements and piecewise constant interface tractions. The three methods for linear elasticity differ from each other in the choice of norms and the occurrence of some constraint. The minimal test space of piecewise (and, in general, discontinuous) symmetric parts of lowest-order Raviart-Thomas functions and piecewise affine functions allows for a direct proof of the discrete inf-sup condition and a complete a priori and a posteriori error analysis which is robust in the incompressible limit as λ → ∞. Numerical experiments with uniform and adaptive mesh refinements investigate λ-robustness and confirm that the third scheme is locking-free.

23.06.16 13:00 (s.t.), We6 6.020, Reinhold Schneider (TU Berlin)

Hierarchical Johnson-Lindenstrauß transform as a randomized hierarchical singular value decomposition

The hierarchical Tucker tensor format and tensor trains (TT) have been introduced to offer stable and robust low-rank approximation at low cost. The hierarchical singular value decomposition (HSVD), introduced independently by Hackbusch & Kühn and by Oseledets and analyzed by Grasedyck, provides a reliable tool for quasi-best tensor product approximation. It can be used in hard/soft thresholding iterations (Bachmayr et al.), providing guaranteed convergence. To reduce the operational cost of the SVDs, we consider the randomized SVD in the sense of Tropp et al., which is based on the Johnson-Lindenstrauß transform. We introduce a randomized HSVD computation by extending these ideas to hierarchical tensors.

09.06.16 13:00 (s.t), We6 6.020, Peter Oswald (Bonn)

Subspace Correction Methods: Randomization and Acceleration

Continuing on a topic which I already reported on in 2014 in the same seminar, I will survey some more recent developments in the theory of subspace correction methods. The main issues are new randomization strategies (order of subspace access and grouping of subspaces) and acceleration (similar to what we gain if we go from gradient to conjugate gradient methods). I will spend some time on an interesting problem connecting quantitative estimates for matrix paving (the recently proved Anderson Paving Conjecture) with choosing the best equation ordering for the SOR (Gauss-Seidel) iteration.

02.06.16 13:00 (s.t), We6 6.020, Thomas Wick (RICAM)

Goal function evaluations and adaptive finite elements for phase-field fracture problems

Currently, fracture propagation and multiphysics problems are major topics in applied mathematics and engineering. One of the most promising methods is based on a variational setting (proposed by Francfort and Marigo) and, more specifically, on a thermodynamically consistent phase-field model. Here a smoothed indicator function determines the crack location and is characterized through a model regularization parameter. In addition, the model assumes that the fracture can never heal, which is imposed through a temporal constraint, leading to a variational inequality system. In this talk, this basic fracture model is augmented with several hints and discussions of serious challenges in developing state-of-the-art numerical methods. Key aspects are robust and efficient algorithms for imposing the previously mentioned crack irreversibility constraint, treatment of the indefinite Jacobian matrix in terms of a monolithic error-oriented Newton approach, computational analysis of the interplay of model and discretization parameters, and finally coupling to other multiphysics problems such as fluid-filled fractures in porous media, fluid-structure interaction, and steps towards high-performance computing for tackling practical field problems. Our focus in these developments is on goal functional evaluations, i.e., certain quantities of interest. This aim is accomplished with adaptive finite elements and, more specifically, with predictor-corrector mesh adaptivity on the one hand and partition-of-unity dual-weighted residual a posteriori error estimation on the other. Together with high-performance computing, mesh adaptivity allows for systematic investigations of the previously mentioned problem settings.

19.05.16 13:00 (s.t), We6 6.020, Paul Sinz

Applications of a Spectral Problem: Maximum Energy Penetration Inside Composite Structures and MS-GFEM

A systematic method for identifying the worst case load amongst all boundary loads of a fixed energy is introduced. Here the worst case load delivers the largest fraction of input energy into a prescribed subdomain of interest. This leads to an eigenvalue problem, for which the largest eigenvalue is the maximum fraction of energy which can penetrate the subdomain. The associated eigenfunctions are the worst case solutions. The properties of these eigenfunctions motivate a particular Generalized Finite Element Method (GFEM) called the Multiscale Spectral GFEM (MS-GFEM), developed by Babuška and Lipton (2011). The MS-GFEM method and some computations are discussed.

19.05.16 14:00 (s.t), We6 6.020, Gabriel Barrenechea

Stabilised finite element methods in anisotropic quadrilateral meshes

In this talk I will review some recent results on the stabilisation of finite element methods on anisotropic quadrilateral meshes. Our first attempt is the family $Q_{k+1}\times P_{k-1}$. This pair is inf-sup stable, but its stability constant depends on the aspect ratio of the triangulation. We then identify the minimal set of pressure modes responsible for this behaviour, and we propose a method that penalises them in such a way that the resulting scheme is stable independently of the aspect ratio. Next, we move to the optimal, but not inf-sup stable, $Q_1\times P_0$ pair, and to the Oseen equation.

28.04.16 13:00 (s.t), We6 6.020, Christopher Kacwin (Bonn)

On orthogonality of the Chebychev-Frolov lattice and applications

As a representative of hyperbolic cross methods, the Frolov cubature formula is currently gaining center stage in research on numerical integration, since it has optimal convergence rates for a wide range of function spaces, especially the Besov and Triebel-Lizorkin spaces with dominating mixed smoothness. However, finding the integration nodes for this cubature formula has proven to be quite difficult with increasing dimension. We introduce algorithms for this problem and deal with the special case of Chebychev-Frolov lattices, which surprisingly turn out to be orthogonal. Finally, we compare the worst-case error of this method to other up-to-date integration schemes. This is joint work with J. Oettershagen and T. Ullrich.

21.03.16 13:00 (s.t), We6 6.020, Ilyass Tabiai (Montreal)

Damage initiation and propagation in fiber reinforced composites: observation and modeling

An overview of fiber reinforced composites’ (FRCs) manufacturing and its consequences for the properties of the cured material will be presented. FRCs are increasingly used for primary structures in the aerospace industry. FRC damage starts at the fiber level and then grows through the heterogeneous material. Quantitative data on the deformation fields in the fiber’s vicinity during damage are essential to calibrate simulation models able to handle simultaneous initiation and growth of several damage mechanisms. After analysis of the results, an in-plane displacement resolution of ±0.1 μm can be reached. Results from such experiments will be presented. A 1D Peridynamics-based Prony series viscoelastic constitutive model has been developed. Such a model can be implemented in the existing Peridigm open-source software for Peridynamics simulations. Full-field measurements have been prepared for a 1D fiber, a 2D film and a 3D dogbone specimen under quasi-static tensile loading. A Peridynamics inverse method, based on the quantification of energy available for each material point, can be developed to implement and calibrate an energetic damage model.

Abstract (pdf)

31.03.16 13:00 (s.t), We6 6.020, Mario Ullrich (JKU Linz, Österreich)

On a Monte Carlo method for smooth functions

We present error bounds for a lattice rule for numerical integration equipped with a random shift and a random dilation of the point set. In particular, we show that the error of this method can be bounded by the L_2-norm of the Fourier transform away from the origin. This implies, e.g., that the worst-case error for mixed Sobolev spaces H^s_p with p>2 and s>0 is of the order n^{-s-1/2} without additional log-factors. Hence, the order is independent of the dimension. This seems to be the first non-trivial example where randomization can improve the performance of an algorithm by more than n^{-1/2}.
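The deterministic core of the method above, a rank-1 lattice rule with a random shift, can be sketched in a toy setting (the random dilation is omitted, and the 2D Fibonacci generating vector below is a standard textbook choice, not necessarily the one from the talk):

```python
import math, random

def shifted_lattice_rule(f, n, z, shift):
    """Rank-1 lattice rule: average f over the points {k*z/n + shift mod 1}."""
    total = 0.0
    for k in range(n):
        x = tuple(((k * zj) / n + sj) % 1.0 for zj, sj in zip(z, shift))
        total += f(*x)
    return total / n

rng = random.Random(42)
n, z = 610, (1, 377)                      # Fibonacci lattice in dimension 2
shift = (rng.random(), rng.random())      # random shift, drawn once

# smooth 1-periodic integrand with exact integral 1 over [0,1]^2
f = lambda x, y: (1 + 0.5 * math.cos(2 * math.pi * x)) * (1 + 0.5 * math.cos(2 * math.pi * y))
err = abs(shifted_lattice_rule(f, n, z, shift) - 1.0)
print(err)   # tiny: the rule integrates these low Fourier modes exactly
```

The random shift leaves the set of exactly integrated Fourier modes unchanged; its role in the analysis is to make the estimator unbiased and to enable the probabilistic error bounds discussed in the talk.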

17.03.16 13:00 (s.t), We6 6.020, Yurii Kolomoitsev (NAS, Kiev)

Sharp Ulyanov inequalities in the spaces $L_p$, $p>0$

In the talk we will discuss recent progress on the problem stated in the 1960s by Ulyanov: to find sharp inequalities between moduli of smoothness in different Lebesgue spaces. We consider general sharp Ulyanov inequalities for moduli of smoothness, generalized $K$-functionals, and their realizations in the spaces $L_p(\mathbb{R}^d)$. The main feature of our work is that we deal with the Ulyanov inequalities in the case $0 < p < 1$ and in the limit cases $p = 1$ and $p = \infty$.

17.03.16 14:00 (s.t), We6 6.020, Bangti Jin (UCR)

Variational formulation of problems involving fractional order differential operators

In this talk, we consider boundary value problems involving either Caputo or Riemann-Liouville fractional derivatives of order $\alpha\in(1,2)$ on the unit interval $(0,1)$. The variational problem for the Riemann-Liouville case is coercive on the space $H^{\alpha/2}_0$ but the solutions are less regular, whereas that for the Caputo case involves different test and trial spaces. We establish the regularity pickup of the solutions of the variational problem, which enables one to establish convergence rates of the finite element approximations. Finally, numerical results are presented to illustrate the error estimates.
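For reference, the two fractional derivatives mentioned above are usually defined as follows for $\alpha\in(1,2)$ (standard textbook definitions, not notation specific to this talk):

```latex
{}^{RL}\!D^{\alpha}_{0+}u(x) \;=\; \frac{d^{2}}{dx^{2}}\,
  \frac{1}{\Gamma(2-\alpha)}\int_{0}^{x}(x-s)^{1-\alpha}\,u(s)\,ds,
\qquad
{}^{C}\!D^{\alpha}_{0+}u(x) \;=\;
  \frac{1}{\Gamma(2-\alpha)}\int_{0}^{x}(x-s)^{1-\alpha}\,u''(s)\,ds.
```

The two operators differ only in whether the second derivative is taken outside or inside the fractional integral, which is precisely what leads to the different variational formulations.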

10.03.16 13:00 (s.t), We6 6.020, Sergey Tikhonov (CRM, Barcelona)

Polynomial inequalities with weights

In this talk we will discuss recent developments in the theory of polynomial inequalities with weights. An overview of the problem will be given. The talk does not require any prior knowledge of this topic.

10.03.16 14:00 (s.t), We6 6.020, Fabian Franzelin (Stuttgart)

On probabilistic transformations and optimal sampling rules for emulator based inverse uncertainty quantification problems

The generalized polynomial chaos expansion (gPCE) is a very attractive technique for uncertainty quantification. However, it assumes model inputs that are independent random variables. For non-independent random variables the basis polynomials of the gPCE are no longer orthogonal to the probability density, which decreases the efficiency of all PCE based methods. Moreover, they are not even applicable for data-driven problems, which arise in the context of data assimilation. A common way to overcome these issues, at least theoretically, is isoprobabilistic transformations. This talk will discuss isoprobabilistic transformations such as the Nataf and the Rosenblatt transformation that decorrelate random variables for improving the efficiency of emulator based Bayesian inverse problems. We will present numerical studies that demonstrate the effectiveness of these transformations and present some alternatives such as Leja sequences in combination with arbitrary polynomial chaos expansion.

18.02.16 13:00 (s.t), We6 6.020, Winnifried Wollner (TU Darmstadt)

Optimization of partial differential equations subject to pointwise constraints on the gradient of the state

In many processes modeled by partial differential equations the size of derivatives plays a decisive role. Prototypical examples include problems of damage or plasticity. When optimizing such processes, additional constraints on the size of the derivative need to be included. The analysis of such optimization problems is complicated by the mismatch between the natural topology for the derivative constraint and the weaker natural topology for solutions of the partial differential equation. The talk will be concerned with the existence of solutions to such problems as well as their approximation by numerical methods.

28.01.16 13:00 (s.t), We6 6.020, Ira Neitzel (Bonn)

Finite element error estimates for elliptic Dirichlet-boundary control problems with pointwise state constraints

PDE constrained optimal control problems with pointwise state constraints are known to cause certain theoretical and numerical difficulties. In this talk, we are concerned with an elliptic Dirichlet boundary control problem with low regularity of the state variable. After discussing first order optimality conditions in KKT-form, we will focus on presenting a priori error estimates for the finite element discretization of such linear-quadratic problems with pointwise state constraints in the interior of the domain.

This is joint work with M. Mateos.

21.01.16 13:00 (s.t), We6 6.020, Thomas Heller (Erlangen)

C++ on its way to exascale and beyond - The HPX Parallel Runtime System

The High Performance Computing (HPC) community is facing a technology shift which will result in a performance boost of three orders of magnitude within the next 5 years. This rise in performance will mainly be achieved by increasing the level of concurrency, to the point that a user of those systems needs to accommodate billion-way parallelism. The main problems to solve are programmability, portability, energy saving and resiliency. The author believes that leveraging modern C++ leads to a possible solution to those problems from a software perspective. This talk will discuss the use of C++ in such a massively parallel environment, using the HPX parallel runtime system, a future-based API, to present a lightweight and efficient mechanism to support massive, multi-way parallelism.

14.01.16 13:00 (s.t), We6 6.020, Johannes Holke (Bonn)

Dynamic load-balancing on adaptive tetrahedral meshes

Load-balancing is essential to ensure scalability of parallel numerical applications. On adaptive meshes that are refined and coarsened frequently during the computation (i.e. in every time step) space-filling curves have been established as a fast and reliable tool for this task. We introduce a space-filling curve for adaptive triangular and tetrahedral meshes that can be computed using bitwise interleaving operations similar to the well-known Morton curve for cubical meshes. Using this space-filling curve we establish constant time algorithms to compute the parent, children and face-neighbors of a given mesh element, as well as the next and previous element in the space-filling curve.
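For comparison, the bitwise interleaving underlying the classical Morton curve for cubical meshes, which the new tetrahedral curve generalizes, can be sketched as follows (a minimal illustration, not the implementation from the talk):

```python
def interleave(x, y, z, level):
    """Morton index of a cube at a given refinement level: interleave the
    bits of the integer coordinates x, y, z, most significant bit first."""
    m = 0
    for b in reversed(range(level)):
        m = (m << 3) | (((x >> b) & 1) << 2) | (((y >> b) & 1) << 1) | ((z >> b) & 1)
    return m

def parent(morton):
    """Constant-time parent: drop the last three interleaved bits."""
    return morton >> 3

def children(morton):
    """Constant-time children: append every possible 3-bit suffix."""
    return [(morton << 3) | c for c in range(8)]

m = interleave(3, 5, 6, 3)        # coordinates at level 3 (values in 0..7)
print(m, parent(m), children(m))
```

Sorting elements by this index linearizes the adaptive mesh, so a parallel partition is just a segmentation of the curve; the talk extends the same constant-time parent/child/neighbor arithmetic to triangles and tetrahedra.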

17.12.15 13:00 (s.t), We6 6.020, Carsten Carstensen (HU Berlin)

Axioms of Adaptivity: Rate optimality of adaptive algorithms with separate marking

Abstract (pdf)

03.12.15 13:00 (s.t), We6 6.020, Dietrich Braess (Bochum)

A posteriori error estimates for discontinuous Galerkin methods by the two-energies principle

The two-energies principle allows one to achieve reliable a posteriori error bounds without unknown constants. They are also efficient, and the error bounds are p-robust while residual error estimates are not. We establish these estimates for discontinuous Galerkin (DG) methods, which are now very popular. We use the uniform treatment by Arnold et al. with a mixed method to construct equilibrated fluxes. The construction by a local procedure is simpler here than in the case of conforming Lagrange elements, which hints at other advantages of DG. We focus on the Poisson equation and comment on the more involved treatment of the biharmonic equation.

26.11.15 13:00 (s.t), We6 6.020, V. N. Temlyakov (U South Carolina, Columbia)

Incoherent systems and covering in finite dimensional Banach spaces

We discuss construction of coverings of the unit ball of a finite dimensional Banach space. The well known technique of comparing volumes gives upper and lower bounds on covering numbers. This technique does not provide a construction of good coverings. Here we study incoherent systems and apply them for construction of good coverings. We use the following strategy. First, we build a good covering by balls with a radius close to one. Second, we iterate this construction to obtain a good covering for any radius. We mostly concentrate on the first step of this strategy.

19.11.15 13:00 (s.t), We6 6.020, Emil Kieri (Uppsala)

Dynamical low-rank approximation in the presence of small singular values

We consider low-rank approximations to large time-dependent matrices and tensors. When singular values in the low-rank approximation tend to zero the curvature of the approximation manifold grows, forcing standard time-stepping schemes to take very small time steps. This is a situation one has to expect. Novel time integrators, based on splitting the projection onto the tangent space of the low-rank manifold, were recently proposed. Standard error estimates for splitting methods break down when the singular values are small, since the Lipschitz constant of the projection then is large. On the other hand, the integrators are exact when given time-dependent matrices or tensors already are of the prescribed rank. We provide an error analysis which unifies these properties. We show that in cases where the exact solution is an epsilon-perturbation of a low-rank matrix or tensor train, the error of the integrator is favourably bounded in terms of epsilon and the stepsize, independently of the smallness of the singular values. Such a result does not hold for any standard integrator.

12.11.15 13:00 (s.t), We6 6.020, Thomas Kühn (Leipzig)

Approximation in multivariate periodic Gevrey spaces

The classical Gevrey classes, already introduced in 1916, play an important role in analysis, in particular in the context of PDEs. They consist of $C^\infty$-functions on $\mathbb{R}^d$ whose derivatives satisfy certain growth conditions. For periodic functions, these growth conditions can be expressed in terms of Fourier coefficients. Using this approach, I will introduce periodic Gevrey spaces $G^{s,c}(\mathbb{T}^d)$ on the $d$-dimensional torus $\mathbb{T}^d$, where $s\in(0,1)$ is a smoothness parameter and $c>0$ a fine index. All these spaces contain non-analytic functions, so they are between analytic functions and functions of finite smoothness. The talk is devoted to the study of approximation numbers $a_n$ of the embeddings $G^{s,c}(\mathbb{T}^d)\hookrightarrow L_2(\mathbb{T}^d)$, with special emphasis on the dependence of the “hidden” constants on the dimension $d$. In particular, the exact asymptotic rate of $a_n$ as $n\to\infty$ is determined. Not surprisingly, this rate is sub-exponential and faster than polynomial. We also find preasymptotic two-sided estimates for small $n$; in high dimensions this is the most relevant case for numerical purposes. These results are based on joint work with Martin Petersen (Leipzig).

05.11.15 13:00 (s.t), We6 6.020, Glenn Byrenheid (Bonn)

Non-optimality of rank-1 lattice sampling in spaces of mixed smoothness

We consider the approximation of functions with hybrid mixed smoothness based on rank-1 lattice sampling. We prove upper and lower bounds for the sampling rates with respect to the number of lattice points in various situations and achieve improvements in the main error rate compared to earlier contributions to the subject. This main rate (without logarithmic factors) is half the optimal main rate coming for instance from sparse grid sampling and turns out to be best possible among all algorithms taking samples on rank-1 lattices.

29.10.15 13:00 (s.t), We6 6.020, Christian Kuske (Bonn)

Multi-Frequency Analysis (Combining $\mathcal{H}$-Matrices and MACA)

It is hard to find an approximation of the Helmholtz operator for an arbitrary frequency range. $\mathcal{H}$-matrices are feasible for fixed frequencies only. The aim of our approach is a method which is capable of generating an approximation for a frequency range out of few chosen frequencies. The basic idea originates in the so-called multivariate adaptive cross approximation (MACA). This method is a generalization of ACA to multivariate functions, preserving the logarithmic-linear complexity in both runtime and storage. MACA constructs a low-rank tensor approximation for the entire frequency range by approximating the chosen frequencies with $\mathcal{H}$-matrices. The resulting system can be solved with an adjusted tensor-valued Krylov subspace method and an available preconditioner.
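A minimal sketch of the underlying ACA idea on a small dense matrix (full pivoting for clarity; real $\mathcal{H}$-matrix implementations use partial pivoting on matrix blocks, and this is not the MACA code from the talk):

```python
def aca(A, tol=1e-10, max_rank=None):
    """Adaptive cross approximation with full pivoting: build a low-rank
    approximation A ~ sum_k u_k v_k^T from crosses of rows and columns."""
    n, m = len(A), len(A[0])
    R = [row[:] for row in A]                  # residual matrix
    us, vs = [], []
    for _ in range(max_rank or min(n, m)):
        # pivot = entry of largest magnitude in the current residual
        i, j = max(((i, j) for i in range(n) for j in range(m)),
                   key=lambda ij: abs(R[ij[0]][ij[1]]))
        piv = R[i][j]
        if abs(piv) < tol:
            break                              # residual negligible: done
        u = [R[r][j] for r in range(n)]        # pivot column
        v = [R[i][c] / piv for c in range(m)]  # scaled pivot row
        us.append(u)
        vs.append(v)
        for r in range(n):                     # subtract the rank-1 cross
            for c in range(m):
                R[r][c] -= u[r] * v[c]
    return us, vs

# rank-2 test matrix: A[i][j] = i*j + 2**i
A = [[i * j + 2.0**i for j in range(5)] for i in range(4)]
us, vs = aca(A)
print(len(us))   # recovered rank
```

Each step removes one rank-1 cross built from a single row and column, which is why the cost stays proportional to the rank rather than to the full matrix size.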

18.06.15 13:00 (s.t), We6 6.020, David Molitor (Bonn)

Randomized dimensionality reduction for binary classification

We want to compare the success rate of a classification algorithm on data in $\mathbb{R}^d$ with the success rate on the same data after projection onto $\mathbb{R}^k$, $k\leq d$. To do this, we will use Johnson-Lindenstrauß embeddings and support vector machines. Finally, we want to compare our numerical results with theoretical bounds proved in the recent paper “Compressive classification and the rare eclipse problem” by Bandeira, Mixon and Recht.
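The first step of such an experiment, a Johnson-Lindenstrauß random projection, can be sketched as follows (a minimal illustration with a Gaussian matrix; the SVM part and the concrete data are omitted):

```python
import math, random

def jl_project(points, k, rng):
    """Project d-dimensional points to k dimensions with a random Gaussian
    matrix scaled by 1/sqrt(k), a Johnson-Lindenstrauss embedding."""
    d = len(points[0])
    G = [[rng.gauss(0.0, 1.0) / math.sqrt(k) for _ in range(d)] for _ in range(k)]
    return [[sum(g[t] * p[t] for t in range(d)) for g in G] for p in points]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

rng = random.Random(1)
d, k = 200, 50
p = [rng.gauss(0, 1) for _ in range(d)]
q = [rng.gauss(0, 1) for _ in range(d)]
P, Q = jl_project([p, q], k, rng)
ratio = dist(P, Q) / dist(p, q)
print(ratio)   # close to 1: pairwise distances are approximately preserved
```

Since margins of a linear classifier are controlled by pairwise distances, such distance preservation is what makes classification after projection plausible, which is exactly what the cited bounds quantify.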

11.06.15 13:00 (s.t), We6 6.020, Dr. Andreas Mueller (Naval Postgraduate School, Monterey, CA, USA)

Parallel performance of the atmospheric model NUMA using element-based Galerkin methods

Even with today’s supercomputers it is still impossible to explicitly represent all scales involved in weather prediction. A large number of different numerical methods have been developed for improving the efficiency of numerical simulations. Among those methods are the element-based Galerkin methods which allow arbitrary high order for the spatial discretization and promise excellent scalability on large supercomputers.

The author is one of the developers of the dynamical core NUMA which is used within the next-generation weather prediction model NEPTUNE of the US Navy. NUMA incorporates many recent developments like element-based Galerkin methods and adaptive mesh refinement. The presentation gives an introduction to the solution of the Euler equations of gas dynamics with continuous (spectral element) and discontinuous Galerkin methods.

Recently, the author has shown strong scaling of NUMA on more than 2.7 million threads using the continuous Galerkin method. Two test cases are considered: 1) a baroclinic instability test case by Jablonowski and Williamson (2006) on the sphere using a cubed sphere mesh and 2) a 3D rising thermal bubble test case in a cube. Details of this scalability study are presented and the sensitivity of the results with respect to the polynomial order of the spatial discretization is discussed.

28.05.15 13:00, We6 6.020, Sebastian Mayer (Bonn)

The effect of sparsity and relaxations thereof in certain function approximation problems

Sparsity has proved to be a highly useful property in modern numerical analysis, for instance in the context of compressive sensing or sparse grid methods. In this talk, I consider what effects sparsity assumptions, and relaxations thereof, have in connection with the reconstruction of multivariate functions from partial information. Here, I am particularly interested in understanding the effects from the viewpoint of information-based complexity. Two model function classes will be in the focus of the talk. On the one hand, we study classes of ridge functions $f(x) = g(a\cdot x)$, where the ridge direction $a$ is assumed to be compressible. On the other hand, we consider periodic Sobolev classes, where the Fourier frequency vectors are assumed to be compressible or to lie in a hyperbolic cross.

21.05.15 13:00, We6 6.020, Alfonso Caiazzo (WIAS Berlin)

Multiscale modeling of weakly compressible elastic materials in harmonic regime

This talk focuses on the modeling of elastic materials composed of an incompressible elastic matrix and small compressible gaseous inclusions, under a time-harmonic excitation. In a biomedical context, this model describes the dynamics of a biological tissue (e.g. liver) when wave analysis methods (such as magnetic resonance elastography) are used to estimate tissue properties. Due to the multiscale nature of the problem, direct numerical simulations are prohibitive. First, we describe a homogenized model for the time-harmonic regime which describes the solid-gas mixture as a compressible material in terms of an effective elasticity tensor. Next, we derive and validate numerically analytical approximations of the effective elastic coefficients in terms of macroscopic parameters only. This simplified description is used to set up an inverse problem for the estimation of the tissue porosity, using the mechanical response to external harmonic excitations.

07.05.15 13:00, We6 6.020, Daniel Elfverson (University of Uppsala)

A multilevel subset simulation for estimation of rare events

We want to estimate the probability that a functional is below or above some critical threshold. Standard Monte Carlo is infeasible due to the huge number of samples needed to produce even a single rare event. A remedy is to use variance reduction techniques, e.g., subset simulation. In this work we consider a multilevel subset simulation approach with a varying number of samples on the different levels. The key idea in our new method is to use a posteriori error estimators to guarantee the critical subset property, which may be violated when changing the model resolution from one failure set to the next.
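The identity behind subset simulation, writing a rare-event probability as a product of larger conditional probabilities over nested failure sets, can be checked in closed form for a Gaussian toy problem (an illustration only; the method in the talk estimates each factor by sampling):

```python
import math

def tail(t):
    """P(X > t) for X ~ N(0, 1), via the complementary error function."""
    return 0.5 * math.erfc(t / math.sqrt(2.0))

# Rare event F = {X > 4}, decomposed over nested failure sets
# F_1 = {X > 1}  ⊃  F_2 = {X > 2.5}  ⊃  F_3 = {X > 4}.
thresholds = [1.0, 2.5, 4.0]
prob = tail(thresholds[0])
for t_prev, t in zip(thresholds, thresholds[1:]):
    prob *= tail(t) / tail(t_prev)     # P(X > t | X > t_prev)

print(prob)   # equals P(X > 4) ~ 3.2e-5, but every factor is much larger
```

Each conditional factor is of moderate size, so it can be estimated with far fewer samples than the rare event itself; the multilevel variant additionally varies the model resolution across the factors.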

07.05.15 13:30, We6 6.020, Fredrik Hellman (University of Uppsala)

A multilevel Monte Carlo method for failure probabilities

We analyze the multilevel Monte Carlo (MLMC) method for the irregular failure probability functional. The failure probability is the value of the cumulative distribution function for the quantity of interest at a given point. The low regularity makes MLMC perform suboptimally. The present work uses a posteriori error estimates for the quantity of interest to regain optimal performance. The issue of estimating the numerical bias of the estimator (which is an essential but difficult part of the MLMC algorithm) is also discussed.
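The telescoping-sum structure of MLMC can be sketched on a toy functional with an artificial level-dependent bias (an illustration of the identity E[P_L] = E[P_0] + Σ_ℓ E[P_ℓ − P_{ℓ−1}], not the failure-probability estimator from the talk):

```python
import random

def mlmc(sampler, levels, n_samples, rng):
    """Multilevel Monte Carlo: estimate E[P_L] as E[P_0] plus correction
    terms E[P_l - P_{l-1}], each from coupled samples (same randomness)."""
    est = 0.0
    for l, n in zip(range(levels + 1), n_samples):
        s = 0.0
        for _ in range(n):
            omega = rng.gauss(0.0, 1.0)                  # shared sample
            s += sampler(l, omega) - (sampler(l - 1, omega) if l > 0 else 0.0)
        est += s / n
    return est

# toy level-l approximation of P(omega) = omega**2, with bias 2**(-l) * omega
P = lambda l, omega: omega ** 2 + 2.0 ** (-l) * omega

rng = random.Random(7)
# many cheap coarse samples, few expensive fine ones
estimate = mlmc(P, 4, [40000, 4000, 2000, 1000, 500], rng)
print(estimate)   # close to E[omega**2] = 1
```

Because the correction variances decay geometrically with the level, the fine (expensive) levels need only few samples; the difficulty addressed in the talk is that an indicator functional breaks this variance decay, which the a posteriori error estimates restore.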

23.04.15 13:00, We6 6.020, Lutz Gross (University of Queensland)

On the Solution of Large-scale Geophysical Inversion using Finite Element Methods on Parallel Computers

The inversion of geophysical data is the attempt to build a three-dimensional model of the Earth’s subsurface from data collected on or near the surface. The problem can be formulated as an optimization problem minimizing the defect of measurements and corresponding predictions subject to constraints from physical models in the form of partial differential equations. It is common practice to solve the problem using a ‘first discretize, then optimize’ approach. Although very efficient for small to medium size problems, this approach has strong limitations for large-scale data sets and non-quadratic inversion problems, for instance for electro-magnetic data sets and joint inversion problems. In the talk we will present an alternative framework tailored to the finite element method, following the ‘first optimize, then discretize’ approach. As this approach maintains the problem’s sparsity throughout the inversion, the calculations can easily be applied to large spatial grids in parallel across thousands of processor cores with good scalability. We will discuss mathematical and computational aspects of the approach in the context of the inversion of magnetic and gravity anomaly data. As an application scenario we will present the inversion of satellite gravity data over the entire Australian continent.

12.03.15 14:00, We6 6.020, Vladimir N. Temlyakov (South Carolina)

Greedy algorithms in compressed sensing: Lebesgue-type inequalities for the Chebyshev Greedy Algorithm

We study sparse representations and sparse approximations with respect to redundant dictionaries and bases. We address the problem of designing and analyzing greedy methods of approximation. A key question in this regard is: how does one measure the efficiency of a specific algorithm? To answer this question, we prove Lebesgue-type inequalities for the algorithms under consideration. A very important new ingredient of the talk is that we perform our analysis in a Banach space instead of a Hilbert space. It is known that in many numerical problems users are satisfied with a Hilbert space setting and do not consider the more general setting of a Banach space. There are known arguments that justify interest in Banach spaces. In this talk we give one more argument in favor of considering greedy approximation in Banach spaces. We introduce a condition on a dictionary in a Banach space which is a generalization of the RIP concept in a Hilbert space. We analyze the Weak Chebyshev Greedy Algorithm, which is a generalization of the Orthogonal Greedy Algorithm (Orthogonal Matching Pursuit) to Banach spaces. We demonstrate the power of this method on several examples, including the trigonometric system.

12.03.15 15:00, We6 6.020, Dinh Dung (Hanoi)

Hyperbolic cross approximation in infinite dimensions

We give tight upper and lower bounds on the cardinality of the index sets of certain hyperbolic crosses which reflect mixed Sobolev-Korobov-type smoothness and mixed Sobolev-analytic-type smoothness in the infinite-dimensional case where specific summability properties of the smoothness indices are fulfilled. These estimates are then applied to the linear approximation of functions from the associated spaces in terms of the \varepsilon-dimension of their unit balls. Here, the approximation is based on linear information. Such function spaces appear for example in the solution of parametric and stochastic PDEs. The obtained upper and lower bounds of the approximation error as well as of the associated \varepsilon-complexities are completely independent of any dimension. Moreover, the rates are independent of the parameters which define the smoothness properties of the infinite-variate parametric or stochastic part of the solution. These parameters enter only the order constants. This way, linear approximation theory becomes possible in the infinite-dimensional case, and the corresponding infinite-dimensional problems become tractable. This is joint work with M. Griebel (Bonn).

02.03.15 14:00, We6 6.020, Wolfgang Hackbusch (MPI Leipzig and CAU Kiel)

Recursive Low-Rank Truncation of Block-Structured Matrices

The best approximation of a matrix by a low-rank matrix can be obtained by the singular value decomposition. For large-sized matrices this approach is too costly. Instead one may use a block decomposition. Approximating the small submatrices by low-rank matrices and agglomerating them into a new, coarser block decomposition, one obtains a recursive method. The required computational work is O(rnm), where r is the desired rank and n \times m is the size of the matrix. The paper discusses new error estimates for A - B and M - A, where A is the result of the recursive truncation applied to M, while B is the best approximation.
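The agglomeration idea can be sketched as follows; this is a generic illustration assuming a 2×2 block splitting and a QR-plus-small-SVD retruncation, not necessarily the exact scheme of the talk:

```python
import numpy as np

def truncate(U, V, r):
    # Retruncate a factored matrix U @ V.T to rank r via two thin QRs
    # and an SVD of the small middle factor.
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)
    return Qu @ (W[:, :r] * s[:r]), Qv @ Zt[:r].T

def recursive_lowrank(M, r, leaf=64):
    # Recursively approximate M by a rank-r factorization U @ V.T:
    # small blocks are truncated directly by SVD, larger ones are
    # assembled from four low-rank sub-blocks and retruncated.
    n, m = M.shape
    if n <= leaf or m <= leaf:
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U[:, :r] * s[:r], Vt[:r].T
    i, j = n // 2, m // 2
    blocks = [(slice(0, i), slice(0, j)), (slice(0, i), slice(j, m)),
              (slice(i, n), slice(0, j)), (slice(i, n), slice(j, m))]
    U = np.zeros((n, 4 * r))
    V = np.zeros((m, 4 * r))
    for k, (ri, cj) in enumerate(blocks):
        Ub, Vb = recursive_lowrank(M[ri, cj], r, leaf)
        U[ri, k * r:k * r + Ub.shape[1]] = Ub
        V[cj, k * r:k * r + Vb.shape[1]] = Vb
    return truncate(U, V, r)  # agglomerate: rank 4r -> rank r
```

Each agglomeration works only on factors of width at most 4r, which is what keeps the total cost at O(rnm) rather than the cost of a full SVD.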

22.01.15 13:00, We6 6.020, Dirk Pflüger and Fabian Franzelin (Stuttgart)

Sparse Grids: challenges from applications

In the first part of the presentation we will give an overview of typical challenges for sparse grid methods that are posed by applications. Think of black-box simulation codes on future exascale machines where one has limited access to code and numerics. Think of surrogate modeling for optimization or uncertainty quantification where every function evaluation is costly, or of data-driven problems where the amount of data is limited. These problems impose new requirements on sparse grids.

From data to uncertainty: an integrated sparse grid approach

In the second part we present a novel data-driven approach to propagate uncertainty. We use an integrated sparse grid approach for the uncertainty quantification pipeline from data to the stochastic analysis of the quantity of interest of some physical model. The integrated sparse grid approach gives us two main advantages: First, we can easily link the density estimation to the adaptive sparse grid collocation method to propagate uncertainty. Second, we can efficiently estimate the moments of the sparse grid surrogate without any additional approximation errors. We evaluate the new approach using an expensive subsurface flow simulation.

15.01.15, We6 6.020, Philipp Morgenstern (Bonn)

Analysis-suitable adaptive T-mesh refinement with linear complexity

At present, the main interest in IGA is in finding discrete function spaces that integrate well into CAD applications and, at the same time, can be used for Finite Element Analysis. One of the most promising approaches is T-splines, introduced as a free-form geometric technology in 2003. We present an efficient adaptive refinement procedure that preserves analysis-suitability of the T-mesh, that is, the linear independence of the T-spline blending functions. We prove analysis-suitability of the overlays and boundedness of their cardinalities, nestedness of the generated T-spline spaces, and linear computational complexity of the refinement procedure in terms of the number of marked and generated mesh elements.

08.01.15, We6 6.020, André Uschmajew (Bonn)

On low-rank approximability of solutions to Kronecker-structured operator equations

Let A and B be invertible, and let C have low rank. Then it is trivial that the rank of the solution to the matrix equation A X B^T = C is not larger than the rank of C. Formally speaking, this matrix equation admits a low-rank solution. There is some numerical evidence that the solution of frequently occurring linear equations like A_1 X B_1^T + \dots + A_R X B_R^T = C, where the number of terms is still considerably smaller than the matrix dimension (which might be infinite in a Hilbert space setting), has good low-rank approximability, that is, fast decaying singular values. Exponential decay, as sometimes observed in practice, seems very challenging to prove. It is, however, rather easy to obtain a nontrivial algebraic decay rate by relating the convergence speed of some polynomial approximation to the Kronecker rank of the corresponding operator polynomial. For an eigenvalue problem A X B^T = \lambda X a similar argumentation is possible, although with considerable restrictions. This simple method of estimating the low-rank approximation error for matrices has important consequences for the low-rank approximability of solutions to Kronecker-structured operator equations in higher-order tensor product spaces, as it provides estimates on the singular value decay of certain matrix unfoldings of the solution tensor, which in turn govern the error in higher-order SVD truncations. The talk is based on joint work with Daniel Kressner.

18.12.14, We6 6.020, Beatrice Vedel (Bretagne-Sud)

Hyperbolic multifractal analysis and application to textures classification

Multifractal analysis has been proposed to describe the complexity of roughness in numerous phenomena, such as turbulent flows or financial data. In the first part of the talk, I will introduce classical multifractal analysis and show how it can be applied to real data. In the second part, I will present our recent work on a hyperbolic multifractal analysis able to describe roughness in the presence of anisotropy.

04.12.14, We6 6.020, Van Kien Nguyen (Jena)

Weyl numbers of embeddings of Besov spaces with dominating mixed smoothness

Let X be a Banach space and T: X \to X a compact linear operator. By (\lambda_n(T))_{n=1}^\infty we denote the sequence of all non-zero eigenvalues of T. The particular interest in Weyl numbers comes from the fact that they are the smallest known s-numbers satisfying the famous Weyl-type inequalities, i.e.,

\Big( \prod_{k=1}^{2n-1} |\lambda_k (T)|\Big)^{1/(2n-1)} \le \sqrt{2e} \, \Big( \prod_{k=1}^{n} x_k (T)\Big)^{1/n} for all n \in \mathbb{N}.

The above inequality shows that Weyl numbers can be used to control the distribution of the eigenvalues of T. In this talk we shall give sharp estimates of the Weyl numbers of embeddings of Besov spaces of dominating mixed smoothness into Lebesgue spaces.

27.11.14, Seyedehsomayeh Hosseini (ETH Zürich)

Subgradient algorithms for locally Lipschitz functions on Riemannian manifolds

This talk is concerned with the numerical solution of optimization problems defined on Riemannian manifolds where the objective function may be nonsmooth. Such problems arise in a variety of applications, e.g., in computer vision, signal processing, motion and structure estimation, or numerical linear algebra. We present a descent direction method for finding extrema of locally Lipschitz functions defined on Riemannian manifolds. Then, we establish the convergence of our algorithm to a stationary point.

20.11.14, Anna Persson (Göteborg)

Multiscale techniques for parabolic equations

We use local orthogonal decomposition techniques to derive a generalized finite element method for a parabolic equation with multiscale coefficients. A review of existing results for elliptic multiscale problems is given and we discuss how to extend these to the parabolic case. We conclude by outlining the proof of optimal order convergence rate of the error in the L_2-norm (in space).

13.11.14, Mira Schedensack (HU Berlin)

Comparison results for first-order FEMs

This talk establishes the equivalence of the conforming P_1 finite element method, the nonconforming Crouzeix-Raviart finite element method, and several first-order discontinuous Galerkin finite element methods, in the sense that the respective energy error norms are equivalent up to generic constants and higher-order data oscillations in a Poisson model problem. The Raviart-Thomas mixed finite element method is better than the previous methods, whereas the conjectured converse relation is proved to be false.

Similar results hold for linear elasticity. The error of two nonconforming first-order methods is bounded by the error of the conforming first-order method up to a multiplicative constant which is robust with respect to the incompressible limit, while the converse estimates deteriorate for the Lamé parameter \lambda \to \infty. This reflects the well-known locking behaviour of the conforming P_1 finite element method.

06.11.14, Henning Kempka (TU Chemnitz)

Function spaces with variable exponents

Lebesgue spaces with variable exponent L_{p(\cdot)} were introduced as early as 1931 by Orlicz. The recent interest in these spaces results from emerging applications in the theory of non-Newtonian fluids and in image processing. Surprisingly, Diening showed in 2004 that the Hardy-Littlewood maximal operator acts boundedly on L_{p(\cdot)}, which was the functional analytic breakthrough and started a growing interest in variable exponent Lebesgue spaces. Subsequently, many scientists became interested in function spaces with variable exponents, and a lot of other function spaces (Sobolev, Hölder, Besov, Triebel-Lizorkin, Hardy, Morrey, …) with variable integrability and variable smoothness are under consideration today. We focus on the basic definitions of L_{p(\cdot)} and the Besov-Triebel-Lizorkin spaces B^{s(\cdot)}_{p(\cdot),q(\cdot)} and F^{s(\cdot)}_{p(\cdot),q(\cdot)} with all three indices variable, and show some important properties in these scales.

30.10.14, Dietmar Gallistl (Bonn)

Adaptive finite element approximation of eigenvalue clusters

The adaptive finite element approximation of multiple eigenvalues leads to the situation of eigenvalue clusters because the eigenvalues of interest and their multiplicities may not be resolved by the initial mesh. In practice, also little perturbations in coefficients or in the geometry immediately lead to an eigenvalue cluster. The first part of this talk presents an adaptive algorithm for eigenvalue clusters and shows optimal decay rates of the error in terms of degrees of freedom. Numerical tests suggest that the adaptive cluster approximation is superior compared to the use of an adaptive scheme for each eigenvalue separately, even if the multiplicities are known.

The optimality in terms of the concept of nonlinear approximation classes is concerned with the approximation of invariant subspaces spanned by eigenfunctions of an eigenvalue cluster. In order to obtain eigenvalue error estimates in the case of nonconforming finite element approximations, the second part of the talk presents new error estimates for nonconforming finite elements which relate the error of the eigenvalue approximation to the error of the approximation of the invariant subspace.

23.10.14, Holger Rauhut (RWTH Aachen)

Numerical solution of high-dimensional parametric operator equations via compressive sensing

Compressive sensing enables accurate recovery of approximately sparse vectors from incomplete information. We apply this principle to the numerical solution of parametric operator equations where the parameter domain is high-dimensional. In fact, one can show that the solution of certain parametric operator equations (parametric PDEs) is analytic in the parameters which can be exploited to show convergence rates for nonlinear (sparse) approximation. Building on this fact, we show that methods from compressive sensing can be used to compute approximations from samples (snapshots) of the parametric operator equation for randomly chosen parameters, which in turn can be computed by standard techniques including Petrov-Galerkin methods. We provide theoretical approximation rates for this scheme. This is joint work with Christoph Schwab.

14.07.14, Toni Volkmer (TU Chemnitz)

Approximation of high-dimensional multivariate periodic functions based on rank-1 lattice sampling

We consider the approximation of multivariate periodic functions f belonging to subspaces of the Wiener algebra. For the approximation of f, a trigonometric polynomial p with frequencies supported on an index set I \subset \mathbb{Z}^d is used, and the Fourier coefficients of p are computed by sampling the function f along a rank-1 lattice. For this computation a fast algorithm can be used, which is based mainly on a single one-dimensional fast Fourier transform, and the arithmetic complexity of the algorithm is \mathcal{O}(|I|^2 \log |I| + d|I|). We show upper bounds for the approximation error and achieve an optimal order of convergence when using suitable frequency index sets I. Numerical results are presented up to dimension d = 10, which confirm the theoretical findings.

This is joint work with Lutz Kämmerer and Daniel Potts.


14.07.14, Lutz Kämmerer (TU Chemnitz)

Sampling multivariate trigonometric polynomials and smooth periodic functions along rank-1 lattices

The approximation of functions f in d spatial dimensions by sparse multivariate trigonometric polynomials supported on known frequency index sets I \subset \mathbb{Z}^d is an important task with a variety of applications. We are interested in computing such an approximation from function values in a fast way. Rank-1 lattices as sampling scheme allow for the fast approximation of the Fourier coefficients \hat{f}_{\mathbf{k}}, \mathbf{k} \in I, of f using a single one-dimensional fast Fourier transform.

For a fixed arbitrary frequency index set I and a suitable rank-1 lattice size M, we apply a simple component-by-component construction in order to determine a generating vector \mathbf{z} \in \mathbb{Z}^d such that sampling along the rank-1 lattice \Lambda(\mathbf{z}, M) allows for the exact reconstruction of all trigonometric polynomials with frequencies supported on the index set I. We discuss necessary and sufficient conditions on the rank-1 lattice size M guaranteeing that our component-by-component strategy succeeds.

Once we have determined a reconstructing rank-1 lattice \Lambda(\mathbf{z}, M) for the frequency index set I, we can use it as sampling scheme in order to approximate smooth periodic functions f = \sum_{\mathbf{k} \in \mathbb{Z}^d} \hat{f}_{\mathbf{k}} \mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ} by trigonometric polynomials p = \sum_{\mathbf{k} \in I} \hat{p}_{\mathbf{k}} \mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ} with frequencies supported on I. Assuming the exact Fourier partial sum S_I f := \sum_{\mathbf{k} \in I} \hat{f}_{\mathbf{k}} \mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ} of f is a good approximation of f, the approximation \tilde{S}_I f := \sum_{\mathbf{k} \in I} \hat{\tilde{f}}_{\mathbf{k}} \mathrm{e}^{2\pi\mathrm{i}\mathbf{k}\cdot\circ} of f, with \hat{f}_{\mathbf{k}} \approx \hat{\tilde{f}}_{\mathbf{k}} = \sum_{j=0}^{M-1} f(\frac{j}{M}\mathbf{z}) \mathrm{e}^{-2\pi\mathrm{i} j M^{-1} \mathbf{k}\cdot\mathbf{z}}, \mathbf{k} \in I, is in some sense (almost) as good as the exact Fourier partial sum S_I f.

Furthermore, we discuss in detail the relation between the reconstruction of trigonometric polynomials and the approximation properties of the presented sampling method. We show that the reconstruction property of the rank-1 lattice is, in general, necessary in order to achieve approximations \tilde{S}_I f of f that allow for upper bounds on f - \tilde{S}_I f of the same order as the error bounds on f - S_I f.
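The central identity — that evaluating a trigonometric polynomial at all nodes of a rank-1 lattice reduces to a single one-dimensional FFT — can be illustrated with the following sketch (function names and the example index set are hypothetical):

```python
import numpy as np

def eval_on_rank1_lattice(coeffs, freqs, z, M):
    # Evaluate p(x) = sum_k c_k exp(2*pi*i k.x) at the lattice nodes
    # x_j = (j/M) z mod 1, j = 0..M-1. Since k.x_j = j (k.z)/M, the
    # coefficients can be aggregated into M bins indexed by k.z mod M,
    # after which one length-M inverse FFT produces all node values.
    g = np.zeros(M, dtype=complex)
    for c, k in zip(coeffs, freqs):
        g[np.dot(k, z) % M] += c
    return np.fft.ifft(g) * M  # values p(x_j), j = 0..M-1
```

The same aggregation argument run backwards (a forward FFT of the sampled values) yields the approximate Fourier coefficients, which is what makes rank-1 lattice sampling fast.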

11.07.14, Peter Oswald (Jacobs University, Bremen)

Stable space splittings: New developments

I review the Hilbert space concept of stable space splittings and address its connection with some current research topics such as randomized POCS algorithms for least-squares problems.

11.07.14, Weiqi Zhou (Jacobs University, Bremen)

Convergence rate of the classical Kaczmarz method

The Kaczmarz method (also referred to as “ART”) is an easy-to-implement and convergent iterative method for solving rectangular linear systems. The method carries out alternating orthogonal projections and can be analyzed within the framework of POCS methods, or via its interpretation as a Gauss-Seidel method on an equivalent system.

We review the development and main results of this method and present an upper bound for the convergence rate of its classical version, which, contrary to common perception, turns out to be not much worse than the convergence rate estimates of its recently proposed randomized versions. Numerical examples on the impact of ordering on the convergence rates will also be presented if time permits.
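For reference, the classical cyclic Kaczmarz sweep mentioned above can be sketched in a few lines (a generic illustration, not the speaker's code):

```python
import numpy as np

def kaczmarz(A, b, sweeps=100, x0=None):
    # Classical (cyclic) Kaczmarz: in each step, project the iterate
    # orthogonally onto the hyperplane {x : a_i . x = b_i} of one row.
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float)
    row_norms_sq = np.einsum('ij,ij->i', A, A)  # squared row norms
    for _ in range(sweeps):
        for i in range(m):
            x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x
```

The randomized variants differ only in how the row index i is chosen (e.g., with probability proportional to the squared row norm) rather than cyclically.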

07.07.14, Jan Kleinert (Fraunhofer ITWM, Kaiserslautern)

Nonsmooth Rigid Body Dynamics – Simulating Granular Material

The field of Nonsmooth Contact Dynamics (NSCD) provides an increasingly popular simulation framework for granular material. In contrast to penalty-based Discrete Element Methods (DEM), this approach is stable for arbitrary time steps and produces visually acceptable results in very short computing time. Yet when it comes to the prediction of draft forces, NSCD relies on powerful numerical solvers.

This talk sets out to discuss NSCD as an alternative to DEM. The physical model behind NSCD is laid out, starting from Hamilton’s principle of least action to the discretized equations of motion. To integrate these equations in time, it is necessary to solve a large Cone Complementarity Problem (CCP) per time step. Two numerical methods to solve the CCP are introduced. The first is the Projected Gauß-Jacobi solver, which is widely used for the simulation of granular material. But it suffers from slow convergence when applied to large systems. The second technique, an Interior Point Method based on Jordan algebras, exhibits better convergence properties whenever the accuracy requirements are high.

30.06.14, Lei Zhang (Shanghai Jiao Tong University)

Construction and analysis of consistent energy based atomistic/continuum coupling methods

We discuss the construction and numerical analysis of energy based atomistic/continuum coupling methods (A/C coupling) for modeling crystalline solids with defects, in particular the issues of consistency (so-called ‘ghost force removal’) and stability of the coupling method. For general multi-body interactions on the 2D triangular lattice, we show that ghost-force-free (patch test consistent) a/c methods can be constructed for arbitrary interface geometries [1]. Moreover, we prove that all methods within this class are first-order consistent at the atomistic/continuum interface and second-order consistent in the interior of the continuum region. The convergence and stability of the method is analyzed and justified with numerical experiments [2,3]. The development of an optimal implementation for consistent methods is discussed [3].

[1] Construction and sharp consistency estimates for atomistic/continuum coupling methods with general interfaces: a 2D model problem, C. Ortner, L. Zhang, SIAM Journal on Numerical Analysis, Volume 50, Issue 6, pp. 2940-2965 (2012).

[2] (In-)Stability and Stabilisation of QNL-Type Atomistic-to-Continuum Coupling Methods, C. Ortner, A. Shapeev, L. Zhang, 2013, Accepted by SIAM Multiscale Model. Simul.

[3] Implementation of Geometric Reconstruction Based Atomistic-to-Continuum Coupling, C. Ortner, L. Zhang, 2013, Accepted by Computer Methods in Applied Mechanics and Engineering

23.06.14, Volodya Temlyakov (University of South Carolina)

Bilinear and multilinear approximation of functions

We are interested in the approximation of a multivariate function f(x_1, \ldots, x_d) by linear combinations of products u_1(x_1) \cdots u_d(x_d) of univariate functions u_j(x_j), j = 1, \ldots, d. In the case d = 2 this is the classical problem of bilinear approximation. In the case of approximation in the L_2 space, the bilinear approximation problem is closely related to the problem of singular value decomposition (also called Schmidt expansion) of the corresponding integral operator with the kernel f(x_1, x_2). We will discuss some known results on the rate of decay of errors of best bilinear approximation in L_p under different smoothness assumptions on f. The problem of multilinear approximation in the case d \geq 3 is more difficult and much less studied than the bilinear approximation problem. We will present some known and recent results on best multilinear approximation in L_p under mixed smoothness assumptions on f.

16.06.14, Benjamin Doerr (Ecole Polytechnique, Paris)

The discrepancy of random points

In this talk, we discuss the star discrepancy of a random point set. We show that there are constants k, K > 0 such that for all N, s \in \mathbb{N}, s \le N, the point set consisting of N points chosen uniformly at random in the s-dimensional unit cube [0,1]^s with probability at least 1 - e^{-ks} admits an axis-parallel rectangle [0,x] \subseteq [0,1]^s containing K\sqrt{sN} points more than expected. Together with the famous upper bound by Heinrich, Novak, Wasilkowski, and Woźniakowski (2001), our result shows that the expected star discrepancy of a random point set is of order \sqrt{s/N}. Towards the end of the talk, we shall discuss to what extent these results also hold for spherical cap discrepancies.

26.05.14, Aviv Gibali (Fraunhofer ITWM, Kaiserslautern)

Projection methods - a powerful tool for real-world problems and combinatorial games

Projection methods are iterative algorithms that use projections onto sets while relying on the general principle that when a family of sets is present, then projections onto the given individual sets are easier to perform than projections onto other sets (intersections, image sets under some transformation, etc.) that are derived from the given individual sets.

Their robustness, low computational effort and ability to handle huge-size problems make them very useful for many convex real-world problems such as Image Reconstruction (IR) and Intensity-Modulated Radiation Therapy (IMRT) Treatment Planning. Over the past decade, some projection methods have proven very effective also in some non-convex and discrete settings such as combinatorial games.

In this talk we describe several types of projection methods and illustrate their performance both for convex problems such as IR and IMRT, and also for non-convex problems such as the 8 queens puzzle and Sudoku.
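As a minimal illustration of the projection principle (not taken from the talk), cyclic projections onto half-spaces solve a simple convex feasibility problem; all names below are hypothetical:

```python
import numpy as np

def project_halfspace(x, a, beta):
    # Orthogonal projection of x onto the half-space {x : a.x <= beta}:
    # a closed-form projection onto one "easy" individual set.
    viol = a @ x - beta
    return x if viol <= 0 else x - viol / (a @ a) * a

def pocs(x, constraints, iters=500):
    # Cyclic projections onto a family of half-spaces; for a non-empty
    # intersection the iterates converge to a feasible point.
    for _ in range(iters):
        for a, beta in constraints:
            x = project_halfspace(x, a, beta)
    return x
```

The point of the method is exactly the one stated above: each individual projection is cheap and explicit, while projecting onto the intersection directly would be hard.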

26.05.14, Jan Schwientek (Fraunhofer ITWM, Kaiserslautern)

Chebyshev approximation and semi-infinite programming

In applications one often seeks to approximate a given continuous function F on a non-empty and compact set Z \subset \mathbb{R}^n by a simpler function f(p, \cdot) which can be chosen from a parametric family of continuous functions \{f(p, \cdot) \mid p \in P\} with some parameter set P \subset \mathbb{R}^m. Depending on the application, different norms may be used to measure the deviation between F and f(p, \cdot) on Z. If one wishes to minimize the maximal deviation, the Chebyshev norm has to be used. This leads to the non-smooth problem of Chebyshev approximation: CA: \min_{p \in P} \|F(\cdot) - f(p, \cdot)\|_{\infty, Z}. Via the epigraph reformulation, CA can be rewritten as a smooth standard semi-infinite programming (SIP) problem.

In engineering applications a modification of CA, termed reversed Chebyshev approximation, has received interest. There, given an approximating family of functions \{f(p, \cdot) \mid p \in P\} and a desired precision e(p, q), the aim is to find parameter vectors p \in P and q \in Q such that the domain Z(q) is as large as possible without exceeding the approximation error e(p, q): RCA: \max_{(p,q) \in P \times Q} \mathrm{Vol}[Z(q)] s.t. \|F(\cdot) - f(p, \cdot)\|_{\infty, Z(q)} \leq e(p, q). This problem can be reformulated as a semi-infinite programming problem, too, but it results in a general(ized) semi-infinite programming (GSIP) problem. In this talk we give an overview of semi-infinite programming (theoretical foundations, solution methods and further applications) and show how (reversed) Chebyshev approximation problems can be solved in this framework. Furthermore, we present a real-world application of Chebyshev approximation, namely the uniform coverage problem in spunbond production, and illustrate its solution as a semi-infinite programming problem by a numerical example.

19.05.14, Christiane Helzel (HHU Düsseldorf)

High-order WENO finite volume methods for Cartesian grids

High-order WENO (i.e., weighted essentially non-oscillatory) methods are widely used for the approximation of hyperbolic partial differential equations. A common approach to use WENO methods on multidimensional Cartesian grids consists in applying a one-dimensional WENO method in each direction. This spatial discretization is typically combined with a Runge-Kutta method in time, i.e., during each stage of a Runge-Kutta method one-dimensional WENO schemes are used in a dimension-by-dimension fashion.

However, it is known that finite volume WENO methods based on a dimension-by-dimension approach retain the full order of accuracy (of the one-dimensional method) for linear multidimensional problems, but they are only second order accurate for the approximation of nonlinear multidimensional problems.

In my talk, I will present a simple modification of finite volume WENO methods, which leads to the full spatial order of accuracy by using only one-dimensional polynomial reconstructions in a dimension-by-dimension approach. Furthermore, I will discuss the use of this method on adaptively refined grids as well as on Cartesian grids with embedded boundaries.

This is recent joint work with Pawel Buchmüller and Jürgen Dreher.

12.05.14, Hieu Nguyen (Edinburgh)

Combining domain decomposition and adaptivity: a study for additive Schwarz methods

Domain decomposition refers to the splitting of a PDE into coupled subproblems on smaller subdomains forming a partition of the original domain. It is the most popular approach to solve large-scale problems on parallel supercomputers, as the subproblems can be solved largely independently on different processors. When domain decomposition is combined with adaptivity, there are challenges in maintaining good load balancing and mesh conformity among subdomains. These challenges can be overcome using the Bank-Holst parallel adaptive meshing paradigm. In this talk, we will formulate and analyze new additive Schwarz methods for this paradigm. Our methods take advantage of the local adaptive meshes of the whole domain, which are fine in their associated subdomains but generally much coarser elsewhere. We will show that these methods are optimal, i.e., their rates of convergence do not depend on the mesh sizes or the number of subdomains. Numerical results will be provided to support our theoretical findings.

05.05.14, Aicke Hinrichs (Rostock)

Complexity of Banach space valued integration in the randomized setting

We study the complexity of Banach space valued integration problems in the randomized setting. We explain the interplay between the geometry of the underlying Banach space and the rate of the n-th minimal errors. The relevant notion of Banach space geometry is the Rademacher type of the space. We also explain the motivation, which comes from applications to the complexity of definite and indefinite integration and to the solution of ODEs. This is joint work with Stefan Heinrich.

28.04.14, Mario Ullrich (Jena)

On integration in Sobolev spaces using Frolov’s method

We present the method of Frolov for the integration of (periodic) functions in the Sobolev space H^s_{\rm mix}, s \in \mathbb{N}, of dominating mixed smoothness. This method is completely constructive and has the optimal order of convergence for functions with support (strictly) contained in the unit cube. A standard modification of the algorithm also leads to the optimal order for non-periodic functions. Due to the simplicity of the method, we will present the whole proof.

03.04.14, Dinh Zung (Hanoi)

Sampling Recovery and Cubature on Sparse Grids Based on a B-spline Quasi-Interpolation

Let X_n = \{x_j\}_{j=1}^n be a set of n points in the d-cube [0,1]^d, and \Phi_n = \{\varphi_j\}_{j=1}^n a family of n functions in the space L_q([0,1]^d), 0 < q \leq \infty. We consider the approximate recovery in L_q([0,1]^d) of functions f on [0,1]^d from the sampled values f(x_1), \dots, f(x_n) by the linear sampling algorithm L_n(X_n, \Phi_n, f) := \sum_{j=1}^n f(x_j)\varphi_j. The error of sampling recovery is measured in the L_q([0,1]^d)-norm or in the energy norm of the isotropic Sobolev space W_q^{\gamma}([0,1]^d) for 0 < q \leq \infty and \gamma > 0. The functions f to be recovered are from the unit ball in Besov-type spaces of anisotropic smoothness, in particular spaces B^a_{p,\theta} of a nonuniform mixed smoothness a \in \mathbb{R}^d, and spaces B^{\alpha,\beta}_{p,\theta} of a “hybrid” of mixed smoothness \alpha > 0 and isotropic smoothness \beta \in \mathbb{R}. We construct optimal linear sampling algorithms L_n(X_n^*, \Phi_n^*, \cdot) on special sparse grids X_n^* with a family \Phi_n^* of linear combinations of integer-translated dilations of tensor products of B-splines, and we compute the asymptotics of the error of the optimal recovery. As consequences we obtain the asymptotics of optimal cubature formulas for the integration of functions from the unit ball of these Besov-type spaces.

20.01.14, Jens Oettershagen (Bonn)

Integration of bivariate periodic functions with bounded mixed derivatives

We investigate integration in certain reproducing kernel Hilbert spaces of bivariate functions that possess bounded mixed derivatives up to order $r$. We compare worst-case errors of full tensor products of the rectangle rule, sparse grids, Hammersley QMC and the Fibonacci lattice rule, with the latter being superior to all the other approaches.
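As a toy illustration (my sketch, not from the talk), the Fibonacci lattice rule on $[0,1]^2$ averages the integrand over the $n = F_m$ nodes $(j/F_m, \{jF_{m-1}/F_m\})$:

```python
import numpy as np

def fibonacci_lattice_rule(f, m):
    """Approximate the integral of f over [0,1]^2 by the Fibonacci lattice
    rule with n = F_m nodes and generating vector (1, F_{m-1})."""
    F = [1, 1]
    while len(F) < m:
        F.append(F[-1] + F[-2])
    n, g = F[-1], F[-2]                 # n = F_m, g = F_{m-1}
    j = np.arange(n)
    x, y = j / n, (j * g % n) / n       # nodes (j/n, {j g / n})
    return np.mean(f(x, y))

# Smooth periodic test integrand with exact integral 1
approx = fibonacci_lattice_rule(
    lambda x, y: (1 + np.sin(2 * np.pi * x)) * (1 + np.sin(2 * np.pi * y)), 20)
```

For trigonometric polynomials whose frequencies avoid the dual lattice, as in this example, the rule is exact up to rounding; for general periodic functions with bounded mixed derivatives it attains the near-optimal rates compared in the talk.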

03.02.14, Glenn Byrenheid (Bonn)

Optimal sampling of d-variate periodic functions with respect to energy sparse grids

Motivated by Yserentant's results on the regularity of eigenfunctions of the electronic Schrödinger operator, it seems useful to study the approximation behavior of functions belonging to a space of dominating mixed smoothness. We are interested in finding good approximations of functions with this kind of regularity from finite-dimensional subspaces, where the error is measured in the so-called energy norm $H^1$. We propose subspaces (energy-norm sparse grids) and corresponding linear mappings which yield the optimal asymptotic error rate (approximation numbers) in terms of the number of degrees of freedom for periodic $d$-variate functions with mixed smoothness $s > 1$. Moreover, the mentioned linear mappings only use function evaluations of the function $f$ to be approximated. In particular, we obtain sharp bounds for sampling numbers in this context, which coincide with the corresponding estimates of the approximation numbers of the respective embedding. The rate is polynomial of order $s-1$ without the logarithmic perturbation which would occur for classical sparse grid methods. Our proofs rely on classical Bernstein and discrete Hardy-type inequalities to obtain dyadic sampling representations, i.e., equivalent norms in the source space in which only function values enter. The results are part of joint work with W. Sickel (Jena), Dinh Dung (Hanoi) and Tino Ullrich (Bonn).

27.01.14, Karoline Köhler (HU Berlin)

Efficient and Reliable Error Control for the Obstacle Problem

This talk presents some results on the error analysis of the obstacle problem, which is the prototypical example of a variational inequality. The first part of the talk deals with a general reliable and efficient error estimate, independent of any specific finite element method. Therein, efficiency is understood in terms of the error $u-v$ in $H^1_0(\Omega)$ for the exact solution $u$ and an approximation $v$ in $H^1_0(\Omega)$ in the primal variable, and the error $\lambda-\mu$ for the exact Lagrange multiplier $\lambda$ and an approximation $\mu$ in the dual space $H^{-1}(\Omega)$. This leads to the equivalence $|||u-v||| + |||\lambda-\mu|||_* \approx \text{computable terms}$, possibly up to multiplicative generic constants. This general result is applied to the conforming Courant finite element method (FEM) in the second part of the talk, where the efficiency is analysed beyond the aforementioned equivalence and leads to $|||u-v||| \approx \text{computable terms}$. Numerical experiments which demonstrate and highlight the results for various finite element methods conclude the talk.

13.01.14, Fabian Franzelin (Stuttgart)

Spatially adaptive sparse grid collocation for multivariate peridynamic simulations

Peridynamics is an accepted method in engineering for modeling crack propagation on a macroscopic scale. However, the sensitivity of the method to two important model parameters, the elasticity $\alpha$ and the particle density, has not yet been studied. To make an efficient computation of sensitivity values possible, we used the adaptive sparse grid collocation method (ASGC). Motivated by the works of Silling and Owhadi, we applied it to a peridynamic simulation of a high-speed projectile impacting a plate. We introduced the force of magnitude $K$ exerted by the indenter as one additional uncertain parameter. The scenario requires custom-tailored criteria for adaptive refinement to reduce the number of sampling points as far as possible. The results obtained by the stochastic collocation method using spatially adaptive sparse grids are evaluated and compared to previous methods in this context.

17.12.13, Philipp Morgenstern (Bonn)

Local refinement of regular quadrilateral meshes

Adaptive finite element methods (AFEM) are commonly used for partial differential equations that require high resolution in small regions of the domain (due to local singularities such as re-entrant corners) while keeping a coarse mesh on other parts of the domain. This requires an algorithm for adaptive mesh refinement which preserves the shape of the elements as well as the regularity of the mesh. Moreover, theoretical purposes such as optimality of the convergence rates require an algorithm that allows for overlays and has linear memory complexity. We introduce a novel algorithm for the local refinement of regular quadrilateral meshes. This algorithm preserves the shape regularity of the initial mesh, allows the construction of overlays (that is, the coarsest common refinement of two meshes) and comes with linear complexity in terms of memory. The latter means that the number of new quadrilaterals in the fine mesh is essentially bounded by the number of atomic markings in the preceding meshes.

03.12.13, Lev Markhasin (Stuttgart)

Discrepancy and integration in function spaces with dominating mixed smoothness

Optimal lower bounds for discrepancy in Besov spaces with dominating mixed smoothness are known from the work of Triebel. Hinrichs proved upper bounds in the plane. Chen–Skriganov point sets are known to be optimal in the sense of $L_p$-discrepancy, but they also satisfy the optimal upper bounds for discrepancy in Besov spaces with dominating mixed smoothness for certain parameters. Results for Triebel–Lizorkin and Sobolev spaces with dominating mixed smoothness and for the integration error can be concluded.

26.11.13, Sebastian Mayer (Bonn)

Dimensionality reduction in machine learning

In this talk, I survey the use of (randomized) dimensionality reduction in machine learning, with special emphasis on Johnson-Lindenstrauss embeddings (JLE). First, I outline a number of applications, including approximate nearest neighbor search and classification tasks, where dimensionality reduction has played a major role in making computations feasible. I then delve into the theoretical foundations of Johnson-Lindenstrauss embeddings. This puts us in a position to understand why randomized dimensionality reduction has been successful in several situations, but also why it has little chance of working in others.
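As a toy illustration (my sketch, not from the talk), a Gaussian Johnson-Lindenstrauss embedding multiplies the data by a random matrix with entries $N(0, 1/k)$; with high probability all pairwise distances among $n$ points are preserved up to a factor $1 \pm \varepsilon$ once $k \gtrsim \varepsilon^{-2}\log n$:

```python
import numpy as np

rng = np.random.default_rng(0)

def jl_embed(X, k):
    """Project the rows of X to dimension k with a Gaussian JL matrix
    whose entries are N(0, 1/k), so squared norms are preserved on average."""
    A = rng.normal(0.0, 1.0 / np.sqrt(k), size=(X.shape[1], k))
    return X @ A

def pairwise_dists(Z):
    """All Euclidean distances between the rows of Z."""
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

X = rng.normal(size=(50, 2000))     # 50 points in dimension 2000
Y = jl_embed(X, 400)                # embedded into dimension 400

# Distortion of all pairwise distances under the embedding
mask = ~np.eye(len(X), dtype=bool)
ratios = pairwise_dists(Y)[mask] / pairwise_dists(X)[mask]
```

Here `jl_embed` is an illustrative helper name; the point is that the embedding is data-oblivious, which is exactly what makes it useful for nearest-neighbor pipelines, and what limits it when the data has structure the random projection cannot see.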