

Timo Betcke
Solving Planar Eigenvalue Problems with the Generalized
Singular Value Decomposition
One approach to solve the Laplace eigenvalue problem on a planar
domain with zero Dirichlet boundary conditions is to approximate the
eigenfunctions from a space of particular solutions that satisfy the
eigenvalue equation but not necessarily the boundary conditions. This
Method of Particular Solutions was made famous by Fox, Henrici, and
Moler, who used it in 1967 to compute the first 10 eigenvalues
of the L-shaped domain. Unfortunately, their method becomes unstable
for more complicated domains, including domains with several corner
singularities.
In this talk we show how the reformulation of the Method of Particular
Solutions as a generalized singular value problem leads to a stable
method to compute eigenvalues and eigenfunctions on a variety of
domains. The high accuracy and stability of this approach are discussed,
and a geometric interpretation of the generalized singular values in
connection with the Laplace eigenvalue problem is given.
This is joint work with L. N. Trefethen.

Wolf-Jürgen Beyn
Continuation and Large-Scale Eigenvalue Problems
Spatial discretizations of time-dependent partial differential
equations typically lead to dynamical systems of large dimension
with vector fields that have sparse Jacobians. When following
branches of steady states for such systems the relevant stability
information can be read off from the eigenvalues of the Jacobians.
Since one is usually interested in only a small critical subset
of the spectrum, this leads to the problem of continuation
of low-dimensional invariant subspaces of parametrized large
matrices.
In the talk we give an overview of approaches and algorithms
for solving this problem. Special issues discussed are:
- Locate the small spectral subset and control the dimension
  of the invariant subspace during continuation.
- Make efficient use of (iterative or direct) black-box solvers
  for the linear systems involved.
- Synchronize the continuation of invariant subspaces
  with that of the original bifurcation problem.
We will also discuss how these methods can be extended to
generalized and quadratic eigenvalue problems and
show some examples from reaction-diffusion systems.

Matthias Bollhöfer
Eigenvalue Computation Methods for the Anderson Model
of Localization
We propose efficient preconditioning algorithms for an eigenvalue
problem arising in quantum physics, namely the computation of a few
interior eigenvalues and their associated eigenvectors for the
large-scale, sparse, real symmetric indefinite matrices of the
Anderson model of localization. Our preconditioning approaches for
the associated shift-and-invert systems are based on maximum
weighted matchings and algebraic multilevel incomplete $LDL^T$
factorizations.
Our numerical examples indicate that recent algebraic multilevel
preconditioning solvers can accelerate the computation of
the underlying large-scale eigenvalue problem by several orders
of magnitude compared with previous approaches.
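The shift-and-invert idea underlying such solvers can be sketched on a toy one-dimensional Anderson-type Hamiltonian. This is a hypothetical small instance for illustration only; the multilevel $LDL^T$ preconditioned solves of the talk are replaced by a dense direct solve:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# toy 1D Anderson-type Hamiltonian: hopping -1, random on-site disorder
A = (np.diag(rng.uniform(-2.0, 2.0, n))
     - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))

sigma = 0.0                       # target: interior eigenvalues near the shift
B = A - sigma * np.eye(n)
x = rng.standard_normal(n)
for _ in range(300):              # power iteration on (A - sigma I)^{-1}
    x = np.linalg.solve(B, x)     # in practice: a preconditioned iterative solve
    x = x / np.linalg.norm(x)
lam = x @ (A @ x)                 # Rayleigh quotient: eigenvalue nearest sigma
```

The spectral transformation maps the eigenvalue of A closest to sigma to the dominant eigenvalue of the inverted operator, which is why interior eigenvalues become reachable; for symmetric A, the distance from lam to the true spectrum is bounded by the residual norm, giving a cheap accuracy check.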

Stefan Borovac
Graph-Based Reordering and the Convergence of Multiplicative
Schwarz Methods for Singular Systems
It is known that the convergence of multiplicative
Schwarz methods is not guaranteed for singular
systems. We present a graph-based method that allows
one to find a partition of the underlying system and
an ordering of the operators such that convergence will
be achieved.
These reorderings are applicable to irreducible
singular M-matrices and stochastic matrices.

Angelika Bunse-Gerstner
Preconditioned LSQR for Discrete Ill-Posed Problems
A modified version of a two-level preconditioned iterative
method is presented. We propose to apply the two-level Schur
complement CG to the unregularized problem and to introduce
the regularization process only in the solution of one of the
linear systems produced by the algorithm. The modified
algorithm is substantially cheaper, and numerical examples
show similar approximations in both cases. A novel basis
for the coarse subspace is incorporated in the analysis.
Numerical experiments for some test problems and a
practical scattering problem are presented.

Jurjen Duintjer Tebbens
Preconditioning of Sequences of Large, Sparse, Nonsymmetric
Linear Systems
The talk addresses the following problem: Given a sequence of linear
systems and a preconditioner for the initial system, how
can we update this preconditioner in order to obtain both accurate
and cheaply computable and applicable preconditioners for the
systems to follow? We present some options to achieve this for
sequences of large, sparse, nonsymmetric linear algebraic
equations, such as those arising from nonlinear discretized
partial differential equations.
This is joint work with Miroslav Tuma.

Kolja Elßel
Modal Reduction for Nonlinear Eigenvalue Problems
The modal condensation method AMLS has been successfully applied
to large sparse generalized eigenvalue problems originating
from vibrational analysis in structural mechanics. We have
modified and applied this method to a number of different
nonlinear eigenvalue problems. In this presentation we give
a brief overview of the techniques we used during our studies
and some numerical results.
This is joint work with Heinrich Voss.

Oliver Ernst
Arnoldi for Matrix Functions: Algorithms
We present recent work on a restarting technique for
Krylov subspace approximations of a matrix function times a
vector.
While such restarting approaches are straightforward for
Krylov methods for solving linear systems (the case
f(z) = 1/z), this is not so for general,
particularly nonrational, functions.
We derive such a general restarting scheme and
discuss its stable implementation.

Sigrid Fischer
Solving Complex Symmetric Linear Systems:
Efficient Preconditioners for the CSYM Iterative Method
Among the solvers that use and preserve the symmetric structure
of the system are the CSYM method,
developed by Bunse-Gerstner and Stöver [1992]
and complex symmetric variants of the QMR method (CQMR).
The CSYM method has two advantages over the CQMR method:
it has the desirable minimal residual
property, and breakdowns do not occur. In this talk we present
theoretical and practical results concerning appropriate
preconditioners for the CSYM method.

Drahoslava Janovská
Fast Givens Reduction of a Quaternion-Valued Matrix
For the reduction of a quaternion-valued matrix, a product
of Givens rotations for each column has to be computed.
The number of real operations necessary to perform this
reduction is relatively large. One way to reduce
this number is to apply the fast Givens algorithm.

Vladimír Janovský
On Computing Analytic Singular Value Decomposition
The aim is to trace selected singular values and the relevant
left/right singular vectors. A path-following predictor-corrector
technique is considered: The problem is formulated as a
continuation of a set of overdetermined nonlinear equations
that depend on a parameter. Singularities on the path will
be investigated.

Daniel Kressner
Structured Condition Numbers and Iterative Refinement for Invariant Subspaces
Based on orthogonal decompositions of Sylvester operators, a
perturbation analysis for invariant subspaces of structured
matrices and deflating subspaces of matrix pencils is presented.
This analysis leads to simple expressions for the
corresponding structured condition
numbers and structure-preserving correction methods.
The structures
that can be covered by this analysis include Hamiltonian,
product, orthogonal, symplectic and palindromic eigenvalue
problems. This is joint work with Ralph Byers.

Peter Langer
The Jacobi-Davidson Method for Calculating Rovibronic
Energy Levels of Triatomic Molecules
The numerical solution of the time-independent
Schrödinger equation has a prominent role to
play in quantum chemistry. The eigenvalues of
the (molecular) Hamiltonian operator correspond to
the energy levels of the molecule under consideration.
The usual approach is to consider
a finite-dimensional matrix eigenvalue problem instead
(obtained by means of Rayleigh-Ritz projection). Since the
dimension of the arising matrices can be quite large
(n > 60000) and since
one is typically only interested in a moderate
number k << n of the lowest eigenvalues, iterative
eigensolvers, especially the Jacobi-Davidson method,
seem to be a natural choice to address the problem.
In this talk we will show how the Jacobi-Davidson method
can successfully be applied in this context (including aspects
such as effective matrix-vector multiplication, suitable
preconditioners and information extraction).

Christian Mehl
Three Methods for the Solution of Palindromic Eigenvalue Problems
This talk is canceled.

Klaus Neymeyr
Preconditioning in Eigenvalue Computations
The topic of this survey talk is the numerical solution of
extremely large and ill-conditioned eigenvalue problems by
means of preconditioned iterations.
For these eigenproblems, which derive from the finite element
discretization of selfadjoint elliptic partial differential
operators, we consider ``matrix-free''
iterations. Typically, neither the discretization matrix nor the mass
matrix is needed explicitly; only matrix-vector
products have to be provided.
In many instances preconditioning for the eigenvalue problem
is used in combination with inner-outer iterations. The outer
iteration is a classical shift-and-invert
scheme and the inner loop is a preconditioned linear system
solver. Even more efficient are preconditioned
Krylov-type iterations in a form adapted
to the eigenvalue problem. These methods take advantage of the
RayleighRitz procedure as an effective minimizer.
In the talk several iterations are systematically derived by
comparing the boundary value problem with the eigenvalue problem.
The best preconditioned solvers for the eigenvalue problem can
reach an effectiveness comparable to that known from
preconditioned conjugate gradient iterations.
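One of the simplest preconditioned eigenvalue iterations of this type, preconditioned inverse iteration (PINVIT), can be sketched as follows. The dense toy problem and the Jacobi preconditioner are illustrative assumptions, not choices from the talk:

```python
import numpy as np

def pinvit(A, M, B_inv, x, tol=1e-8, maxit=500):
    """Preconditioned inverse iteration for the smallest eigenpair of
    A x = lambda M x, where B_inv applies an approximate inverse of A."""
    x = x / np.sqrt(x @ (M @ x))          # M-normalize
    for _ in range(maxit):
        mu = x @ (A @ x)                  # Rayleigh quotient (x is M-normalized)
        r = A @ x - mu * (M @ x)          # eigenvalue residual
        if np.linalg.norm(r) < tol:
            break
        x = x - B_inv(r)                  # preconditioned correction
        x = x / np.sqrt(x @ (M @ x))
    return mu, x

# toy problem: A = diag(1,...,100), M = I, Jacobi preconditioner
n = 100
d = np.arange(1.0, n + 1)
A, M = np.diag(d), np.eye(n)
mu, x = pinvit(A, M, lambda r: r / d, np.ones(n))
```

Only products with A and M and one application of the preconditioner are needed per step, which is the matrix-free setting described above; with an exact preconditioner the scheme reduces to classical inverse iteration.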

Kathrin Schreiber
A New Half-Step Jacobi-Davidson Like Update Formula of Order 2.62
for Nonlinear Eigenvalue Problems
Starting from a bifurcation-point based, two-sided
Jacobi-Davidson-like method, we introduce a new version for nonlinear
eigenvalue problems T(μ)x = 0.
The new algorithm involves solving an extra Rayleigh-quotient
equation before solving the dual equation with new updates. It can be
shown that the R-order of convergence increases to
τ = (3+√5)/2 ≈ 2.618.
Numerical results are presented.
This is joint work with Hubert Schwetlick.

Christian Schröder
First Steps towards the Solution of the Palindromic Eigenvalue Problem
A palindromic eigenvalue problem is the generalized eigenvalue
problem for the pencil (A,B), where B is the transpose of A.
Thus reversing the order of A and B (and transposing) results
in the same problem. This explains the term palindromic.
This simple structure is the reason for a remarkable
symmetry property of the spectrum: eigenvalues occur in pairs
(λ, 1/λ). Hence, the desire arises
to preserve the structure numerically. It is shown that this desire
is not easily fulfilled. Furthermore, some early results are presented
that circumvent these problems.
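The reciprocal pairing of the spectrum can be checked numerically on a small random example, computing the eigenvalues of A x = λ Aᵀ x via the equivalent standard problem:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))

# eigenvalues of the palindromic pencil A - lambda * A^T,
# via the equivalent standard problem (A^T)^{-1} A x = lambda x
lam = np.linalg.eigvals(np.linalg.solve(A.T, A))

# transposing the pencil, (A - lambda A^T)^T = -lambda (A - (1/lambda) A^T),
# shows that whenever lambda is an eigenvalue, so is 1/lambda
```

Forming (Aᵀ)⁻¹A destroys the structure, which is exactly why structure-preserving algorithms, the subject of this talk, are wanted.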

Constantin Popa
Approximate Orthogonalization for Discrete Regularization
of First Kind Integral Equations
In 1970 S. Kovarik proposed and analysed some
algorithms for approximate orthogonalization of finite subsets of
linearly independent vectors from a given Hilbert space. In the
present paper we present a modified Kovarik-like algorithm for
the numerical solution of arbitrary symmetric least-squares
problems. Moreover, based on a singular value decomposition of the
problem matrix, we derive some regularization properties, which we
apply in the numerical solution of collocation discretizations of some
classes of first kind integral equations.

Alastair Spence
Inexact Inverse Iteration: Error Analysis,
Preconditioning and Experiments
In this talk we discuss inexact inverse iteration for solving
the generalised eigenvalue problem. We present a
convergence theory based on Newton's method, and hence obtain
convergence rates for various versions of inexact inverse iteration.
We also discuss preconditioned iterative solves and show how
to ``tune'' the preconditioner to the eigenvalue problem.
An analysis of the effect of tuning is given for the symmetric case.
Numerical examples are given to illustrate the theoretical results.
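A minimal sketch of the basic idea follows: a fixed shift, with the inner systems solved only approximately by CG at a tolerance tied to the current eigenvalue residual (one common strategy). The toy diagonal problem is an assumption for illustration, and no tuned preconditioner is used here:

```python
import numpy as np

def cg_solve(B, b, rel_tol, maxiter=None):
    """Plain conjugate gradients, stopped at a relative residual rel_tol."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    nb = np.linalg.norm(b)
    for _ in range(maxiter or 10 * len(b)):
        if np.sqrt(rs) <= rel_tol * nb:
            break
        Bp = B @ p
        alpha = rs / (p @ Bp)
        x = x + alpha * p
        r = r - alpha * Bp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_inverse_iteration(A, M, sigma, x, maxit=100):
    """Inexact inverse iteration for A x = lambda M x with fixed shift sigma.
    The shifted systems are solved inexactly, with an inner tolerance
    proportional to the current eigenvalue residual."""
    B = A - sigma * M
    x = x / np.linalg.norm(x)
    lam = x @ (A @ x) / (x @ (M @ x))
    for _ in range(maxit):
        res = A @ x - lam * (M @ x)
        if np.linalg.norm(res) < 1e-10:
            break
        tau = min(1e-2, np.linalg.norm(res))   # inner tolerance shrinks
        y = cg_solve(B, M @ x, tau)            # inexact solve of (A - sigma M) y = M x
        x = y / np.linalg.norm(y)
        lam = x @ (A @ x) / (x @ (M @ x))
    return lam, x

# toy generalized problem: A = diag(1,...,20), M = I, eigenvalue nearest 0.5 is 1
n = 20
A = np.diag(np.arange(1.0, n + 1))
M = np.eye(n)
lam, x = inexact_inverse_iteration(A, M, sigma=0.5, x=np.ones(n))
```

Tightening the inner tolerance as the outer iteration converges is what preserves the convergence rate of exact inverse iteration; the convergence theory via Newton's method mentioned in the abstract makes this precise.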

Martin Stoll
Locking and Purging for the Hamiltonian Lanczos Method
In 1997 Benner and Fassbender presented an implicitly restarted
symplectic Lanczos algorithm for the Hamiltonian eigenvalue
problem. We enhance this algorithm by a purging and locking
strategy in order to improve its convergence properties.
For this purpose, a Krylov-Schur-like method based on the
symplectic Lanczos process and the SR algorithm was developed.
We demonstrate its efficiency for several eigenproblems,
including quadratic eigenproblems with Hamiltonian spectral
symmetry.

Zdenek Strakos
Error Estimation in Preconditioned Conjugate Gradients
In practical problems, iterative methods
can hardly be used without some acceleration of convergence,
commonly called preconditioning, which is typically achieved
by incorporation of some (incomplete or modified) direct
algorithm as a part of the iteration.
The effectiveness of preconditioned iterative methods increases
with the possibility of stopping the iteration when the desired
accuracy is reached. This requires, however, incorporating a
proper measure of the achieved accuracy as part of the computation.
The goal of this talk, based on a paper coauthored with
Petr Tichy, is to describe a simple and numerically reliable
estimation of the size of the error in the preconditioned
conjugate gradient method.
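The flavour of such estimates can be sketched with the Hestenes-Stiefel-type lower bound on the A-norm of the error with delay d; this unpreconditioned illustration is an assumption of this sketch, while the talk treats the preconditioned case:

```python
import numpy as np

def cg_with_error_bound(A, b, d=4, maxit=400, tol=1e-12):
    """Conjugate gradients, also returning the lower bound
    ||x - x_j||_A^2 >= sum_{i=j}^{j+d-1} gamma_i ||r_i||^2
    on the squared A-norm of the error at step j (delay d)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    gammas, rnorms2, iterates = [], [], []
    for _ in range(maxit):
        Ap = A @ p
        gamma = rs / (p @ Ap)
        gammas.append(gamma)
        rnorms2.append(rs)
        iterates.append(x.copy())
        x = x + gamma * p
        r = r - gamma * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    bounds = [sum(g * s for g, s in zip(gammas[j:j + d], rnorms2[j:j + d]))
              for j in range(len(gammas) - d)]
    return x, iterates, bounds

# toy SPD system
n = 50
A = np.diag(np.linspace(1.0, 100.0, n))
b = np.ones(n)
x, iterates, bounds = cg_with_error_bound(A, b)
```

The quantities gamma_i and ||r_i||^2 are already computed inside CG, so the estimate is essentially free; choosing the delay d trades sharpness against latency of the estimate.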

Tomasz Szulc
Some Properties of Information Matrices Related to Optimal Designs
In the theory of experimental designs, the optimality of designs is
often considered. Some optimality conditions are based on eigenvalues of the
information matrices of such designs. We are interested in determining the
structure of binary designs which are E-optimal over the class of designs
with the same number of treatments, blocks, and experimental units (recall
that a design is E-optimal if the smallest nonzero eigenvalue of its
information matrix is the largest among all information matrices
of the considered designs). In this context we study a class of n-by-n integer
matrices with constant diagonal entries equal to 1, off-diagonal
entries belonging to the set {-1, 0, 1, ..., n-1}, and with zero row and
column sums. We identify matrices minimizing the spectral norm in the
set under consideration. Moreover, some spectral properties of Schur
complements in an information matrix are also discussed.

Elisabeth Ullmann
On Solving Large Linear Systems Arising in the Stochastic
Finite Element Method
The discretization of elliptic boundary value problems with
random data by the stochastic finite element method (SFEM)
requires the solution of a linear system in many unknowns.
Using appropriate stochastic shape functions, it is possible
to decouple this system into a sequence of independent lower
dimensional problems.
Recently de Sturler et al. proposed two algorithms, modifications
of GCROT and GCRO-DR, that recycle Krylov
subspaces when solving a sequence of linear systems in order
to reduce the total cost of the solution process.
We present the results of numerical experiments when solving
such decoupled systems by means of Krylov subspace recycling.

Paul R. Willems
Bidiagonal SVD using Multiple Relatively Robust Representations
Based on so-called coupling relations developed by
Grosser and Lang, we will show how the SVD of a real
bidiagonal matrix can be computed in a stable way and in
quadratic time.
To this end, concepts
from the MRRR algorithm ("Holy Grail", Dhillon/Parlett) for the
symmetric tridiagonal eigenproblem are applied to the
normal equations and the Golub-Kahan form of the original
bidiagonal matrix.
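The Golub-Kahan form mentioned above couples the SVD of a bidiagonal B to a symmetric tridiagonal eigenproblem with zero diagonal, which can be verified directly on a small fixed example:

```python
import numpy as np

# Golub-Kahan form: for upper bidiagonal B with diagonal a and
# superdiagonal b, the symmetric tridiagonal matrix with zero diagonal
# and off-diagonal entries a1, b1, a2, b2, ... has eigenvalues +/- sigma_i(B).
a = np.array([3.0, 2.0, 1.0])            # diagonal of B
b = np.array([0.5, 0.4])                 # superdiagonal of B
B = np.diag(a) + np.diag(b, 1)

off = np.empty(2 * len(a) - 1)
off[0::2] = a                            # interleave a and b
off[1::2] = b
T = np.diag(off, 1) + np.diag(off, -1)   # Golub-Kahan tridiagonal

sv = np.linalg.svd(B, compute_uv=False)
ev = np.linalg.eigvalsh(T)
```

T is the perfect-shuffle permutation of the block matrix [[0, Bᵀ], [B, 0]], which is why tridiagonal eigensolvers such as MRRR become applicable to the bidiagonal SVD.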

Krystyna Ziętak
Approximation by Matrices with Prescribed Spectrum
In the talk we present the properties of the approximation
of a matrix by matrices whose spectrum lies in a strip, and we show how
Higham's algorithm for computing a nearest symmetric positive
semidefinite matrix can be adapted to solve this problem with respect
to the spectral norm. The behaviour of the algorithm is illustrated by
numerical tests.
This is joint work with Beata Laszkiewicz.
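For intuition, the symmetric, Frobenius-norm analogue of this construction is simple: diagonalize and clip the eigenvalues to the strip. This is only an illustrative special case under those assumptions, not the spectral-norm algorithm of the talk:

```python
import numpy as np

def nearest_sym_spectrum_in_strip(A, lo, hi):
    """Frobenius-norm nearest symmetric matrix whose eigenvalues lie in
    [lo, hi]: clip the eigenvalues of the symmetric input A."""
    w, Q = np.linalg.eigh(A)
    return (Q * np.clip(w, lo, hi)) @ Q.T

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8)); A = (A + A.T) / 2
X = nearest_sym_spectrum_in_strip(A, -1.0, 1.0)
```

The construction mirrors Higham's nearest positive semidefinite matrix, where the strip is [0, ∞); the spectral-norm and nonsymmetric cases treated in the talk require genuinely more work.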

