

David Albers

Max Planck Institute for Mathematics in the Sciences
Inselstrasse 22
D-04103 Leipzig
Germany

Phone: ++49 (0) 341 9959 536
Fax: ++49 (0) 341 9959 555
Office: A 09

Email: albers(at)mis.mpg.de

My research has two particular areas of focus, dynamical systems and learning theory; very brief summaries of these interests are given below. A more comprehensive account of my current research can be found here (pdf version); an even more detailed (and speculative) research program is located here. Both a list of publications and my CV can be found at the respective hyperlinks in this sentence and at the top of this page.

I have been formally associated with (i.e., held a position at) the following institutions: the University of Wisconsin - Madison (physics and mathematics), the Santa Fe Institute, the Center for Computational Science and Engineering at the University of California - Davis, and the Max Planck Institute for Mathematics in the Sciences.

Areas of interest: dynamical systems (both abstract and computational); learning theory; game theory; ergodic theory; global analysis; random matrix theory; mathematical finance; time-series analysis; computational ecology; cellular automata; random dynamical systems; computational differential geometry; information geometry.

Dynamical systems:

My work in dynamical systems consists largely of statistical studies of the qualitative behavior of high-dimensional dynamical systems. In particular, I am interested in understanding the geometric mechanisms that yield persistent, chaotic dynamics. Such an understanding requires insight both into the geometric differences between low- and high-entropy high-dimensional dynamical systems and into the transitions between the zero-, low-, and high-entropy regimes. I am therefore focusing on several specific topics: the genericity of the Ruelle-Takens route to chaos; observations in numerical data and their implications for the stability conjecture of Palis and Smale; observation of characteristics of the more modern version of the stability conjecture, the Pugh-Shub conjecture; and the persistence of chaotic dynamics under parameter variation, including the existence of Milnor attractors and the windows conjecture of Barreto. The family of mappings I use for many of these investigations consists of scalar, feedforward neural networks.
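To make the class of maps concrete, here is a minimal sketch (in Python) of a scalar, time-delayed feedforward neural network iterated as a discrete-time dynamical system. The number of hidden units, the embedding dimension, the weight scaling s, and the random-weight initialization are illustrative assumptions, not the exact parameterization used in my papers.

    import numpy as np

    # Minimal sketch: a scalar, feedforward neural network iterated as a
    # time-delay map x_t = f(x_{t-1}, ..., x_{t-d}).  The sizes, the weight
    # scaling s, and the random initialization are illustrative choices.
    rng = np.random.default_rng(0)
    n, d, s = 32, 8, 4.0        # hidden units, time delays, weight scaling

    beta = rng.normal(size=n)           # output weights
    w = rng.normal(size=(n, d + 1))     # input weights; column 0 is the bias

    def f(history):
        """One step of the map; history = (x_{t-1}, ..., x_{t-d})."""
        return float(beta @ np.tanh(s * (w[:, 0] + w[:, 1:] @ history)))

    # Iterate from a random initial history and collect a scalar time series.
    x = list(rng.uniform(-1.0, 1.0, size=d))
    for _ in range(10000):
        x.append(f(np.array(x[-1:-d - 1:-1])))
    series = np.array(x[d:])
    print(series[-5:])

Varying the scaling parameter s (or the number of units and delays) and looking at the resulting time series is the kind of parameter-variation experiment referred to above.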

Learning theory:

My original interest in learning theory stems from several applied problems in dynamical systems (e.g., attractor and dynamics reconstruction), as well as from an effort to reach a geometric and dynamical understanding of learning algorithms. To this end I am studying, in a game-theoretic framework, the dynamics, information capacities, and relative success of different learning schemes. Examples of learning schemes under consideration are reinforcement learning, simulated annealing, neural networks, and epsilon-machine learning. There are many problems I intend to address within this framework; a partial list includes: the ecology of agents with different learning schemes in a multi-agent dynamical system, a geometric understanding of the dynamics of such multi-agent dynamical systems, and a more fundamental understanding of how information is stored and processed within various learning schemes. This approach to learning theory is carried out within a multi-agent, complex-systems construction.
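As a toy illustration of studying a learning scheme in a game-theoretic framework, the sketch below pits two simple reinforcement-learning agents (value updates with softmax action selection) against each other in a repeated 2x2 game. The choice of game (matching pennies), the learning rate, and the temperature are assumptions made for illustration only, not a specific model from this research program.

    import numpy as np

    # Two agents with a simple reinforcement-learning rule (value estimates
    # plus softmax action selection) playing a repeated 2x2 zero-sum game.
    rng = np.random.default_rng(1)

    payoff1 = np.array([[ 1.0, -1.0],
                        [-1.0,  1.0]])   # matching pennies, player 1
    payoff2 = -payoff1                   # zero-sum: player 2 gets the negative

    alpha, temp = 0.1, 0.5               # learning rate, softmax temperature
    Q1, Q2 = np.zeros(2), np.zeros(2)    # action values for each player

    def softmax(q, t):
        z = np.exp(q / t)
        return z / z.sum()

    for step in range(5000):
        a1 = rng.choice(2, p=softmax(Q1, temp))
        a2 = rng.choice(2, p=softmax(Q2, temp))
        # Each agent moves the value of the action it played toward its payoff.
        Q1[a1] += alpha * (payoff1[a1, a2] - Q1[a1])
        Q2[a2] += alpha * (payoff2[a1, a2] - Q2[a2])

    print("player 1 mixed strategy:", softmax(Q1, temp))
    print("player 2 mixed strategy:", softmax(Q2, temp))

Tracking the joint trajectory of the two mixed strategies over time, rather than just the endpoint, is what turns such a setup into a (multi-agent) dynamical system to be studied geometrically.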

Here is some general info for my sake and yours: Numerical Lyapunov Exponent Calculation, Computation and Debugging resources
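For reference, the following is a minimal sketch of the standard QR (Benettin-style) reorthonormalization method for estimating a Lyapunov spectrum numerically, applied to the Henon map as a stand-in system. It is a generic illustration of the technique, not the specific routines linked above.

    import numpy as np

    # Standard QR method for a Lyapunov spectrum: evolve an orthonormal frame
    # with the tangent map and re-orthonormalize each step, accumulating the
    # logs of the diagonal of R.  The Henon map is used as an example system.
    a, b = 1.4, 0.3

    def step(x, y):
        return 1.0 - a * x * x + y, b * x

    def jacobian(x, y):
        return np.array([[-2.0 * a * x, 1.0],
                         [b,            0.0]])

    x, y = 0.1, 0.1
    for _ in range(1000):            # discard the transient
        x, y = step(x, y)

    Q = np.eye(2)
    log_sums = np.zeros(2)
    n_steps = 100000
    for _ in range(n_steps):
        J = jacobian(x, y)           # tangent map at the current point
        x, y = step(x, y)
        Q, R = np.linalg.qr(J @ Q)   # re-orthonormalize the evolved frame
        log_sums += np.log(np.abs(np.diag(R)))

    print("Lyapunov exponents:", log_sums / n_steps)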