Collins Conference Room
Working Group

All day
Our campus is closed to the public for this event.

“Blackbox optimization” refers to the general problem of how to optimize a quantity whose behavior can only be sampled rather than evaluated in closed form. Perhaps the most famous algorithms for blackbox optimization are genetic algorithms and simulated annealing.
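As a concrete illustration, here is a minimal simulated-annealing sketch in Python. The quadratic objective, noise level, step size, and cooling schedule are all invented for the example; the only thing the optimizer sees is the sampler.

```python
import math
import random

random.seed(0)

def simulated_annealing(sample, x0, steps=10_000, t0=1.0, cooling=0.999):
    """Minimize a blackbox quantity that can only be sampled.
    `sample(x)` returns one (possibly noisy) evaluation at x."""
    x, fx = x0, sample(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        x_new = x + random.gauss(0, 0.1)      # local random perturbation
        f_new = sample(x_new)
        # always accept improvements; accept uphill moves with
        # Boltzmann probability exp(-delta / temperature)
        if f_new < fx or random.random() < math.exp((fx - f_new) / t):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                          # geometric cooling schedule
    return best_x, best_f

# example: minimize (x - 2)^2, observed only through noisy samples
best_x, best_f = simulated_annealing(
    lambda x: (x - 2.0) ** 2 + random.gauss(0, 0.01), x0=10.0)
```

Note that the algorithm never evaluates the objective in closed form; it only draws samples, which is exactly the setting described above.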

Often, however, one has several samplers to choose from, which vary in both their accuracy (as models of the system ultimately of interest) and their cost. One example is where the samplers are different computational simulators of the climate’s dynamics, with varying accuracy and varying cost (in terms of how fast they run), and the goal is to use those simulators optimally to find the climate parameters that give the best fit to observational data, subject to a penalty on total cost incurred. A similar problem is how best to use a set of cosmology evolution simulators with varying accuracy and run-time, with the goal of finding the cosmological parameters that give the best fit to the observed baryonic matter power spectrum, subject to constraints on total run-time.

Other kinds of multi-sampler optimization (MSO) applications involve finding optimal designs of engineered systems. Examples include how best to use a set of simulators of an exascale computer to find the optimal architecture of such a computer, and how best to use a set of condensed matter simulators together with laboratory experiments to find the material that optimizes some desired physical properties.

The central challenge of all such multi-sampler optimization problems is to select which sampler(s) to call next, and with what inputs, at each successive step of the optimization. Factors to consider include the trade-off between the costs and accuracies of the samplers, and the need to learn the statistical relationships of the samplers both with one another and with the objective function ultimately of interest. MSO is related to existing work on multi-fidelity optimization, multi-disciplinary optimization, active learning, semi-supervised machine learning, adaptive experimental design, and several other bodies of work. However, it extends substantially beyond any of them.
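As a toy illustration of this cost/accuracy trade-off (a sketch of the general idea, not any specific MSO algorithm), the following Python snippet screens candidate inputs with a cheap, noisy sampler and then spends the remaining budget refining the most promising candidates with an accurate, expensive sampler. The objective, the two samplers, and their costs and noise levels are all invented for the example.

```python
import random

random.seed(1)

def true_objective(x):                  # hidden ground truth, never called directly
    return (x - 3.0) ** 2

# Two samplers with different accuracy (noise) and cost, invented for illustration.
def cheap(x):  return true_objective(x) + random.gauss(0, 2.0)    # cost 1 per call
def exact(x):  return true_objective(x) + random.gauss(0, 0.05)   # cost 10 per call

def mso_search(candidates, budget=60):
    """Toy two-stage strategy: screen every candidate cheaply, then refine
    the best-looking ones accurately until the cost budget runs out.
    Returns (incumbent x, its accurate estimate, total cost spent)."""
    cost = 0
    screened = []
    for x in candidates:                # stage 1: cheap, noisy screening
        screened.append((cheap(x), x))
        cost += 1
    screened.sort()                     # most promising candidates first
    best_x, best_f = None, float("inf")
    for _, x in screened:               # stage 2: accurate refinement
        if cost + 10 > budget:
            break
        f = exact(x)
        cost += 10
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f, cost

candidates = [i * 0.5 for i in range(13)]   # grid 0.0, 0.5, ..., 6.0
best_x, best_f, cost = mso_search(candidates)
```

A real MSO method would go further, e.g. learning online how the cheap sampler's outputs relate statistically to the accurate one's, rather than using a fixed two-stage schedule.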

This working group will be a gathering of researchers to discuss approaches to multi-sampler optimization and plan potential collaborations for joint work on it.

SFI Host: 
David Wolpert