Abstract: We can, and should, do statistical inference on simulation models by adjusting the simulation parameters so that the values of _randomly chosen_ functions of the simulation output match the values of those same functions calculated on the data. Results from the "state-space reconstruction" or "geometry from a time series" literature in nonlinear dynamics indicate that just $2d+1$ such functions will typically suffice to identify a model with a $d$-dimensional parameter space. Results from the "random features" literature in machine learning suggest that using random functions of the data can be an efficient replacement for using optimal functions. In this talk, I'll sketch the argument and show numerical results where the new method works well on particular examples. With this approach, we can make simulation-based inference much more nearly automatic than it has been, at little or no cost in statistical efficiency.
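To make the idea concrete, here is a minimal toy sketch (not from the talk itself) of the procedure the abstract describes: pick $2d+1$ random feature functions, evaluate them on the data, then tune the simulator's parameters so the simulated feature values match. The simulator here is an assumed stand-in (i.i.d. Gaussian draws with unknown mean and standard deviation, so $d=2$), the features are arbitrary random cosine functions, and grid search replaces whatever optimizer the actual method uses; all of these choices are illustrative, not the talk's.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n, rng):
    # Toy stand-in simulator: i.i.d. Gaussian with mean theta[0], sd theta[1].
    return rng.normal(theta[0], theta[1], size=n)

d = 2            # dimension of the parameter space
k = 2 * d + 1    # number of random feature functions, per the 2d+1 heuristic

# Randomly chosen feature functions: x -> mean of cos(w*x + b) over the sample,
# with frequencies w and phases b drawn once and then held fixed.
w = rng.uniform(0.2, 1.0, size=k)
b = rng.uniform(0.0, 2 * np.pi, size=k)

def features(x):
    # k-vector of sample averages of the random cosine functions.
    return np.cos(np.outer(x, w) + b).mean(axis=0)

# "Observed" data, generated here from known true parameters for checking.
true_theta = np.array([1.0, 2.0])
data = simulate(true_theta, 2000, rng)
target = features(data)

def loss(theta):
    # Squared distance between feature values on simulations and on data.
    # Fixed seeds (common random numbers) keep the loss surface deterministic,
    # and averaging over replicates tames simulation noise.
    sims = [features(simulate(theta, 2000, np.random.default_rng(s)))
            for s in range(5)]
    return np.sum((np.mean(sims, axis=0) - target) ** 2)

# Crude grid search over (mean, sd); a real implementation would use a
# proper optimizer.
grid = [(m, s) for m in np.linspace(-1.0, 3.0, 21)
                for s in np.linspace(0.5, 4.0, 15)]
best = min(grid, key=lambda th: loss(np.array(th)))
print("estimated (mean, sd):", best)
```

Because the random cosine features estimate the characteristic function of the output at $k$ random frequencies, matching them pins down both parameters; the recovered estimate should land near the true $(1.0, 2.0)$ up to grid resolution and Monte Carlo noise.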