Ben Recht, over at his Substack arg min:
Every engineer and scientist knows there is a fundamental difference between a “simulation” and a “prediction,” but what is the root of that distinction? At the highest level, we contrast simulation against black-box modeling. Simulations are typically thought of as “transparent boxes” where we can describe the intent of each part of the model that produces a forecast.
A roboticist might think of a simulation as a computer system designed to integrate the differential equations that define basic laws of physics. For example, you predict the path the airplane takes based on physical models of lift and drag and how the plane moves under different control settings. Simple simulations based on reduced equations might suffice for some tasks. For others, we might have to rely on computational fluid dynamics to truly capture the behavior we’re after.
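The "integrate the differential equations" view can be sketched in a few lines. This is a hedged illustration, not anything from Recht's post: a point mass accelerating against quadratic drag, stepped forward with the simplest integrator (forward Euler); the mass, thrust, and drag constants are invented for the example.

```python
# A minimal sketch of "simulation as integrating differential equations":
# forward-Euler integration of a point mass with quadratic drag.
# All constants (mass, thrust, drag coefficient) are made up for illustration.

def simulate(v0=0.0, thrust=50.0, mass=10.0, drag=0.05, dt=0.01, steps=3000):
    """Integrate dv/dt = (thrust - drag * v**2) / mass with forward Euler."""
    v = v0
    trajectory = [v]
    for _ in range(steps):
        a = (thrust - drag * v * v) / mass  # net acceleration from the physics model
        v = v + dt * a                      # one Euler step
        trajectory.append(v)
    return trajectory

traj = simulate()
# The velocity approaches the terminal value sqrt(thrust / drag).
```

Every line here is "transparent box" in Recht's sense: each term has a physical intent (thrust, drag, inertia), and a better simulation would just swap in better physics or a better integrator.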
The transparent box becomes murky when systems are too complex to predict precisely. Many designers accept adding randomness to their simulations, provided they can characterize the statistical models as plausible. The dynamics of coin flipping are too hard to capture precisely, but we’re usually fine with a random number generator that produces a roughly even split of heads and tails. Noise in measurement devices often has statistics that reliably match those of Gaussian or Poisson random variables, and such stochastic processes are reasonable stand-ins for the sorts of signals we’ll encounter in the wild. Maybe you can simulate elections based on random numbers derived from current polling results.
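The stochastic stand-in can be sketched just as briefly. This is again a hedged illustration with invented numbers, not Recht's example: we skip the physics of the coin entirely and just sample, and a toy two-way "election" is driven by a hypothetical 52% poll with a 3-point Gaussian uncertainty standing in for everything we can't model.

```python
import random

# "Simulation by sampling": instead of modeling the dynamics, draw random
# numbers whose statistics look plausible. All figures below are invented.

random.seed(0)

# Coin flipping: no physics, just a random number generator.
flips = [random.random() < 0.5 for _ in range(10_000)]
heads_fraction = sum(flips) / len(flips)  # comes out close to 0.5

# A toy election driven by polling: hypothetical two-way race, polled
# support of 52% with a 3-point standard deviation.
def simulate_election(poll=0.52, poll_sd=0.03, trials=10_000):
    wins = 0
    for _ in range(trials):
        # Gaussian noise stands in for polling error and everything else
        # we cannot model explicitly.
        true_support = random.gauss(poll, poll_sd)
        if true_support > 0.5:
            wins += 1
    return wins / trials

win_prob = simulate_election()
```

Nothing inside `random.gauss` describes why a voter changes their mind; the box is opaque, but its statistics are defensible, which is the trade Recht is pointing at.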
Where do we draw the line between sampling and simulation?
More here.
