by Rishidev Chaudhuri
A dynamical system is a mathematical description of how some particular system evolves through time. These are often physical or biological systems, and dynamical systems are used to model everything from how the planets move around the sun and how earthquakes propagate to how neurons fire and how economies evolve over time. To build a dynamical system we need two things. We need some sort of abstract description of the properties of the system that will be changing; for example, if we want to understand the movements of the planets these would be their positions in space (relative to some coordinate system). We also need some rule for how these quantities evolve from moment to moment. Here, this would be a set of equations describing how the gravitational attraction between the planets causes them to accelerate in various directions.
As a simple example, imagine we are studying a group of immortal rabbits. Each year, 10% of them reproduce. If we started with 100 rabbits, in the second year we'd have 110, in the third year we'd have 121, and so on. We'd start having fractional amounts of rabbit very soon, but let's ignore that. Let's call x(n) the number of rabbits in year n. Then we can explicitly write out the rule as:
x(n+1)=1.1*x(n).
Again, this just says, “To go from year n to year n+1, take x(n) (the number of rabbits in year n) and multiply by 1.1, which is the same as adding 10%.” The rule tells us how to go from one state in time to the next one. If we started with 100 rabbits and kept applying this rule we'd get a sequence: {100, 110, 121, …}. This describes a trajectory of the system through time. “Trajectory” sounds like it should describe a path through space, and that's where the intuition comes from (see this article, for example). If we start with a different number of rabbits, we get a different trajectory (for example, starting with 90 rabbits gives us 99 rabbits at the end of the first year).
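The article doesn't include code, but the update rule is easy to turn into a short program. Here's a minimal Python sketch (the function name `trajectory` is just an illustrative choice) that repeatedly applies x(n+1) = 1.1 * x(n) to trace out a trajectory from a starting state:

```python
# Sketch of the rabbit model: repeatedly apply the update rule
# x(n+1) = 1.1 * x(n) to generate a trajectory.

def trajectory(x0, years, growth=1.1):
    """Return the list [x(0), x(1), ..., x(years)] under x(n+1) = growth * x(n)."""
    xs = [x0]
    for _ in range(years):
        xs.append(xs[-1] * growth)  # each step multiplies the current state by the growth factor
    return xs

# Starting with 100 rabbits gives (up to floating-point rounding) {100, 110, 121, ...}
print([round(x, 6) for x in trajectory(100, 3)])
```

Note that floating-point arithmetic introduces tiny rounding errors (1.1 has no exact binary representation), which is why the printed values are rounded; the fractional rabbits the article mentions show up quickly too.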
Say we start a system off in one state (here, state means number of rabbits), and wait a certain amount of time. Where will it be? One approach is to just replicate what the system would do. If we want to know what it'll do after 10 time steps, we write down the description of where it is, apply the rule to it to get the new description, and do this ten times. This can take a long time, especially if we have many time steps. More importantly, it doesn't seem to tell us anything about the deeper structure of the system. For example, how would things change if we started the system off in a different state? Do all states end up in a similar place or do they differ wildly? If we see that the state is headed in one direction, will it keep going in that direction? In some cases, we can solve the system to get a formula that just tells us how many rabbits we'd have given a starting number and a length of time. This is typically impossible; simple update rules can give rise to systems that have no such formula. But this system does have one, and the formula is
x(n)=1.1^n*x(0).
Here x(0) is the starting value. Note that this formula doesn't need to be repeatedly applied; we can just put in the number of years and the starting value. Of course, given the simplicity of the update rule, applying the formula is not too much simpler than applying the update rule. The formula tells us that all of the trajectories of this system look qualitatively similar. The number of rabbits we have keeps growing and never stops, unless we started with no rabbits, in which case we'll always have no rabbits (within the assumptions of the model, of course, so we don't get to go out and buy some).
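To see that the closed-form formula x(n) = 1.1^n * x(0) really does agree with stepping the system forward one year at a time, here's a small Python check (the names `closed_form` and `iterate` are illustrative, not from the article):

```python
# Compare the closed-form solution x(n) = growth**n * x(0)
# against step-by-step iteration of the update rule.

def closed_form(x0, n, growth=1.1):
    """Jump straight to year n without iterating."""
    return x0 * growth ** n

def iterate(x0, n, growth=1.1):
    """Apply the update rule x(n+1) = growth * x(n) a total of n times."""
    x = x0
    for _ in range(n):
        x *= growth
    return x

for n in (0, 1, 5, 10):
    print(n, closed_form(100, n), iterate(100, n))
```

The two agree up to floating-point rounding. The formula also makes the qualitative picture plain: for any positive starting value the state grows without bound, while x(0) = 0 stays at zero forever.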