The Runge-Kutta algorithm is *the* magic formula behind most of the physics simulations shown on this web site. The Runge-Kutta algorithm lets us solve a differential equation numerically (that is, approximately); it is known to be very accurate and well-behaved for a wide range of problems.

Consider the single variable problem
*x*' = *f* (*t*, *x*)

with initial condition *x*(0) = *x*_{0}. Suppose that *x*_{n} is the value of the variable at time *t*_{n}. The Runge-Kutta formula takes *x*_{n} and *t*_{n} and calculates an approximation for *x*_{n+1} a short time later, at *t*_{n}+*h*. It uses a weighted average of approximated values of *f* (*t*, *x*) at several times within the interval (*t*_{n}, *t*_{n}+*h*). The formula is given by
*x*_{n+1} = *x*_{n} + ^{h}⁄_{6} (*a* + 2 *b* + 2 *c* + *d*)

where
*a* = *f* (*t*_{n}, *x*_{n})

*b* = *f* (*t*_{n} + ^{h}⁄_{2}, *x*_{n} + ^{h}⁄_{2} *a*)

*c* = *f* (*t*_{n} + ^{h}⁄_{2}, *x*_{n} + ^{h}⁄_{2} *b*)

*d* = *f* (*t*_{n} + *h*, *x*_{n} + *h* *c*)

To run the simulation, we start with *x*_{0} and find *x*_{1} using the formula above. Then we plug in *x*_{1} to find *x*_{2} and so on.
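To make the single-variable formula concrete, here is a minimal Python sketch of one Runge-Kutta step (this is illustrative code, not the source code available on this website; the function and variable names are our own):

```python
def rk4_step(f, t, x, h):
    """One fourth-order Runge-Kutta step for x' = f(t, x)."""
    a = f(t, x)
    b = f(t + h/2, x + (h/2)*a)
    c = f(t + h/2, x + (h/2)*b)
    d = f(t + h,   x + h*c)
    return x + (h/6)*(a + 2*b + 2*c + d)

# Example: x' = x with x(0) = 1, whose exact solution is e^t.
x, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    x = rk4_step(lambda t, x: x, t, x, h)
    t += h
# After integrating to t = 1, x is very close to e = 2.71828...
```

Running the loop repeatedly is exactly the "plug in *x*_{1} to find *x*_{2} and so on" process described above.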

With multiple variables, the Runge-Kutta algorithm looks similar to the above equations, except that the variables become vectors.
## Multi-variable Runge-Kutta Algorithm

The Runge-Kutta algorithm is fairly simple, but to describe it precisely we need to develop some notation. Suppose there are *m* variables *x*_{1}, *x*_{2}, ..., *x*_{m}, each of which varies over time. For example, in the single spring simulation, *x*_{1} is position and *x*_{2} is velocity. Suppose further that there are *m* differential equations for these *m* variables
*x*_{1}' = *f*_{1}(*x*_{1}, *x*_{2}, ..., *x*_{m})

*x*_{2}' = *f*_{2}(*x*_{1}, *x*_{2}, ..., *x*_{m})

...

*x*_{m}' = *f*_{m}(*x*_{1}, *x*_{2}, ..., *x*_{m})

Notice there are no derivatives on the right-hand side of any of those equations, and there are only first derivatives on the left-hand side. These equations can be summarized in vector form as
x' = f (x)

where x = (*x*_{1}, *x*_{2}, ..., *x*_{m}) and
we allow some loose "vector of functions" concept where f = (*f*_{1}, *f*_{2}, ..., *f*_{m}). Next we label our time states x_{n}, x_{n+1}, which are separated by a time interval of length *h*. That is, x_{n} is the value of the *m* variables at time *t*_{n}, and *x*_{1,n} is the value of the first variable *x*_{1} at time *t*_{n}.
x_{n} = (*x*_{1,n}, *x*_{2,n}, ..., *x*_{m,n})

x_{n+1} = (*x*_{1,n+1}, *x*_{2,n+1}, ..., *x*_{m,n+1})

Suppose we have the state of the simulation at time *t*_{n} as x_{n}. To compute the state a short time *h* later and put the results into x_{n+1}, the Runge-Kutta algorithm does the following:
a_{n} = f(x_{n})

b_{n} = f(x_{n} + ^{h}⁄_{2} a_{n})

c_{n} = f(x_{n} + ^{h}⁄_{2} b_{n})

d_{n} = f(x_{n} + *h* c_{n})

x_{n+1} = x_{n} + ^{h}⁄_{6} (a_{n} + 2 b_{n} + 2 c_{n} + d_{n})

The new vector x_{n+1} gives you the state of the simulation after the short time *h* has elapsed. To spell out the above in more detail, we can drop the vector notation and write the Runge-Kutta algorithm like this:
*a*_{j, n} = *f*_{j}(*x*_{1,n}, *x*_{2,n}, . . . , *x*_{m,n})

*b*_{j, n} = *f*_{j}( (*x*_{1, n} + ^{h}⁄_{2} *a*_{1, n}), (*x*_{2, n} + ^{h}⁄_{2} *a*_{2, n}), . . . , (*x*_{m, n} + ^{h}⁄_{2} *a*_{m, n}) )

*c*_{j, n} = *f*_{j}( (*x*_{1,n} + ^{h}⁄_{2} *b*_{1,n}), (*x*_{2,n} + ^{h}⁄_{2} *b*_{2,n}), . . . , (*x*_{m,n} + ^{h}⁄_{2} *b*_{m,n}) )

*d*_{j, n} = *f*_{j}( (*x*_{1,n} + *h* *c*_{1,n}), (*x*_{2,n} + *h* *c*_{2,n}), . . . , (*x*_{m,n} + *h* *c*_{m,n}) )

*x*_{j, n+1} = *x*_{j, n} + ^{h}⁄_{6} (*a*_{j, n} + 2 *b*_{j, n} + 2 *c*_{j, n} + *d*_{j, n})

The above equations are applied for each variable *j* = 1, ..., *m* to get the full set of variables in the vector x_{n+1}.
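The vector version of the algorithm can be sketched in Python as follows, representing the state x as a plain list of *m* numbers (again, illustrative code rather than this website's actual source code):

```python
def rk4_step(f, x, h):
    """One RK4 step for the autonomous system x' = f(x), with x a list."""
    def plus(x, s, v):
        # componentwise x + s*v, e.g. x_n + (h/2) a_n
        return [xi + s*vi for xi, vi in zip(x, v)]
    a = f(x)
    b = f(plus(x, h/2, a))
    c = f(plus(x, h/2, b))
    d = f(plus(x, h, c))
    return [xi + (h/6)*(ai + 2*bi + 2*ci + di)
            for xi, ai, bi, ci, di in zip(x, a, b, c, d)]

# Example: a single spring with k/m = 1, where x1 = position, x2 = velocity:
#   x1' = x2,   x2' = -x1
spring = lambda x: [x[1], -x[0]]
state = [1.0, 0.0]          # start at position 1, velocity 0
for _ in range(628):        # integrate to t = 6.28, about one full period
    state = rk4_step(spring, state, 0.01)
# state is near [1.0, 0.0] again, since the motion is periodic with period 2*pi
```

Note that each of a, b, c, d is itself a list of *m* values, matching the per-component equations for *a*_{j,n}, *b*_{j,n}, *c*_{j,n}, *d*_{j,n} above.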
## Time As A Variable

Most of the simulations shown on this website *do not* have differential equations that depend explicitly on time. That is, you won't see the variable *t* on the right-hand side of the differential equations. One simulation that *does* depend on time is the chaotic driven pendulum because the driving force (which applies the twist to the pendulum) varies over time according to cos(*k* *t*).
When time appears explicitly in the differential equations we can add a time variable *t* to the state vector x. Suppose we assign this role to the variable *x*_{2}. This new variable has the extremely simple differential equation
*x*_{2}' = 1

That says that the rate of change of the variable *x*_{2} is a constant. Since we are taking derivatives with respect to time, we can also write the above equation as
*dx*_{2}⁄*dt* = 1

This integrates very easily to give *x*_{2} = *t*, which is what we wanted: time as a variable. Suppose that in the driven pendulum simulation we set up *x*_{2} in this way. Then the driving force is given by cos(*k* *x*_{2}).
You may ask: *Why have time as a variable? We already know the value of* t *at each time step!* The Runge-Kutta algorithm works by averaging the predicted rates at various points in the time interval from *t* to *t*+*h*. Therefore, when the rates (differential equations) depend explicitly on *t*, we also need to know the value of *t* at those points *within* the time interval. Putting time in as a variable makes for cleaner computer code.
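Here is a small sketch of this trick, with a hypothetical time-dependent rate *x*_{1}' = cos(*k* *t*) and the variable *x*_{2} carrying time (the value of *k* is arbitrary, chosen just for illustration):

```python
import math

def rk4_step(f, x, h):
    """One RK4 step for x' = f(x); x is a list of variables."""
    plus = lambda x, s, v: [xi + s*vi for xi, vi in zip(x, v)]
    a = f(x)
    b = f(plus(x, h/2, a))
    c = f(plus(x, h/2, b))
    d = f(plus(x, h, c))
    return [xi + (h/6)*(ai + 2*bi + 2*ci + di)
            for xi, ai, bi, ci, di in zip(x, a, b, c, d)]

# Time-dependent rate x1' = cos(k*t). With x2 carrying time, the
# autonomous system becomes:  x1' = cos(k * x2),  x2' = 1
k = 2.0
f = lambda x: [math.cos(k * x[1]), 1.0]

state = [0.0, 0.0]          # x1 = 0, time variable x2 = 0
for _ in range(100):
    state = rk4_step(f, state, 0.01)
# state[1] is now 1.0: the time variable tracks elapsed time exactly,
# and the intermediate evaluations of f automatically see the shifted
# times t + h/2 and t + h through the shifted value of x2.
```

Because *x*_{2}' = 1 exactly, the half-step and full-step shifts that Runge-Kutta applies to x also shift the time variable by *h*/2 and *h*, which is precisely what the time-dependent rate needs.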
## Time Not As A Variable

If you want to, you can avoid keeping time as an additional variable. The following is an equivalent formulation of the Runge-Kutta algorithm where time *t* is passed in as a variable to each function in f.
a_{n} = f(*t*, x_{n})

b_{n} = f(*t* + ^{h}⁄_{2}, x_{n} + ^{h}⁄_{2} a_{n})

c_{n} = f(*t* + ^{h}⁄_{2}, x_{n} + ^{h}⁄_{2} b_{n})

d_{n} = f(*t* + *h*, x_{n} + *h* c_{n})

x_{n+1} = x_{n} + ^{h}⁄_{6} (a_{n} + 2 b_{n} + 2 c_{n} + d_{n})

This is completely equivalent to the formulation where time is kept as one of the variables in x. Whether you use this formulation or the earlier (cleaner) one is entirely up to you.
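In code, this alternative looks like the following sketch, using the same hypothetical rate *x*_{1}' = cos(*k* *t*) as before, but now with *t* passed explicitly to f rather than stored in the state vector:

```python
import math

def rk4_step(f, t, x, h):
    """One RK4 step for x' = f(t, x), passing time explicitly; x is a list."""
    plus = lambda x, s, v: [xi + s*vi for xi, vi in zip(x, v)]
    a = f(t, x)
    b = f(t + h/2, plus(x, h/2, a))
    c = f(t + h/2, plus(x, h/2, b))
    d = f(t + h,   plus(x, h, c))
    return [xi + (h/6)*(ai + 2*bi + 2*ci + di)
            for xi, ai, bi, ci, di in zip(x, a, b, c, d)]

# Hypothetical time-dependent rate x1' = cos(k*t), with t as an argument.
k = 2.0
f = lambda t, x: [math.cos(k * t)]

t, state = 0.0, [0.0]
for _ in range(100):
    state = rk4_step(f, t, state, 0.01)
    t += 0.01
# state[0] approximates the integral of cos(2t) from 0 to 1, i.e. sin(2)/2
```

Notice that the shifted times *t* + *h*/2 and *t* + *h* must now be computed and passed in by hand, which is the bookkeeping that the time-as-a-variable trick avoids.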
There is also computer program source code available on this website which includes an implementation of the Runge-Kutta algorithm. A simplified version of the source code is also available.
This web page was first published April 2001.