The audio engine receives a continuous stream of position data from the video rendering server: a list of the closest visible spaceports, a list of the closest visible spaceships, the distance between the virtual camera and the Earth's surface, the time-lapse factor, and other data describing what is displayed and how.
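As an illustration only, the per-frame message from the rendering server might be modelled like this; all field and type names here are assumptions, not the actual protocol:

```python
# Hypothetical sketch of the per-frame state the audio engine receives
# from the video rendering server. Field names are illustrative guesses.
from dataclasses import dataclass
from typing import List


@dataclass
class Entity:
    ident: str
    x: float  # position relative to the virtual camera
    y: float
    z: float


@dataclass
class FrameState:
    spaceports: List[Entity]   # closest visible spaceports
    spaceships: List[Entity]   # closest visible spaceships
    camera_altitude: float     # distance from camera to the Earth's surface
    time_lapse: float          # time-lapse factor of the simulation


# Example frame: empty scene seen from 400 km up, running at 60x speed.
frame = FrameState(spaceports=[], spaceships=[],
                   camera_altitude=400_000.0, time_lapse=60.0)
```

Each incoming frame would then drive the spatialization parameters of the corresponding sound sources.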
All sounds are distributed in space via a six-channel ring of speakers, so that the sonic position of each source matches the visual position of the spaceships and spaceports. Panning is done using ambisonics plus distance filtering and a Doppler effect. The atmospheric turbulence that is highly audible on distant jet sounds is also modeled, via a chorus / comb-filter combination.
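A minimal sketch of the per-source processing described above, under simplifying assumptions (inverse-distance gain, a naive air-absorption model, and a plain feedback comb filter standing in for the chorus / comb turbulence effect); none of this is the actual engine code:

```python
# Illustrative per-source spatialization helpers (assumptions, not the
# actual sound engine): Doppler pitch ratio, distance-based attenuation
# and filtering, and a simple feedback comb filter for "turbulence".
from typing import List

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C


def doppler_factor(radial_velocity: float) -> float:
    """Pitch ratio for a source moving at the given radial velocity
    (positive = moving away from the listener, so pitch drops)."""
    return SPEED_OF_SOUND / (SPEED_OF_SOUND + radial_velocity)


def distance_gain(distance: float, ref: float = 1.0) -> float:
    """Inverse-distance amplitude attenuation, clamped inside ref."""
    return ref / max(distance, ref)


def distance_lowpass_cutoff(distance: float) -> float:
    """Toy air-absorption model: low-pass cutoff (Hz) falls with distance."""
    return 20_000.0 / (1.0 + distance / 100.0)


def comb_filter(samples: List[float], delay: int, feedback: float) -> List[float]:
    """Feedback comb filter; modulating `delay` over time would yield the
    chorus-like movement used for the turbulence effect."""
    buf = [0.0] * delay
    out = []
    for i, s in enumerate(samples):
        y = s + feedback * buf[i % delay]
        buf[i % delay] = y
        out.append(y)
    return out
```

In a real engine these parameters would feed an ambisonic panner for the six-speaker ring; the sketch only shows how distance and velocity could map to gain, filtering and pitch.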
concept / visual design: Christopher Bauder / white void
video rendering engine development: v4
sound engine development / sound design: Robert Henke