The current version [January 2014] of the Lumière live performance is not only the result of filling an existing structure with patterns and performing with these patterns, but rather of developing a framework that allows me to create these audiovisual patterns in the first place. The framework was developed in fall 2013 and got a massive rewrite after the first performance in October 2013. What I perform is an ongoing exploration of the capabilities of my framework, and I plan to expand it step by step in the future. The framework provides the equivalent of a language, a set of signs I can use to form sentences: sequences that convey meaning and are more than just a random succession of shapes. However, this also implies learning the language and figuring out what I can express with it. Technically, I am going to extend and rewrite the patterns, diving deeper into what is possible with the current software version. Also, after a few more Lumière performances, I plan to add more features to the software. This will allow me to create visual shapes and sonic results that I cannot achieve with the current version, or to extend the real-time control over these shapes. The following text provides an overview of the basic structure and technical concept behind the Lumière project.


I wanted to be able to create fast successions of very minimalist but precise shapes with the lasers, and I also had a desire to create shapes that make use of random functions, aleatoric elements, and complex movements. Precise synchronization between musical events and visual elements was desired, as well as the option to treat sound and image independently. Since the framework was meant to provide a useful interface for concerts, the ability to interact with the structure, the sounds, and the shapes in real time was also an important development goal.


The lasers are controlled with analog signals created in MaxMSP. These signals are technically the same type of signals used for audio, but they serve a different purpose and are usually not intended to be made audible. Instead, they control the movement of extremely fast and precise mirrors and the intensity of the laser beams, and by doing so allow me to draw lines, circles, complex morphing shapes, or even text and numbers.
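To illustrate how such control signals relate to drawn shapes, here is a minimal Python sketch, not the actual MaxMSP implementation: the sample rate, the coordinate scaling, and the function names are assumptions made for this example. Two audio-rate signals deflect the mirrors in x and y, while a third controls the beam intensity; tracing a circle then amounts to generating a sine and cosine pair.

```python
import math

SAMPLE_RATE = 48000  # assumed control-signal rate for this sketch

def circle_frames(freq_hz, seconds, radius=0.5):
    """Yield (x, y, intensity) control samples tracing a circle.

    x and y stand in for the mirror-deflection signals, intensity for
    the beam-brightness signal; all values are in a nominal -1..1 range.
    """
    n = int(SAMPLE_RATE * seconds)
    for i in range(n):
        phase = 2 * math.pi * freq_hz * i / SAMPLE_RATE
        yield radius * math.cos(phase), radius * math.sin(phase), 1.0

# 10 ms of a circle redrawn 100 times per second:
frames = list(circle_frames(freq_hz=100, seconds=0.01))
```

Blanking a segment of the shape would simply mean setting the intensity sample to 0.0 while the mirrors keep moving.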


The heart of the project is the "LumiereLaserPatternGenerator" (LPG) software, which consists of a series of basic shape generators. They can be seen as shape synthesizers, with parameters that can be set to define their exact behavior. Additionally, there are shape modifiers: objects that, on a global scale, allow me to apply geometric transformations or manipulate the intensity of the laser beam for additional complexity. A simple example is the 'zoom' feature, which allows me to change the size of a shape. Various aspects of the shape synthesis can be controlled in real time, for instance the speed of movements. This allows me to articulate the shapes in a musical fashion.
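The modifier idea can be sketched in a few lines of Python. This is a toy stand-in, since in the real system the transformation happens on audio-rate signals inside MaxMSP, but it shows the principle: a global operation applied to every point of a shape after generation.

```python
def zoom(points, factor):
    """Global shape modifier: scale every (x, y) point about the origin.

    A sketch of the 'zoom' feature; the real LPG operates on continuous
    control signals rather than point lists.
    """
    return [(x * factor, y * factor) for (x, y) in points]

square = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
half = zoom(square, 0.5)
# half == [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
```

Because the modifier is separate from the generators, the same zoom applies uniformly to any shape that passes through it.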

The LPG also contains all sorts of setup and alignment tools which are essential to combine the three lasers I am using into one coherent image, and adapt to different venues and projection scenarios.

Some of the unit generators are quite basic, some are highly advanced. Currently the most complex module is the text generator. I wanted to be able to use fonts as graphic elements that can be transformed in several ways, or simply used to display text nicely. I developed my own optimized vector font for this purpose, including a description language that defines how each letter is drawn. This description language allows me to add more characters later by simply adding more lines in a text file.
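The text does not specify the syntax of the glyph description language, so the format below is invented purely for illustration. The sketch assumes each line of the font file describes one character as a sequence of pen moves ('M', beam blanked) and line segments ('L', beam on), which captures the general idea of a one-line-per-letter vector font that can grow by appending lines.

```python
def parse_glyph(line):
    """Parse one line of a hypothetical glyph description language.

    Assumed format: '<char>: M x y L x y ...' where 'M' moves the
    blanked beam and 'L' draws a segment from the current pen position.
    Returns the character and a list of ((x1, y1), (x2, y2)) strokes.
    """
    char, _, body = line.partition(":")
    tokens = body.split()
    strokes, pen = [], None
    i = 0
    while i < len(tokens):
        cmd, x, y = tokens[i], float(tokens[i + 1]), float(tokens[i + 2])
        if cmd == "M":
            pen = (x, y)        # reposition without drawing
        else:                   # "L": draw a visible segment
            strokes.append((pen, (x, y)))
            pen = (x, y)
        i += 3
    return char.strip(), strokes

# A capital 'A': two diagonals plus the crossbar.
char, strokes = parse_glyph("A: M 0 0 L 0.5 1 L 1 0 M 0.2 0.4 L 0.8 0.4")
```

Adding a character to such a font really is just one more line in a text file, which matches the extensibility described above.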

The LPG acts like a synthesizer, where each incoming 'note' contains the complete set of data needed to draw a specific shape. To make the programming of shapes easier and faster, it is only necessary to tell the LPG which parameters deviate from their default values. Parameters are sent to the LPG via OSC commands and can be very basic for simple shapes. Sending 'laser 1 circle' is all it takes to draw a medium-sized circle, whilst 'laser 2 spark .3 .0 2000 text count 7 clock 4 scan 1 0 3 flip 0 0 2 ratio 1 1 .4 .1 20000 .9' does something much more dynamic and complex.
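The defaults mechanism can be sketched as follows. The parameter names ('size', 'speed', 'intensity') and default values here are assumptions for illustration; the real LPG vocabulary ('spark', 'clock', 'scan', ...) is defined inside the Max patch. The point is only that a command string carries nothing but the deviations from the defaults.

```python
# Assumed default parameter values for this sketch.
DEFAULTS = {"size": 0.5, "speed": 1.0, "intensity": 1.0}

def shape_command(laser, shape, **overrides):
    """Build an LPG-style command string, sending only non-default values."""
    tokens = [f"laser {laser} {shape}"]
    for name, value in overrides.items():
        if DEFAULTS.get(name) != value:   # defaults are omitted entirely
            tokens.append(f"{name} {value}")
    return " ".join(tokens)

shape_command(1, "circle")             # → "laser 1 circle"
shape_command(2, "circle", size=0.9)   # → "laser 2 circle size 0.9"
```

In the real system the resulting string would travel to the LPG as an OSC message over Ethernet.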


To drive the three lasers I need two computers, located near the lasers and connected to the audio computer on stage via Ethernet. An OSC handshaking connection makes sure the lasers only operate when they receive valid data from the audio computer. This is one of several layers of safety built into the system. The lasers can easily set things on fire or permanently damage eyesight if a steady beam at full intensity is emitted. The software controls the laser states and also ensures that, for example, wrong system commands ('zoom 1000') are caught safely.
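Two of these safety layers can be sketched in Python: clamping out-of-range parameters before they reach the hardware, and a handshake watchdog that blanks the beam when valid packets stop arriving. The numeric limits and the timeout are assumptions for this sketch, not the project's actual values.

```python
import time

ZOOM_RANGE = (0.0, 10.0)     # assumed safe parameter range
HANDSHAKE_TIMEOUT = 0.5      # assumed seconds without valid data before blanking

def clamp_zoom(value):
    """Catch runaway commands like 'zoom 1000' before they reach the laser."""
    lo, hi = ZOOM_RANGE
    return max(lo, min(hi, value))

class Watchdog:
    """Disable laser output if the audio computer stops sending valid data."""

    def __init__(self):
        self.last_seen = time.monotonic()

    def packet_received(self):
        """Call on every valid handshake packet from the audio computer."""
        self.last_seen = time.monotonic()

    def laser_enabled(self):
        return time.monotonic() - self.last_seen < HANDSHAKE_TIMEOUT
```

The mirrors would keep moving even when the watchdog blanks the beam, so a stuck connection never leaves a stationary full-power beam on.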

The LPG only acts as the visual synthesis engine; it does not store any 'presets' apart from the adjustment and setup info for each laser.

The actual visual data is stored in a Max4Live device, 'LaserControl', on the audio computer. This device allows me to assign a visual shape to each incoming MIDI note by typing in the parameters for the LPG. Some LPG generators also respond to MIDI notes from other MIDI channels. This makes it possible to, for example, control the intensity or position of shapes in sync with audio events.
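The note-to-shape mapping can be sketched like this. The note numbers, the command strings, and the velocity-to-intensity mapping are all assumptions invented for the example; the real 'LaserControl' device lives inside Max4Live.

```python
# Hypothetical mapping from MIDI note numbers to stored LPG command strings.
note_map = {
    60: "laser 1 circle",
    62: "laser 2 spark .3 .0 2000",
}

def on_midi_note(note, velocity):
    """Return the LPG command for a note-on, or None if unmapped.

    Mapping velocity to beam intensity is an assumption for this sketch.
    """
    if velocity == 0 or note not in note_map:
        return None
    intensity = velocity / 127.0
    return f"{note_map[note]} intensity {intensity:.2f}"

on_midi_note(60, 127)  # → "laser 1 circle intensity 1.00"
```

Because the shape data lives with the notes in the Live set, the same clip can be re-voiced visually just by editing the mapped command strings.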

During the performance, the Live set is controlled mainly via three hardware controllers: a Launchpad with a modified 'Session View' script, which allows me to jump in blocks between the different parts and uses different, more dimmed LED colors to work best in the darkness; a combination of Livid Elements modules for control of volume, sends, and synthesis parameters; and a Doepfer fader box for EQ and effect control. This setup makes it possible to play a highly improvised show without much need to touch the mouse or touchpad of the computer.

I wrote several special Max4Live devices for the Lumière project that expand the capabilities of Live to match my needs: MIDI routing devices that send MIDI from several tracks to other tracks, devices that allow switching between several synthesizers in a Live set, a convolution reverb with switchable impulse responses, another reverb that mixes convolution and algorithmic reverb, and the 'laser sound engine', a dedicated synthesizer inspired by the control signals for the lasers.


The sonic side went through several conceptual iterations. At the beginning I planned to completely decouple the sound and the visual parts and write quite conventional music for the performance. Then I started incorporating the laser control signals as part of the sonic palette, but experienced several conceptual and technical difficulties: the sound aesthetics of the laser control signals did not match the other musical elements, and the timing was unacceptable because the data had to be sent via Ethernet to a remote computer. For Lumière version 1.5, created in January 2014, I built a dedicated M4L synthesizer that behaves very similarly to the laser pattern generators but resides inside the audio computer, without the need for an external Ethernet connection and with the freedom to tweak the sound synthesis independently of improvements to the visual engine, whilst still retaining a strong sense of coherence. As a direct result, I reduced the more common musical elements to a bare minimum, mostly just a musical backbone consisting of a bass drum and some higher-pitched noisy percussion, and mainly use the laser sound generator. Textural elements are created by granulation and 'freezing' of the laser sound generator's output.

The whole setup now works as one consistent audiovisual engine, built for real-time interaction based on audiovisual patterns, with the capability of subtle to strong transformations of those patterns.