1998 Discussion about 3D motion sound in live performance for independent artist
with Mathius Shadow-Sky +Charlie Richmond + Tim Bartoo + Eero Aro + Chris Rolfe + Richard Dobson + Tolve + Dave G. Malham + Matt J. Ingalls + Frederico Pereira
SPATIAL FREQUENCY ?
NO: SPATIAL TONES
is this a case for a '3D watchdog service'?
I can only guess that demonstrations and specialized hardware/software mixing equipment for even the 2D variety of surround (for the above implementation) are not widely available.
You're right; as a composer it's not easy to find a good system for 3D moving sound! Stockhausen did it in the seventies at Osaka with I don't remember how many loudspeakers, like Xenakis in his electronic music projects. But "le rendu" (the rendering, in French), the result, is not what our imagination expects. Since 1982 I have been working with 3D moving sound, and each time it's a "bricolage": I try to use what exists, combining different systems to get the effect. Sometimes it works, sometimes it doesn't. It also depends on the architecture's acoustics and on loudspeaker positioning. Three years ago I found a hardware unit from Omnisound, the SSP 200, and I am about to try it. The machine seems interesting as an 8x8 matrix for sound spatialization in live performance.
Our 16x16 AudioBox matrix for live sound spatialization has been recently reduced in price to US$4995 and the ABControl software for Mac is specifically designed to operate this functionality. Check it out and download demo software at: http://www.theatre-sound.com
For an independent performer like me, US$4995, plus the Mac ABControl software for US$1800, is quite expensive.
Perhaps - but it is magnitudes cheaper than equivalent systems only a few years ago...
For my live performances, if I am lucky I get no more than $400; normally they pay me $200! This year, '99, no gigs! Experimental music is not as easy to sell as popular music!
But the AudioBox sounds very interesting. I don't know if you saw my past performances on my web site; can the AudioBox do that? With an additional technician? I myself am busy playing several other instruments: synth, voice, sampler, and instruments picked up with contact mics, like an architect's lamp. So controlling a piece of software at the same time would make me an acrobat!
I have some questions about your AudioBox: could I write a precise rhythmical score to control the speed and duration of trajectories and change it during the performance, like a musical instrument? Can the AudioBox's speed variation be translated into a tempo value to be read on a monitor?
We use MIDI Show Control commands, not musical MIDI commands, since MSC is an order of magnitude more precise and capable of controlling a very large number of parameters on a huge number of devices, as is typical in a large themed show... So tempo and rhythm have to be translated from 'musical' values to MSC values in order to work the way they are supposed to....
For Len Paul's Shuff diffusion, I used beat-precise crossfades down to 1/16 note (at 125 BPM). I meant to mention, also: I'll be testing gestural, live and algorithmic controls in Banff at the end of the month, as well as some tape-synching aids.
One of my projects is to create the illusion of a multiplied sound source from a single one (like a very short delay effect) but dynamically in 3D space: several copies of the same sound fly in different directions and at different speeds through the audience, and can become one at the end of their trajectories, and vice versa.
Many of the shows that use the AudioBox (and preceding systems we have made - computerised since 1985) do this sort of thing and that's why they use our systems ;-) (If I correctly understand your description...)
Another project is to generate "spatial frequency": the sound moves so fast that it generates a frequency, and these frequencies are worked as spatial chords and melodies, replayed by a MIDI keyboard or a continuous MIDI controller.
Hmmm... I'd have to see the physics on this... I think you'd have to alter the frequency during the motion to create this effect realistically, since the loudspeaker is not actually moving...
I did this project (named Shadow-Sky-Teub-System in France) by multiplying the same sound four times (in a four-loudspeaker system) in a sampler driven rhythmically by a MIDI sequencer controlling the speed and duration (rhythm) of the trajectories (the sound movement was discontinuous, not continuous). But with 8 outputs on a sampler I could only use TWO SOUNDS at the same time! Two samplers take 16 channels of the mixing console for four sounds! Can the AudioBox solve this problem by avoiding the need to multiply sounds to get the "spatial frequency"? Today I have a very small 16-channel mixing console, 8 mono + 4 stereo with 4 buses, for live performance (it doesn't sound great but it's practical!). One of my "bricolages"!
I'm not sure I fully understand what you are doing, but it is capable of full and unrestricted matrixing of sound from 16 inputs to 16 outputs... This is more flexible than most mixing consoles for moving sound - especially since it can be programmed in an unlimited fashion...
How fast can your 16x16 AudioBox matrix spatialize sound?
Commands are timed down to 1/100th of a frame - the standard minimum timing interval defined in MSC... We also have special versions that are used in electronic music for granulation effects but that's another subject... ;-)
The basic answer is that gain levels can be changed from anything to anything every 30ms.
What I call "spatial frequency" is when I spatialize one sound faster than 1/16 of a second from one loudspeaker to another. That generates a tone you can control with tempo. The same sound spatialized at different tempi generates "spatial chords." The sensation is that you are "in" the sound!
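The tempo arithmetic behind this is simple enough to sketch. The function name and figures below are my own illustration (plain Python), not taken from any of the systems discussed here:

```python
def spatial_rate_hz(bpm, hops_per_beat):
    """Speaker-to-speaker hops per second for a sequenced trajectory."""
    return bpm * hops_per_beat / 60.0

# Sixteenth-note hops (4 per beat) at 125 BPM stay well below the
# roughly 20 hops/s where motion fuses into a 'spatial tone':
print(spatial_rate_hz(125, 4))   # ~8.3 hops/s: still heard as rhythm
# Push the tempo and the same pattern crosses the fusion threshold:
print(spatial_rate_hz(320, 4))   # ~21.3 hops/s: a controllable 'tone'
```

Doubling the tempo doubles the resulting "spatial tone," so spatial intervals could in principle be sequenced exactly like pitches.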
I have been playing with full rotation of a B-format Ambisonic soundfield and have had similar experiences to yours. At certain speeds the separate rotating sounds tend to form a "solid" new soundfield. I haven't had the possibility to try high speeds, as I am using a mechanical rotation control and I am afraid of destroying the potentiometer.
Most probably an artifact of the method used, my guess would be some form of amplitude modulation (AM) caused by non-equal power panning or MIDI delays.
Amplitude panning should NOT change the frequency of the sound no matter the rate. Audio-rate (~< 1/16 second) crossfading DOES produce a percept known as spatial tremolo (first noted and so named by James Dashow). There's no change in pitch, however.
It's not the panning that changes the frequency of the sound. The frequency of the projected sound doesn't move, but the frequency of the movement creates a "new sound", because our perception cannot distinguish each isolated sound and creates a "fusion". Like in a film: beyond 24 images per second we see a continuum. I did it!
Dave G. Malham
This has been known for a long time - I reported this to an AES conference in 1987 when talking about my original (digitally controlled analog) Programmable Soundfield Controller (tho' I don't think it got into the paper) but it was well known before that. When it was first noticed is an interesting point (which is why I asked for a reference for James Dashow) but it really could only be after electronic control systems became available. The system used for Disney's Fantasia probably wasn't fast enough (the distinction between movement and tonal modulation - "fusion" - blurs at around 20-30 radians/sec, that is between seven and ten complete rotations around the listener per second, for circular movements) but the Potentiomètre d'espace (1951) and the system used for Poème Electronique (1958) might have been fast enough. I exclude Stockhausen's rotating speaker system from the list since it would have produced Doppler modulation anyway.
"but the Potentiomètre d'espace (1951)"
To disrupt sound localization? And who created Le Potentiomètre d'Espace?
"and the system used for Poème Electronique (1958) might have been fast enough."
The Edgar Varèse Poème Electronique was performed at the Brussels Universal Exhibition in the Philips Pavilion, designed by the architect Le Corbusier with Iannis Xenakis. Varèse spoke about "distribution spatiale" [...] "through 485 loudspeakers with 20 groups of amplifiers; [...] the music was recorded on a three-track tape [...]" Loudspeakers were piled up in groups along what were called "sound routes" to perform various effects: music that turns around the Pavilion, that comes from different directions, reverberation phenomena, etc. (This is my quick translation from the French of his book: Edgar Varèse, Écrits, 1983.) But he never speaks about fast sound motion in space used to disrupt sound localization.
Does anybody know who was in charge of spatializing Le Poème Electronique in this project?
"I exclude Stockhausen's rotating speaker system from the list since it would have produced Doppler modulation anyway."
I think that Stockhausen experienced something of this fast spatial sound motion phenomenon (?). Like Varèse, he had the great opportunity to "see" his music spatialized, in the huge spherical pavilion at the Osaka Universal Exhibition.
I'd recommend synthesizing AM on the source track, and coordinating the tonal changes with the spatialization (rather than relying upon by-products of the diffusion system).
Can you explain your idea?
Your term "spatial frequency" had confused me. Not being able to hear what you're describing, I'm not sure now that you ARE getting amplitude modulation.
If you are, then I suspect it's caused by either the shape of the crossfade, or the relative timing of the fade-in and fade-out.
For example, a linear crossfade varies the amplitude during the fade (conceivably anywhere from -INF dB to +3 dB). That change, occurring at an audio rate, would cause AM synthesis.
Say you're crossfading a 400 Hz sine tone [f] using number 2) every 50 milliseconds (i.e., modulating at 1/0.05, or 20 Hz [m]). The result will contain the original 400 Hz tone, PLUS two sidebands at f+m and f-m, or 420 Hz and 380 Hz respectively. The amplitude of the sidebands would be lower (-6 dB, I think?).
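The predicted sideband pair can be checked numerically. The sketch below (plain Python with NumPy, my own illustration rather than anyone's diffusion system) gates a 400 Hz sine with a 20 Hz raised cosine, a smooth stand-in for a crossfade whose summed gain dips every cycle, and reads the peaks off the spectrum:

```python
import numpy as np

sr = 8000                      # sample rate, Hz
f, m = 400.0, 20.0             # carrier and modulation frequencies
t = np.arange(sr) / sr         # one second of samples

# Sinusoidal amplitude modulation: the gain swings between 0 and 1
# twenty times a second, as a non-equal-power crossfade would.
y = np.sin(2*np.pi*f*t) * (0.5 + 0.5*np.cos(2*np.pi*m*t))

spec = np.abs(np.fft.rfft(y)) / (sr / 2)     # bins are 1 Hz apart
peaks = [int(k) for k in np.flatnonzero(spec > 0.1)]
print(peaks)                                 # [380, 400, 420]: carrier plus f±m
print(20*np.log10(spec[380] / spec[400]))    # ≈ -6 dB per sideband
```

For this particular (sinusoidal) modulator the -6 dB guess is exact: each sideband carries half the carrier's amplitude.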
I was merely suggesting you apply the AM ahead of time to the source material, and use a cleaner diffusion scheme (something with a reliable, equal power crossfade) in order to set the sidebands precisely.
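The point of the "cleaner diffusion scheme" is that an equal-power pan law keeps the summed power constant at every pan position, so the pan itself contributes no amplitude envelope. A minimal sketch (plain Python; my own illustration):

```python
import numpy as np

def equal_power_gains(pos):
    """Two-speaker pan gains for pos in [0, 1]; g1^2 + g2^2 == 1 at
    every pos, so the pan adds no amplitude modulation of its own."""
    theta = pos * np.pi / 2
    return np.cos(theta), np.sin(theta)

# Linear crossfade at midpoint: 0.5^2 + 0.5^2 = 0.5, a -3 dB power dip
# that becomes audible AM when repeated at audio rate.
# Equal-power crossfade at midpoint:
g1, g2 = equal_power_gains(0.5)
print(g1*g1 + g2*g2)    # ≈ 1.0: no dip; any sidebands come from the source
```

With the pan made neutral this way, the sidebands are set entirely by the modulation baked into the source material.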
It occurs to me just now, however, that there might be something more complex going on in the space itself owing to changes in room reflections from various speaker positions and/or slight differences in distance from each speaker. It could be decorrelating the original signal in a rough kind of way, and certainly creating a very complex AM process.
But don't ask me to explain that last part perceptually -- there's enough there for a research paper ;-) But what you're doing may be more difficult to simulate on source tracks than I had originally thought.
I'm not sure either! For a musician, amplitude modulation is a tremolo. I use the term "spatial frequency" to describe a phenomenon that is something other than the localization of sound in very fast 3D motion, and it is not a spatial amplitude modulation.
[We know that beyond approximately 20 times per second we are no longer able to locate a sound (at short distance). This phenomenon also depends on the distance between the listener and the sound source (sound travels at 340 meters per second).]
Like you said before: "Amplitude panning should NOT change the frequency of the sound no matter the rate. Audio rate (~< 1/16 second) crossfading DOES produce a percept known as spatial tremolo (first noted and so named by James Dashow). There's no change in pitch, however."
But there are perhaps two types of hardware that we are confusing: the first is the sound generator and the second is the spatializer, a VCA matrix. In the particular case of the Shadow-Sky-Teub-System I am using as sound generator a sampler driven by a MIDI sequencer; each sound is repeated four or eight times (in a four- or eight-loudspeaker system) and written rhythmically and independently in each MIDI channel. Each copy of the sound is already located in one loudspeaker. So when I accelerate the tempo, it generates one sound in 3D space that I call "spatial frequency" (the term is perhaps not appropriate) and that you cannot locate: it's everywhere! What I thought is that a VCA matrix could shape panning or crossfades as sound envelopes, and the spatial rhythm of these panning-envelopes could be written in a MIDI sequencer, to avoid multiplying the same sound in the sampler and mixing console, AND to play with the subtleties of moving sounds, to find something in between (not one fixed sound per loudspeaker). So perhaps my confusion is here: the sound envelope and the crossfade shape (or amplitude panning) are both generated by VCAs. In this case, my wish is to use the VCA matrix as a spatial envelope generator.
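The "VCA matrix as spatial envelope generator" idea, as I read it, is one mono input fanned out through per-output gain envelopes, so the sampler never has to hold duplicate copies. A rough sketch of that routing (plain Python with NumPy; the names and figures are mine):

```python
import numpy as np

def route(mono, envelopes):
    """Fan one mono signal out to N speaker feeds, each shaped by its
    own gain envelope: the matrix does the multiplying, not the sampler."""
    return np.array([mono * env for env in envelopes])

sr = 1000
t = np.arange(sr) / sr
mono = np.sin(2*np.pi*100*t)               # a single sampler voice

# Four rhythmic gate envelopes, each open for a different quarter of
# the cycle: the sound hops around four speakers, one copy at source.
envs = [((t*4).astype(int) % 4 == k).astype(float) for k in range(4)]
out = route(mono, envs)
print(out.shape)                           # (4, 1000): four feeds, one input
```

Replacing the hard gates with shaped fades (an attack/decay per hop) would give the "something in between" envelopes, and speeding up the gate pattern drives the same fusion effect without ever touching the source.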
The experience with Ourdission, first in 1983, was different: the project was to make fast projections, with continuous crossfade shapes, of live flute playing inside a big inflatable clear tube (where the audience stood). In this case you could recognize the flute sound (and in a way the flute player was in front of the audience), but the global effect was something different (depending also on the acoustics of the tube). Unfortunately there was no money to develop the Ourdission project. One of Ourdission's goals was to generate stranger sounds through its spatially moving sound.
"Say you're crossfading a 400 Hz sine tone [f] using number 2) every 50 milliseconds (i.e., modulating at 1/0.05, or 20 Hz [m]). The result will contain the original 400 Hz tone, PLUS two sidebands at f+m and f-m, or 420 Hz and 380 Hz respectively. The amplitude of the sidebands would be lower (-6 dB, I think?)." What you suggest here is that when you spatialize sound beyond 20 Hz, the 3D motion generates AM synthesis. Have you experienced it?
"It occurs to me just now, however, that there might be something more complex going on in the space itself owing to changes in room reflections from various speaker positions and/or slight differences in distance from each speaker. It could be decorrelating the original signal in a rough kind of way, and certainly creating a very complex AM process."
The result will always be "something else". This is my point!
I don't know if this is anything like the same thing, but the toy program 'abfpan' I have included in the CDP Multi-Channel Toolkit (from the CDP site below) will rotate a sound around four speakers (using basic Ambisonic horizontal encoding and decoding) at an arbitrary speed. Doing it with a steady vocal sample I have does create interesting modulation effects at high RPM (cue some amazing DSP analysis from - somebody... there must be a research grant opportunity here!). The resulting file can be played by any card supporting a four-channel WAVE device. If there really is an artistic application for this, I can of course develop the program much further! http://www.bath.ac.uk/~masjpf/CDP/CDP.htm
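For readers without abfpan to hand, the underlying first-order horizontal Ambisonic math is compact enough to sketch. This is only the common textbook encode/decode (not Richard Dobson's actual coefficients, which are in the CDP sources), in plain Python:

```python
import numpy as np

def encode_bformat(s, azimuth):
    """Horizontal-only first-order B-format encode of signal s
    arriving from 'azimuth' (radians, counterclockwise from front)."""
    w = s / np.sqrt(2)          # omnidirectional component
    x = s * np.cos(azimuth)     # front-back figure-of-eight
    y = s * np.sin(azimuth)     # left-right figure-of-eight
    return w, x, y

def decode_square(w, x, y):
    """Naive decode to four speakers on the square's diagonals."""
    return [0.5 * (np.sqrt(2)*w + x*np.cos(az) + y*np.sin(az))
            for az in np.radians([45, 135, 225, 315])]

# Sweeping the encode azimuth at 'rps' revolutions per second rotates
# the source around the square; at high rps each speaker feed is in
# effect amplitude-modulated at the rotation frequency.
sr, rps = 8000, 10.0
t = np.arange(sr) / sr
w, x, y = encode_bformat(np.sin(2*np.pi*200*t), 2*np.pi*rps*t)
feeds = decode_square(w, x, y)
```

A static source encoded at 45 degrees decodes almost entirely to the front-left speaker, which is an easy sanity check on the coefficients.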
But unfortunately I'm running on PowerMac.
Matt J. Ingalls
I'm trying to get the ASIO developer SDK from Steinberg so I can implement similar Ambisonic editing tools for PowerMac + any ASIO-supported multichannel card. If you know of an x-channel card that has a Mac driver and a free developer SDK, I would seriously consider buying the card myself and starting to work on these tools supporting that card. At the moment, however, I'm resorting to stereo-only with "fielder" (sonomatics.com); the next version will have rotation/width zoom/etc. editing features...
If I remember well, to run the CDP Multi-Channel Toolkit I need a PC (PII?) and a multi-channel sound card like the Creamware Pulsar or Frontier. My first question is: is it safe (without crashes) to run the CDP M-C Toolkit in live performance? The one piece of software I run in live performance is GRM Tools with its Doppler algorithm, but it only works really nicely in slow motion (with long sounds). I have experienced lots of problems using computers on stage! One example: because of voltage variations due to the power of the lighting system, some computers don't tolerate it! I'm really curious about your work and would like to know more about it. Richard, I'm not a technician, just an artist who knows about sound and everything, but nothing about programming computers (I hate it)!
Running any computer in live performance can be risky, but with proper surge and interference protection, the computers themselves can usually survive if they are constantly watered and fed(:-)). The CDP tools are not real-time - they just read soundfiles and write new ones - they are provided mainly to promote the possibilities of multi-channel files, WAVE-EX, etc.
However, the algorithm for the quad panner is very simple (based on Dave Malham's Csound code), so you could in principle run a Csound quad panner in real-time - the worry would be Csound crashing, mainly!
Max/MSP is another possibility, on Macintoshes (widely used in public performances, so presumably are regarded as 'safe').
I am working on some modules for the Creamware PULSAR - it will be interesting to see if that passes the live-performance test.
What kind of "sound" do you use as your basic (source) sound? Is it a short or long sound? Is it a complex sound with more than one frequency? Is it just one frequency?
I'm using short percussive sounds as sound impulses. The spectral complexity of the sound doesn't change the spectral structure of the "spatial frequency"; you just hear its "spatial tone" variations. My project (Shadow-Sky-Teub-System) works better with these percussive shapes for getting my "spatial frequency". For this "spatial frequencies" effect, the sound envelope is more important than the sound spectrum. I never tried my fast 3D moving sound with long sounds. If you do, tell me about your experience. I know that in this research there are important things about sound perception still to discover!
I use SuperCollider for the PowerMac (http://www.audiosynth.com) which is very stable - I'm very wary of using Max/MSP in a live situation (burnt fingers!).
SuperCollider has a built-in B-Format panner and I've built a simple decoder for it. I'll post it here if there are any interested SC users around.
Yes, but I heard that SuperCollider doesn't work on the PowerMac 601? "Life is not easy!"
About 3D motion sound artists, the question is: does sound spatialization hardware exist for the independent artist, in balance with today's gig budgets?
The kind of budget you are talking about is very low. We wish we could do it for hundreds of dollars not thousands, but I guess that will come...
In wave physics, "spatial frequency" refers to the number of wave cycles per unit of distance, as opposed to temporal frequency, the number of cycles of a single wave per unit of time.
back to the myster shadow-sky music space project
le LOIn [Laboratoire d'Ondes Inattendues Nomades]
the E-Motion Lab for:
MUSICAL INSTRUMENTS as BUILDINGS edifices as Sonic Architecture
the Scenophonic and Archisonic music Lab