A Max 8 patch was created as a prototype of the idea mentioned earlier. It demonstrates two ways of distorting a live video using both recorded and live audio. Firstly, it changes the colour of the live video captured by the camera using recorded audio.
The system diagram for this feature is as follows:
1. A 10-second clip is recorded into a buffer~ object using record~, while jit.grab digitises the live video from the camera.
2. An FFT is performed on the data stored inside the buffer using the third-party external vb.FFTWbuf~.
3. The transformed data are then sent to a JavaScript file, which extracts the maximum, mean, and minimum frequency values (see the sketch after this list).
4. These values are scaled to the range 85–255 to match RGB colour values.
5. The RGB values are converted into an HSL value, which jit.pwindow takes as an input.
6. As a result, the video changes colour according to the frequency content of the recorded audio.
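The JavaScript file itself is not reproduced here, but its role can be illustrated with a minimal sketch of a js object script. The scaleTo helper, the assumption that the FFT data arrive as a flat list of bin magnitudes, and the 0–1 magnitude range are illustrative assumptions rather than details of the actual patch:

```javascript
// Minimal sketch of the js object script in step 3 (assumed, not the
// author's actual file). It assumes the FFT output from vb.FFTWbuf~
// arrives as a flat list of bin magnitudes normalised to roughly 0.0-1.0.
inlets = 1;
outlets = 1;

var MAG_LO = 0.0;  // assumed lower bound of the magnitude range
var MAG_HI = 1.0;  // assumed upper bound of the magnitude range

// Hypothetical helper: map v from [inLo, inHi] to [outLo, outHi], clamped.
function scaleTo(v, inLo, inHi, outLo, outHi) {
    if (inHi === inLo) return outLo;
    var t = Math.min(Math.max((v - inLo) / (inHi - inLo), 0), 1);
    return outLo + t * (outHi - outLo);
}

// A list arriving at the inlet triggers this function in Max's js object.
function list() {
    var mags = arrayfromargs(arguments);
    var max = -Infinity, min = Infinity, sum = 0;
    for (var i = 0; i < mags.length; i++) {
        if (mags[i] > max) max = mags[i];
        if (mags[i] < min) min = mags[i];
        sum += mags[i];
    }
    var mean = sum / mags.length;
    // One colour channel per statistic, each scaled into 85-255 (step 4).
    var r = Math.round(scaleTo(max,  MAG_LO, MAG_HI, 85, 255));
    var g = Math.round(scaleTo(mean, MAG_LO, MAG_HI, 85, 255));
    var b = Math.round(scaleTo(min,  MAG_LO, MAG_HI, 85, 255));
    outlet(0, r, g, b);  // RGB triple, later converted to HSL in the patch
}
```

Clamping the lower bound at 85 rather than 0 presumably keeps each channel bright enough that the tinted video never turns completely black.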
The second feature demonstrated in this patch is an addition to the prototype that was not discussed earlier: it extracts the amplitude of the live audio received from the microphone and rescales the live video captured by the camera. As mentioned previously, a detected sound source is normally represented in the time domain, so extracting the amplitude does not require additional computation.
The system diagram for this feature is as follows:
1. ezadc~ is used to receive the live sound source from the microphone.
2. This audio is then passed through cascade~, a user-customisable series of biquad filters, to clean the sound to the user's preference.
3. The peak amplitude of this sound is
extracted using peakamp~.
4. This value is then scaled up and added to the originally defined size of the video shown in jit.pwindow (see the sketch after this list).
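In the patch this mapping would typically be built from Max's scale and arithmetic objects, but the idea can be sketched in the same JavaScript form; the base dimensions and gain factor below are illustrative assumptions:

```javascript
// Sketch of the amplitude-to-size mapping in step 4 (assumed values, not
// taken from the patch). peakamp~ reports the signal's peak amplitude at
// regular intervals.
inlets = 1;
outlets = 1;

var BASE_W = 320;  // assumed original width of the jit.pwindow
var BASE_H = 240;  // assumed original height of the jit.pwindow
var GAIN = 200;    // assumed factor for scaling the peak amplitude up

// A float arriving at the inlet (the peak amplitude, roughly 0.0-1.0 for a
// normalised signal) triggers this function in Max's js object.
function msg_float(amp) {
    var grow = Math.round(amp * GAIN);
    // New window size: the original size plus the amplitude-driven offset.
    outlet(0, BASE_W + grow, BASE_H + grow);
}
```

Since peakamp~ works directly on the time-domain signal, no transform is needed before this mapping, which matches the point made above.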
Finally, using a crossover design in Max 8, the two features are combined, and the output is demonstrated in the video above.