Behavioral abstraction is the ability of functions to adapt their behavior to the transformation environment. This environment may contain certain abstract notions, such as loudness, stretching a sound in time, etc. These notions will mean different things to different functions. For example, an oscillator should produce more periods of oscillation in order to stretch its output. An envelope, on the other hand, might only change the duration of the sustain portion of the envelope in order to stretch. Stretching a sample could mean resampling it to change its duration by the appropriate amount.
Thus, transformations in Nyquist are not simply operations on signals. For example, if I want to stretch a note, it does not make sense to compute the note first and then stretch the signal. Doing so would cause a drop in the pitch. Instead, a transformation modifies the transformation environment in which the note is computed. Think of transformations as making requests to functions. It is up to the function to carry out the request. Since the function is always in complete control, it is possible to perform transformations with "intelligence"; that is, the function can perform an appropriate transformation, such as maintaining the desired pitch and stretching only the sustain phase of an envelope to obtain a longer note.
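To make this concrete, here is a small example. By default, (osc c4) produces a one-second tone; under stretch, the oscillator produces more periods at the same pitch rather than being resampled:

; osc adapts to the stretch request: a longer tone, same pitch
(play (stretch 2.0 (osc c4)))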
The Environment
The transformation environment consists of a set of special Lisp variables.
These variables should not be read directly and should never be set
directly by the programmer. Instead, there are functions to read them, and
they are automatically set and restored by
transformation operators, which will be described below.
The transformation environment consists of the following elements. Although each element has a "standard interpretation," the designer of an instrument or the composer of a complex behavior is free to interpret the environment in any way. For example, a change in *loud* may change timbre more than amplitude, and *transpose* may be ignored by percussion instruments:
*warp*
The time transformation, encompassing time shift, stretching, and continuous time warping. *warp* is interpreted as a function from logical (local score) time to physical (global real) time. Do not access *warp* directly. Instead, use (local-to-global t) to convert from a logical (local) time to physical (global) time. Most often, you will call (local-to-global 0). Several transformation operators operate on *warp*, including at, stretch, and warp.
*loud*
The loudness applied to sounds, normally interpreted as an amplitude scale factor. Do not access *loud* directly. Instead, use (get-loud) to get the current value of *loud* and either loud or loud-abs to modify it.
*transpose*
The pitch transposition, expressed in semitones. Do not access *transpose* directly. Instead, use (get-transpose) to get the current value of *transpose* and either transpose or transpose-abs to modify it.
*sustain*
The sustain or articulation factor applied to note durations. For example, staccato might be expressed with a *sustain* of 0.5, while very legato playing might be expressed with a *sustain* of 1.2. Specifically, *sustain* stretches the duration of notes (sustain) without affecting the inter-onset time (the rhythm). Do not access *sustain* directly. Instead, use (get-sustain) to get the current value of *sustain* and either sustain or sustain-abs to modify it.
*start*
The start time of a clipping region. Note 1: unlike the elements above, *start* has a precise interpretation: no sound should be generated before *start*. This is implemented in all the low-level sound functions, so it can generally be ignored. You can read *start* directly, but use extract or extract-abs to modify it. Note 2: Due to some internal confusion between the specified starting time and the actual starting time of a signal after clipping, *start* is not fully implemented.
*stop*
The stop time of the clipping region. By analogy to *start*, no sound should be generated after this time. *start* and *stop* allow a composer to preview a small section of a work without computing it from beginning to end. You can read *stop* directly, but use extract or extract-abs to modify it. Note: Due to some internal confusion between the specified starting time and the actual starting time of a signal after clipping, *stop* is not fully implemented.
*control-srate*
The sample rate of control signals. Do not access *control-srate* directly, but use control-srate or control-srate-abs to modify it.
*sound-srate*
The sample rate of audio signals. Do not access *sound-srate* directly, but use sound-srate or sound-srate-abs to modify it.
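As an illustration of how a behavior can consult this environment, here is a minimal sketch. The name env-demo is hypothetical, and applying *transpose* by hand is only for demonstration; whether a given built-in primitive already honors a particular environment element is documented with that primitive:

; a sketch: read *transpose* with get-transpose and the start time
; implied by *warp* with local-to-global, then build a tone
(defun env-demo (pitch)
  (format t "env-demo starts at global time ~A~%" (local-to-global 0))
  (osc (+ pitch (get-transpose))))

(play (transpose 2 (at 1.0 (env-demo c4))))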
Sequential Behavior
Previous examples have shown the use of seq
, the sequential behavior
operator. We can now explain seq
in terms of transformations.
Consider the simple expression:
(play (seq (note c4 q) (note d4 i)))
The idea is to create the first note at time 0, and to start the next
note when the first one finishes. This is all accomplished by manipulating
the environment. In particular, *warp*
is modified so that what is
locally time 0 for the second note is transformed, or warped, to the logical
stop time of the first note.
One way to understand this in detail is to imagine how it might be executed: first, *warp* is set to an initial value that has no effect on time, and (note c4 q) is evaluated. A sound is returned and saved. The sound has an ending time, which in this case will be 1.0 because the duration q is 1.0. This ending time, 1.0, is used to construct a new *warp* that has the effect of shifting time by 1.0. The second note is evaluated, and will start at time 1. The sound that is returned is now added to the first sound to form a composite sound, whose duration will be 2.0. *warp* is restored to its initial value.
Notice that the semantics of seq can be expressed in terms of transformations. To generalize, the operational rule for seq is: evaluate the first behavior according to the current *warp*. Evaluate each successive behavior with *warp* modified to shift the new note's starting time to the ending time of the previous behavior. Restore *warp* to its original value and return a sound which is the sum of the results.
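As a check on this rule, consider wrapping the earlier example in a stretch. This sketch assumes the note behavior from before; the outer stretch is already part of *warp* when each note is evaluated, so the quarter note lasts 2.0 seconds and the second note is shifted to start at time 2.0:

; a sketch: seq composed with stretch; total duration is 3.0 seconds
(play (stretch 2.0 (seq (note c4 q) (note d4 i))))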
In the Nyquist implementation, audio samples are only computed when they are needed, and the second part of the seq is not evaluated until the ending time (called the logical stop time) of the first part. It is still the case that when the second part is evaluated, it will see *warp* bound to the ending time of the first part.
A language detail: Even though Nyquist defers evaluation of the second part of the seq, the expression can reference variables according to ordinary Lisp scope rules. This is because the seq captures the expression in a closure, which retains all of the variable bindings.
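The closure also captures any local variables. In the following sketch (the function name up-octave is hypothetical), the parameter p is still visible when the deferred second expression is finally evaluated:

; a sketch: the second note is evaluated at the logical stop time of
; the first, but p remains bound because seq captured it in a closure
(defun up-octave (p)
  (seq (note p q)
       (note (+ p 12) q)))

(play (up-octave c4))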
Simultaneous Behavior
Another operator is sim
, which invokes multiple behaviors at the same
time. For example,
(play (scale 0.5 (sim (note c4 q) (note d4 i))))
will play both notes starting at the same time.
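Since every behavior inside a sim sees the same environment, chords are easy to express. Here is a small sketch (the name chord is hypothetical) built from the note behavior used earlier:

; a sketch: three notes share one *warp*, so they start together and
; their samples are summed into a single sound
(defun chord (p d)
  (sim (note p d)
       (note (+ p 4) d)
       (note (+ p 7) d)))

(play (chord c4 h))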
The operational rule for sim is: evaluate each behavior at the current *warp* and return the sum of the results. The following section illustrates two concepts: first, a sound is not a behavior, and second, the sim operator and the at transformation can be used to place sounds in time.
Sounds vs. Behaviors
The following example loads a sound from a file in the current directory and stores it in a-snd
:
; load a sound
;
(setf a-snd (s-read "./demo-snd.snd" :srate 22050.0))
; play it
;
(play a-snd)

One might then be tempted to write the following:

(seq a-snd a-snd) ;WRONG!
Why is this wrong? Recall
that seq
works by modifying *warp*
, not by operating on
sounds. So, seq
will proceed by evaluating a-snd
with
different values of *warp*
. However, the result of evaluating
a-snd
(a Lisp variable) is always the same sound, regardless of the
environment; in this case, the second a-snd
should start at time
0.0
, just like the first. After the first sound ends, Nyquist is unable to "back up" to time zero, so this will in fact play two sounds in sequence, but that is a result of an implementation detail rather than correct program execution. A future version of
Nyquist might (correctly) stop and report an error when it detects that the
second sound in the sequence has a real start time that is before the
requested one.
How then do we obtain a sequence of two sounds properly? What we really need here is a behavior that transforms a given sound according to the current transformation environment. That job is performed by cue. For example, the following will behave as expected, producing a sequence of two sounds:
(seq (cue a-snd) (cue a-snd))
This example is correct because the second expression will shift the sound
stored in a-snd
to start at the end time of the first expression.
The lesson here is very important: sounds are not behaviors! Behaviors are computations that generate sounds according to the transformation environment. Once a sound has been generated, it can be stored, copied, added to other sounds, and used in many other operations, but sounds are not subject to transformations. To transform a sound, use cue, sound, or control. The differences between these operations are discussed later. For now, here is a "cue sheet" style score that plays 4 copies of a-snd:
; use sim and at to place sounds in time
;
(play (sim (at 0.0 (cue a-snd))
           (at 0.7 (cue a-snd))
           (at 1.0 (cue a-snd))
           (at 1.2 (cue a-snd))))
The At Transformation
The second concept introduced by the previous example is the at
operation, which shifts the *warp*
component of the environment. For
example,
(at 0.7 (cue a-snd))
can be explained operationally as follows: modify *warp*
by shifting
it by 0.7
and evaluate (cue a-snd)
. Return the resulting sound
after restoring *warp*
to its original value. Notice how at
is used inside a sim
construct to locate copies of a-snd
in
time. This is the standard way to represent a note-list or a cue-sheet in
Nyquist.
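The same cue-sheet style works with behaviors as well as stored sounds. Here is a small sketch of a three-note "score" using the note behavior defined earlier (the pitches and offsets are arbitrary):

; a sketch: at places each note behavior at its own start time
(play (sim (at 0.0 (note c4 q))
           (at 1.0 (note d4 q))
           (at 2.0 (note e4 h))))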
This also explains why sounds need to be cue'd in order to be shifted in time or arranged in sequence. If this were not the case, then sim would take all of its parameters (a set of sounds) and line them up to start at the same time. But (at 0.7 (cue a-snd)) is just a sound, so sim would "undo" the effect of at, making all of the sounds in the previous example start simultaneously, in spite of the at.
Since sim
respects the intrinsic starting times of sounds, a special
operation, cue
, is needed to create a new sound with a new starting
time.
Nested Transformations
Transformations can be combined using nested expressions. For example,
(sim (cue a-snd)
     (loud 6.0 (at 3.0 (cue a-snd))))
scales the amplitude as well as shifts the second entrance of a-snd
.
Transformations can also enclose an entire sim expression, applying to every behavior inside:

(loud 6.0 (sim (at 0.0 (cue a-snd))
               (at 0.7 (cue a-snd))))

Section "Transformations" describes the full set of transformations.
Defining Behaviors
Groups of behaviors can be named using defun
(we already saw this
in the definitions of note
and note-env
). Here is another example
of a behavior definition and its use. The definition has one parameter:
(defun snds (dly)
  (sim (at 0.0 (cue a-snd))
       (at 0.7 (cue a-snd))
       (at 1.0 (cue a-snd))
       (at (+ 1.2 dly) (cue a-snd))))
Transformations can also be applied to groups of behaviors:

(play (snds 0.1))
(play (loud 0.25 (stretch 0.9 (snds 0.3))))

In the last line, snds is transformed: the transformations will apply to the cue behaviors within snds. The loud transformation will scale the sounds by 0.25, and stretch will apply to the shift (at) amounts 0.0, 0.7, 1.0, and (+ 1.2 dly). The sounds themselves (copies of a-snd) will not be stretched because cue never stretches sounds.
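To see the effect on timing, consider the following sketch. Under a stretch of 2.0, the shift amounts 0.0, 0.7, 1.0, and 1.2 become 0.0, 1.4, 2.0, and 2.4, while each copy of a-snd keeps its original duration because cue does not stretch sounds:

; a sketch: entrances are warped to 0.0, 1.4, 2.0, and 2.4 seconds
(play (stretch 2.0 (snds 0.0)))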
Sample Rates
The global environment contains *sound-srate*
and
*control-srate*
, which determine the sample rates of sounds and
control signals. These can be overridden at any point by the
transformations sound-srate-abs
and control-srate-abs
; for
example,
(sound-srate-abs 44100.0 (osc c4))
will compute a tone using a 44.1 kHz sample rate.
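Similarly, control-srate-abs sets the rate at which control signals are computed. The following sketch (using the standard pwl and mult functions) computes a piecewise-linear envelope at a low control rate and multiplies it into an audio-rate tone:

; a sketch: the pwl envelope is computed at 1000 Hz; the tone uses the
; current *sound-srate*; mult multiplies the two signals
(play (mult (osc c4)
            (control-srate-abs 1000.0 (pwl 0.1 1 0.9 1 1))))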
As with other components of the environment, you should never change *sound-srate* or *control-srate* directly with setf or even let. The global environment is determined by two additional
variables: *default-sound-srate*
and *default-control-srate*
.
You can add lines like the following to your init.lsp
file to change
the default global environment:
(setf *default-sound-srate* 44100.0)
(setf *default-control-srate* 1102.5)
If you have already started Nyquist and want to change the defaults, the
following functions should be used:
(set-control-srate 1102.5)
(set-sound-srate 22050.0)
These modify the default values and reinitialize the Nyquist environment.