AMaMP
Audio Mixing and Manipulation Project

AMaMP Core Overview
So what is the AMaMP Core, what can you do with it, and why would you want to use it? This page attempts to answer these questions.

A Cross-Platform Audio Engine
The AMaMP core is an audio engine designed to work across a wide range of platforms. It works as a command line application, taking an instruction file and executing the instructions in it to manipulate audio data. Alternatively, it can be used as an engine for a frontend application, communicating with it over pipes. Because the core is small, it could also be useful in embedded situations. The modular effects system also makes it a good fit for academic situations, giving students a platform for developing effects. The hope is that AMaMP will be useful for a wide range of audio-related tasks.

Some Examples Of Using The AMaMP Core
Here are some examples of what you can do with the core, along with the input files that you'd write to do it. The syntax is pretty straightforward. Comments start with a #. Indentation is not required, but is used here to enhance readability. Finally, note that the syntax is case sensitive.

  • Convert a file between different formats (i.e. those supported by the core), sampling rates, bit-depths, etc.
    # The global chunk sets the sampling rate we work at internally.
    Global {
        BaseSamplingRate 44100
    }

    # Here comes our file input.
    FileInput in {
        Path "in.wav"
        Format WAVEPCM
    }

    # Here comes our file output. Let's make it low quality, mono.
    FileOutput out {
        Path "out.wav"
        Format WAVEPCM
        SamplingRate 22050
        Channels 1
        BitDepth 8
    }

    # Finally, we need to place the input sound at the start of
    # the file (position 0).
    Placement sound {
        Input in
        SampleOffset 0
    }
  • Take a number of audio files, place them at certain points in time, mix them down to a single file, and play the result through the speakers.
    # The global chunk sets the sampling rate we work at internally.
    Global {
        BaseSamplingRate 44100
    }

    # Let's take a couple of inputs.
    FileInput music {
        Path "greatsong.wav"
        Format WAVEPCM
    }
    FileInput voiceover {
        Path "mespeaking.wav"
        Format WAVEPCM
    }

    # Here comes our file output. The default output for
    # a WAVEPCM file is CD quality.
    FileOutput out {
        Path "out.wav"
        Format WAVEPCM
    }

    # We'll also play it to the speakers. We have to be
    # explicit about our sampling rate for this. The core
    # will automatically pick a device.
    StreamOutput speakers {
        Format WAVEPCM
        SamplingRate 44100
        Channels 2
        BitDepth 16
    }

    # Play the background music right at the start.
    Placement pmusic {
        Input music
        SampleOffset 0
        # We'll also halve its volume.
        Volume 0.5
    }

    # Put the voiceover starting two seconds into the music.
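    # (At the 44100Hz base rate, 2 seconds is 2 * 44100 = 88200 samples.)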
    Placement pvoice {
        Input voiceover
        SampleOffset 88200
    }
  • Take an input wave file and generate two mono wave files: one for the left channel, one for the right. We'll also play the audio to the speakers.
    # The global chunk sets the sampling rate we work at internally.
    Global {
        BaseSamplingRate 44100
    }

    # Here comes our file input.
    FileInput in {
        Path "in.wav"
        Format WAVEPCM
    }

    # Outputs for the left and right channels.
    FileOutput left {
        Path "left.wav"
        Format WAVEPCM
        Channels 1
    }
    FileOutput right {
        Path "right.wav"
        Format WAVEPCM
        Channels 1
    }

    # Output for the speakers.
    StreamOutput speakers {
        Format WAVEPCM
        SamplingRate 44100
        Channels 2
        BitDepth 16
    }

    # This placement takes the input, pans it to the left and
    # sends it to the left output file and the speakers. Note
    # we have to specify the outputs explicitly this time, as
    # we don't want it to go to all of them (the default).
    Placement pleft {
        Input in
        SampleOffset 0
        Outputs left, speakers
        Pan -1
    }

    # This placement takes the input, pans it to the right and
    # sends it to the right output file and the speakers.
    Placement pright {
        Input in
        SampleOffset 0
        Outputs right, speakers
        Pan 1
    }

These examples show a range of the features that the AMaMP audio engine has to offer. Note that you could run any of these instruction files on Windows, then take them, along with your input files, to Linux and expect the same results.

Communication While Mixing: IPC
Once you've got the core mixing audio, you might want to interact with it, e.g. to stop the output, add a new placement on the fly, or change the properties of an existing placement. The IPC (Inter-Process Communication) system provides for this. Working over pipes, two-way communication with the core is possible via a simple text-based protocol. This relies upon the frontend starting the core in such a way that pipes are created and connected to the core's stdin and stdout. A number of language bindings have been developed that do the dirty work here; at the time of writing, there are bindings for C, Perl and Visual Basic 5/6.
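To make the pipe wiring a little more concrete, here is a minimal POSIX C sketch of the kind of thing a frontend (or a language binding) does: create two pipes, fork, attach them to the child's stdin and stdout, and exec the core. The executable name "amamp", the instruction file name "mix.amp" and the absence of any real protocol traffic are all assumptions for illustration; the actual commands and replies are defined by the IPC documentation.

    /* Illustrative sketch only: how a frontend might start the core with
       pipes attached to its stdin and stdout. The executable name "amamp"
       and the instruction file "mix.amp" are assumptions, and no actual
       protocol messages are shown. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int to_core[2], from_core[2];   /* [0] = read end, [1] = write end */

        if (pipe(to_core) == -1 || pipe(from_core) == -1) {
            perror("pipe");
            return 1;
        }

        pid_t pid = fork();
        if (pid == -1) {
            perror("fork");
            return 1;
        }

        if (pid == 0) {
            /* Child: wire the pipe ends to stdin/stdout, then run the core. */
            dup2(to_core[0], STDIN_FILENO);
            dup2(from_core[1], STDOUT_FILENO);
            close(to_core[0]);  close(to_core[1]);
            close(from_core[0]); close(from_core[1]);
            execlp("amamp", "amamp", "mix.amp", (char *) NULL);
            perror("execlp");   /* only reached if exec failed */
            _exit(1);
        }

        /* Parent: keep the write end of the core's stdin and the read end
           of its stdout; anything sent or received here would follow the
           core's text protocol. */
        close(to_core[0]);
        close(from_core[1]);
        FILE *core_in  = fdopen(to_core[1], "w");
        FILE *core_out = fdopen(from_core[0], "r");

        char line[256];
        while (fgets(line, sizeof line, core_out) != NULL)
            fputs(line, stderr);        /* just echo the core's output */

        fclose(core_in);
        fclose(core_out);
        waitpid(pid, NULL, 0);
        return 0;
    }

In practice, the existing bindings take care of this setup for you.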

A Modular Effects System
Filters and effects are a key part of audio manipulation. The AMaMP core provides a modular effects system with a clean core-effect interface. On platforms where it's available, the build tools can compile each effect into a loadable module (plug-in style), leaving you with a small executable for the core; effects are loaded as they are needed, so you can add more later without rebuilding the core itself. On other platforms, or with a simple flag to the build system, effects are compiled directly into the core executable. A build tool handles all of the cross-platform building issues and leaves the developer free to write the effect itself. The effects system has also been designed with the future in mind, so that if more features are introduced there won't be a need to rewrite older effects.
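As a rough feel for the plug-in idea, the sketch below shows the general shape of such an effect module in C. It is emphatically not the core's actual effect interface (that is defined by the core's own headers); the descriptor type, symbol name and processing signature are all made up for illustration.

    /* Purely illustrative: NOT the real AMaMP effect interface. It just
       shows the plug-in shape: a descriptor the host can discover, plus a
       processing callback that works on a block of samples. */
    #include <stddef.h>

    /* Hypothetical descriptor type; the real core defines its own. */
    typedef struct {
        const char *name;
        void (*process)(float *samples, size_t count, float param);
    } ExampleEffect;

    /* A trivial gain effect: multiply every sample by `param`. */
    static void gain_process(float *samples, size_t count, float param)
    {
        for (size_t i = 0; i < count; i++)
            samples[i] *= param;
    }

    /* Built as a shared object, the host could find this symbol with
       dlopen()/dlsym(); compiled in, it could sit in a static table
       instead, which mirrors the two build modes described above. */
    ExampleEffect example_effect = {
        .name    = "example_gain",
        .process = gain_process,
    };

Either way, the effect author only writes the processing code; the build tool decides how it gets linked in.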
For users, there are not many effects available at the time of writing, as the effects system is still being completed. However, effects will be documented as they appear.

Cross-Platform Capabilities
As of the 0.2 release, audio output is available on Windows and Linux, and working with PCM Wave files has been tested under Cygwin, BSD, Mac OS X and Solaris (implying that the core compiles on all of these platforms). Over the next couple of releases, we hope to have support for audio output on Mac OS X in place, and if possible on BSD too. We also want to support more audio formats on all platforms, including RAW, OGG and MP3 (MP3 has licensing issues that need to be considered before support is introduced).

To The Future
Release 0.3 is largely about the effects system. Release 0.4 will introduce a modular system for building I/O modules in a similar plug-in fashion, so that, for example, you could use AMaMP to prototype a synthesis algorithm. See the Core page on the site for details of where we've got to so far.