GuitarML FAQ

What is GuitarML?

GuitarML uses machine learning to model real amps and pedals. The GuitarML portfolio encompasses several guitar plugins that let you create and play amp/pedal captures in real time on your electric guitar.

How does the modelling process work?

  1. “Before” and “after” audio recordings are made using the target amplifier or pedal. There are several ways to accomplish this; for example, you can use a signal/buffered splitter to record the dry and processed tracks simultaneously, then check the pair before training, as in the sketch below.
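Once you have the pair of recordings, a quick sanity check that they line up can save a failed training run. This is a minimal sketch, assuming the soundfile package and hypothetical file names:

    # Rough sanity check that the "before" and "after" recordings line up.
    # File names are hypothetical.
    import soundfile as sf

    dry, sr_dry = sf.read("input_dry.wav")    # unprocessed guitar track
    wet, sr_wet = sf.read("output_amp.wav")   # same performance through the amp/pedal

    assert sr_dry == sr_wet, "sample rates differ"
    assert len(dry) == len(wet), "recordings are different lengths"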

How good do the captures sound compared to the real amp/pedal?

To explain this, I’ll use a couple of real-world examples of pedal and amp captures. The target sound and the modelled sound are compared using audio and plots of the signals. In each graph below, the purple line is the input guitar signal, the red line is the target amp/pedal signal, and the green line is the predicted signal from the trained model. Each graph shows approximately 8 milliseconds of audio.
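For reference, a comparison plot like the ones below can be produced with a few lines of matplotlib. This is a rough sketch, not the exact code used for these figures; the file names are assumptions:

    # Sketch: overlay ~8 ms of the input, target, and predicted signals.
    # File names are hypothetical.
    import numpy as np
    import soundfile as sf
    import matplotlib.pyplot as plt

    in_sig, sr = sf.read("input.wav")       # dry guitar
    target, _ = sf.read("target.wav")       # recorded amp/pedal output
    pred, _ = sf.read("predicted.wav")      # trained model's output

    n = int(0.008 * sr)                     # roughly 8 milliseconds
    t = np.arange(n) / sr * 1000            # time axis in milliseconds

    plt.plot(t, in_sig[:n], color="purple", label="input guitar signal")
    plt.plot(t, target[:n], color="red", label="target amp/pedal signal")
    plt.plot(t, pred[:n], color="green", label="predicted (model) signal")
    plt.xlabel("Time (ms)")
    plt.ylabel("Amplitude")
    plt.legend()
    plt.show()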

Little Big Muff Distortion/Fuzz/Sustain pedal
Dumble ODS Clone (from kit)
Xotic SP Compressor pedal

What are some limitations?

  1. Only “signal-based” effects can be captured, as opposed to “time-based” effects. Distortion, overdrive, and some compression can be captured because they have a more immediate effect on the guitar signal. Reverb, delay, chorus, and flanger can’t be captured because they modulate the signal over a longer period of time; these effects should be added separately to your signal chain.
  2. The capture can only be as good as the recorded audio samples. This is typically not an issue for pedals or other devices that you can record from a direct output. However, any noise introduced into the recorded signal will also be captured. You can use a microphone when recording amp samples, but the microphone adds its own color to the sound and introduces speaker/cabinet dynamics, which may or may not be properly captured by the machine learning. The recommended recording method for amps is to use a load box to capture the direct output, or a line out if available. The resulting model can then be used with impulse responses to swap in different cab/mic combinations, as in the sketch below.
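Applying a cab/mic impulse response to a direct capture is just a convolution. Here is a minimal sketch, assuming the soundfile and scipy packages and hypothetical file names:

    # Sketch: apply a cabinet/mic impulse response to a direct amp capture.
    # File names are hypothetical; both files should share one sample rate.
    import soundfile as sf
    from scipy.signal import fftconvolve

    direct, sr = sf.read("amp_direct.wav")   # amp model output, no cab
    ir, ir_sr = sf.read("cab_ir.wav")        # cabinet/mic impulse response
    assert sr == ir_sr, "sample rates must match"

    wet = fftconvolve(direct, ir)[: len(direct)]  # convolve, trim to length
    wet /= max(abs(wet).max(), 1e-9)              # normalize to avoid clipping
    sf.write("amp_with_cab.wav", wet, sr)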

Why aren’t all models compatible with every plugin?

The GuitarML plugins began as experiments in using machine learning to model amps/pedals, and several different neural net architectures were tested to find the best one. Although every model is saved in JSON format, the structure of the data differs for each architecture, so a model of one type is not compatible with a plugin built for another.

  1. Proteus, EpochAmp, NeuralPi, Chameleon, TS-M1N3: stateful LSTM
  2. SmartAmp and SmartPedal: WaveNet
  3. SmartAmpPro (Not officially released and no longer in development, but available on Github): Combination of 1-D Convolution and stateless LSTM
Model trade study (higher score is better)
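If you’re unsure which plugin a downloaded model file belongs to, inspecting the JSON can help. This sketch only prints the top-level keys; which keys appear varies between codebases, so treat the hints in the comments as assumptions:

    # Sketch: peek inside a model JSON to guess which plugin it targets.
    # The file name and the key hints in the comments are assumptions.
    import json

    with open("some_model.json") as f:
        model = json.load(f)

    print(sorted(model.keys()))
    # LSTM-based files tend to mention an LSTM layer or hidden size,
    # while WaveNet files tend to list dilations/convolution layers.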

The training code throws an error when trying to read my sample input/output wav files.

The wav files have to be in a specific format before they can be used with any of the training codebases. These are the requirements (a conversion sketch follows the list):

  1. Must have a 44.1 kHz sample rate.
  2. Must be Mono (as opposed to stereo).
  3. Can’t have extra metadata, such as tempo information, which is sometimes added automatically by your DAW.
  4. The new Colab script code used with Proteus and SmartPedal accepts either PCM16 or FP32 samples, but the old code only accepts FP32.
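A minimal sketch for checking a wav file against these requirements and rewriting it, assuming the soundfile and librosa packages and hypothetical file names:

    # Sketch: conform a sample wav to the requirements above.
    # Assumes the soundfile and librosa packages; file names are hypothetical.
    import soundfile as sf
    import librosa

    data, sr = sf.read("input_raw.wav")

    if data.ndim > 1:                # requirement 2: must be mono
        data = data.mean(axis=1)

    if sr != 44100:                  # requirement 1: must be 44.1 kHz
        data = librosa.resample(data, orig_sr=sr, target_sr=44100)
        sr = 44100

    # Rewriting with soundfile drops DAW metadata (requirement 3)
    # and stores 32-bit float (FP32) samples (requirement 4).
    sf.write("input_fixed.wav", data, sr, subtype="FLOAT")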

What is Google Colab and how can I use it to create GuitarML models?

Google Colab is a free Python environment in the cloud, where you can run code through a web browser. The environment includes both TensorFlow and PyTorch (the frameworks used for the machine learning part), which eliminates the need to download and install all the dependencies on your local computer. It requires a Google account to use. After recording your input/output sample wav files, follow these steps:

  1. Download the appropriate Capture Utility zip file (Proteus or SmartPedal) from GuitarML.com. This contains an input wav file and a colab script specific for either the Proteus or SmartPedal plugins.
  2. Go to the Colab Website.
  3. Click “File” and “Upload Notebook”, and upload the Colab script downloaded from GitHub.
  4. Switch to the GPU runtime to train using GPUs (much faster than the CPU runtime). This is done from the “Runtime” dropdown menu at the top of the webpage; a quick way to verify it worked is shown after these steps.
  5. Upload your two sample input/output wav files. You may need to run the “!git clone …” section of code first in order to create the project directory. The wav files can then be uploaded to the project folder in the left-hand menu. When using the Capture Utility colab scripts, follow the instructions at the top of the script after loading it into the Colab website.
  6. Run each block of code by clicking the small “play” icons for each section in the main view. Follow any commented instructions (green font) before running each block of code.
  7. When you run the actual training, you should see an output of its progress.
  8. When training is complete, you can run the last block of code to create plots for comparing the target signal and the newly created model.
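To verify the GPU runtime is active (step 4), you can run a quick check in a Colab cell, for example:

    # Quick check that the Colab GPU runtime is active (see step 4).
    import tensorflow as tf
    import torch

    print(tf.config.list_physical_devices("GPU"))  # empty list = CPU runtime
    print(torch.cuda.is_available())               # False = CPU runtime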

Plugin Specific Questions

Why does SmartAmp/SmartPedal sound glitchy or crackly?

  1. Train models with fewer parameters (fewer layers/channels). This reduces accuracy, but the model runs faster in real time (see the sketch after this list).
  2. Plug in your laptop and set the system performance to the highest setting.
  3. Set your DAW thread priority to the highest setting.
  4. Increase your buffer size in your audio device settings.
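As a back-of-the-envelope illustration of why smaller models run faster: a stateful LSTM has to be evaluated once per audio sample, so the per-second cost grows roughly quadratically with the hidden size. The sizes below are examples, not the plugins’ actual configurations:

    # Back-of-the-envelope cost of a single-layer LSTM run once per sample.
    # Hidden sizes are illustrative, not the plugins' actual configurations.
    SAMPLE_RATE = 44100

    def lstm_macs_per_second(hidden, inputs=1):
        # An LSTM has 4 gates, each with (inputs + hidden) x hidden weights.
        macs_per_sample = 4 * (inputs + hidden) * hidden
        return macs_per_sample * SAMPLE_RATE

    for h in (20, 40, 80):
        print(f"hidden={h}: ~{lstm_macs_per_second(h) / 1e6:.1f} M multiply-adds per second")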
