GuitarML FAQ

What is GuitarML?

GuitarML is an open-source project that uses machine learning to model guitar amplifiers and effects pedals. The trained models run in free audio plugins such as SmartAmp, SmartAmpPro, NeuralPi, Chameleon, and the TS-M1N3.

How does the modelling process work?

  1. “Before” and “after” audio recordings are made using the target amplifier or pedal. There are several ways to accomplish this. You can use a signal/buffered splitter to record two tracks simultaneously (a sketch of how these paired recordings become training data follows the image):
Two-track recording setup using a signal splitter (Image by Author)
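
To make this concrete, here is a minimal sketch of how a pair of time-aligned recordings might be loaded as training data. The file names are hypothetical, and the snippet assumes mono 44.1kHz wav files:

    import numpy as np
    from scipy.io import wavfile

    # Hypothetical file names: the dry ("before") signal and the recorded
    # amp/pedal output ("after"). They must be time-aligned and equal length.
    rate_x, x = wavfile.read("guitar_dry.wav")
    rate_y, y = wavfile.read("guitar_amp.wav")
    assert rate_x == rate_y == 44100 and len(x) == len(y)

    # Normalize integer PCM (int16 shown) to float32 in [-1, 1]
    if x.dtype == np.int16:
        x = x.astype(np.float32) / 32768.0
    if y.dtype == np.int16:
        y = y.astype(np.float32) / 32768.0

    # Training fits a model so that, given the input sample and a window
    # of past context, it predicts the matching output sample:
    #     model(x[t - N : t + 1]) ~= y[t]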

How good do the captures sound compared to the real amp/pedal?

Sound demos (embedded audio):
  Little Big Muff Distortion/Fuzz/Sustain pedal
  Dumble ODS Clone (from kit)
  Xotic SP Compressor pedal

What are some limitations?

  1. Currently, only 44.1kHz input audio produces ideal sound through the plugins. The model is optimized for the sample rate used in the input/output recordings. Using 44.1kHz (instead of 48kHz or higher) improves training time and real-time performance while maintaining a minimum of CD-quality audio. You can set your interface to 48kHz, but the sound will be distorted. (A resampling sketch follows this list.)
    UPDATE (12/6/21): The TS-M1N3 (TS-9 clone) now has a sample-rate converter implemented, which will be used in the next release and in future plugins.
  2. Currently, only a single snapshot of a pedal/amp sound can be captured at a time. Where a real amp has multiple continuous settings (gain knob, EQ knobs, etc.), each capture can only reproduce one specific combination of settings (for example, gain at 7, bass at 5, treble at 6).
    UPDATE (12/6/21): Conditioned models have been implemented in the NeuralPi and TS-M1N3 plugins, which use machine learning to model the full range of one or more knobs. (A conditioned-model sketch follows this list.)
  3. You can only capture “signal” based effects, as opposed to “time” based effects. Distortion, overdrive, and most compression can be captured because they have an immediate effect on the guitar signal. Reverb, delay, chorus, and flanger can’t be captured because they modulate the signal over time. These effects should be added separately to your signal chain.
  4. The capture can only be as good as the recorded audio samples. This is typically not an issue for pedals or devices that you can record from a direct output. However, any noise introduced in the recorded signal will also be captured. For most amplifiers, you will need to use a microphone, which adds its own color to the sound. It also introduces speaker/cabinet dynamics, which may or may not be properly captured by the machine learning. These small differences shift the modelled sound away from the “true” sound of the amplifier. However, if the recordings are done properly, the effect is negligible for capturing the overall sound of the amp.
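
On limitation 1: a recording made at 48kHz can be converted to 44.1kHz before training. A minimal sketch, assuming a hypothetical 48kHz wav file that is already mono 32-bit float, using scipy’s polyphase resampler:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import resample_poly

    # Hypothetical 48 kHz recording; the plugins expect 44.1 kHz.
    rate, audio = wavfile.read("input_48k.wav")   # mono float32 assumed
    assert rate == 48000

    # 44100 / 48000 reduces to 147 / 160
    audio_441 = resample_poly(audio, up=147, down=160).astype(np.float32)
    wavfile.write("input_44k.wav", 44100, audio_441)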
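
On limitation 2: the idea behind a conditioned model is to feed the knob setting into the network alongside each audio sample, so one trained model covers the knob’s full range. A minimal Keras sketch; the layer size and two-feature input layout are illustrative, not the exact NeuralPi/TS-M1N3 configuration:

    import tensorflow as tf

    # Each timestep carries two features: the audio sample and a
    # normalized knob value (e.g. gain mapped to 0.0-1.0).
    inputs = tf.keras.Input(shape=(None, 2))
    hidden = tf.keras.layers.LSTM(20, return_sequences=True)(inputs)
    outputs = tf.keras.layers.Dense(1)(hidden)   # predicted output sample
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")

    # Training data would pair [sample, knob] sequences recorded at
    # several knob positions with the corresponding output audio.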

Why aren’t all models compatible with every plugin?

Each plugin uses a different neural-network architecture, so their model files are not interchangeable:

  1. SmartAmp and SmartPedal: WaveNet
  2. SmartAmpPro: combination of 1-D convolution and a stateless LSTM
  3. NeuralPi, Chameleon, TS-M1N3: stateful LSTM (sketched after the trade study below)
Model trade study (higher score is better)
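
For reference, a minimal sketch of the stateful-LSTM idea used by the third group, in Keras with illustrative sizes. “Stateful” means the hidden state carries over from one audio buffer to the next instead of being reset, which suits processing a continuous guitar signal block by block:

    import tensorflow as tf

    # Batch size fixed at 1 (a single audio stream); the LSTM keeps its
    # internal state between calls, so consecutive buffers are treated
    # as one continuous signal.
    inp = tf.keras.Input(batch_shape=(1, None, 1))
    hidden = tf.keras.layers.LSTM(20, return_sequences=True, stateful=True)(inp)
    out = tf.keras.layers.Dense(1)(hidden)
    model = tf.keras.Model(inp, out)

    # A stateless LSTM (as in SmartAmpPro) would instead reset its state
    # on every buffer and rely on a window of past samples for context.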

The training code throws an error when trying to read my sample input/output wav files.

  1. Must be 32-bit floating point (FP32).
  2. Must have a 44.1kHz sample rate.
  3. Must be mono (as opposed to stereo).
  4. Can’t have extra metadata, such as tempo information, which is sometimes added automatically by your DAW. (A conversion sketch follows this list.)
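
Here is a minimal sketch that rewrites a wav file to satisfy all four requirements at once. The file names are hypothetical, and it assumes the soundfile and scipy packages are installed:

    from math import gcd

    import numpy as np
    import soundfile as sf
    from scipy.signal import resample_poly

    # Rewriting the file also drops DAW metadata chunks such as tempo
    # information.
    audio, rate = sf.read("my_recording.wav", dtype="float32")

    if audio.ndim > 1:                 # stereo -> mono
        audio = audio.mean(axis=1)

    if rate != 44100:                  # resample to 44.1 kHz
        g = gcd(44100, rate)
        audio = resample_poly(audio, 44100 // g, rate // g).astype(np.float32)

    # subtype="FLOAT" writes 32-bit floating point samples
    sf.write("my_recording_fixed.wav", audio, 44100, subtype="FLOAT")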

What is Google Colab and how can I use it to create GuitarML models?

Google Colab is a free, browser-based notebook environment from Google that runs Python in the cloud and provides GPU access, so you can train models without setting up anything locally. To use it with GuitarML:

  1. Download the appropriate .ipynb notebook for Colab from the GuitarML GitHub repository. For training NeuralPi/Chameleon models (the most advanced GuitarML models), go here. From this GitHub page, right-click the “Raw” button and choose “Save link as…” to download the Colab script.
  2. Go to the Colab website.
  3. Click “File” and “Upload Notebook”, and upload the Colab script downloaded from GitHub.
  4. Switch to the GPU runtime to train using GPUs (much faster than CPU runtime). This is done from the “Runtime” dropdown menu at the top of the webpage.
  5. Upload your two sample input/output wav files. You may need to run the “!git clone …” section of code first in order to create the project directory. The wav files can then be uploaded to the project folder in the left-hand menu.
  6. Run each block of code by clicking the small “play” icons for each section in the main view. Follow any commented instructions (green font) before running each block of code. This includes changing the wav file names referenced in the code to match what you uploaded, and naming your model.
  7. When you run the actual training, you should see an output of its progress.
  8. When training is complete, you can run the last block of code to create plots comparing the target signal and the newly created model (a simplified sketch of such a comparison follows the screenshot below).
Example screenshot for using the SmartAmpPro colab script
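
For illustration, a stripped-down version of that kind of comparison plot, with hypothetical file names for the target recording and the model’s prediction:

    import matplotlib.pyplot as plt
    from scipy.io import wavfile

    # Hypothetical outputs: the "after" recording and the trained
    # model's prediction on the same input signal.
    _, target = wavfile.read("target.wav")
    _, predicted = wavfile.read("predicted.wav")

    # Overlay a short window (~45 ms at 44.1 kHz) so the waveforms
    # are readable.
    start, n = 44100, 2000
    plt.plot(target[start:start + n], label="target")
    plt.plot(predicted[start:start + n], label="model")
    plt.xlabel("sample")
    plt.legend()
    plt.show()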

Plugin Specific Questions

What can I do if the plugin crackles or drops out in real time?

  1. Train models with fewer parameters (fewer layers/channels). This reduces accuracy, but the model will run faster in real time (see the sketch after this list).
  2. Plug in your laptop and set the system performance to the highest setting.
  3. Set your DAW thread priority to the highest setting.
  4. Increase the buffer size in your audio device settings.
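
On point 1: an LSTM’s parameter count grows roughly with the square of its hidden size, so modest size reductions buy a lot of real-time headroom. A quick back-of-the-envelope calculation (the hidden sizes are illustrative):

    def lstm_params(hidden_size, input_size=1):
        # A single LSTM layer has 4 gates, each with an input weight
        # matrix, a recurrent weight matrix, and a bias vector.
        return 4 * (hidden_size * input_size
                    + hidden_size * hidden_size
                    + hidden_size)

    for h in (80, 40, 20):
        print(f"hidden size {h:3d}: {lstm_params(h):6d} parameters")

    # hidden size  80:  26240 parameters
    # hidden size  40:   6720 parameters
    # hidden size  20:   1760 parameters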
