Tickle.py Dev Diary #3

Above: My training running, slowly… 10 days before we’re done.

Hi everyone, this is going to be a bit of a shorter and sweeter update, but I finally got my audio chunked! There were some issues where the script was treating the zero-padded audio as having a different dimension from the non-zero-padded examples, so I had to discard all of the zero-padded training examples.
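
Below is a minimal sketch of the kind of filtering I mean; the folder name, the 1024-sample tail check, and the soundfile library are illustrative assumptions rather than the exact script I used.

```python
# Sketch: skip chunks whose tails are pure zero padding so every training
# example keeps the same effective length. Paths and the tail-length check
# are placeholder choices.
import glob
import numpy as np
import soundfile as sf

kept, dropped = 0, 0
for path in glob.glob("chunks/*.wav"):
    audio, sr = sf.read(path)
    if np.all(audio[-1024:] == 0):   # tail is silent -> chunk was zero padded
        dropped += 1
    else:
        kept += 1                    # keep only the fully "real" chunks for training
print(f"kept {kept} chunks, dropped {dropped} zero-padded chunks")
```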

In addition, TensorFlow had issues recognising my GPU, so I had to use some fixes that I found on Sevag’s very nice GitHub fork and project blog, where he describes facing some of the same issues that I had: https://1000sharks.xyz/prism_samplernn.html .
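
For anyone hitting the same wall, a quick way to check whether TensorFlow can actually see the card is the snippet below; the memory-growth setting is a common workaround for consumer GPUs, and is an assumption on my part rather than necessarily the exact fix from Sevag’s notes.

```python
# Sanity check: does TensorFlow see the GPU at all?
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)

# Letting TensorFlow grow GPU memory on demand (rather than grabbing it all
# up front) often avoids crashes on consumer cards; this is a common
# workaround, not necessarily the specific fix from the blog linked above.
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```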

With all of this out of the way, I could finally train my model…

Above: Training was an exciting process.

At first the training script kept crashing because I had overestimated the power of my graphics card, and I had to use a lower batch size than I had originally anticipated (down from 128 to 8). This had the knock-on effect of increasing the training time by quite a bit (10 days to run all 100 epochs).
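
The arithmetic behind that estimate is roughly what the back-of-envelope below shows; the chunk count and seconds-per-step figures are illustrative guesses, not measured numbers.

```python
# Rough back-of-envelope for why a smaller batch stretches the schedule.
# All numbers here are illustrative, not measured.
num_chunks = 20_000        # hypothetical number of training chunks
batch_size = 8             # what my card could handle (down from 128)
seconds_per_step = 3.5     # rough guess for one training step locally
epochs = 100

steps_per_epoch = num_chunks // batch_size
total_days = steps_per_epoch * seconds_per_step * epochs / 3600 / 24
print(f"{steps_per_epoch} steps per epoch, roughly {total_days:.1f} days total")
```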

From here I was debating ending the training at the 20th epoch to save time and focus on generating sound. Stopping early could also have benefits, since it means I won’t overfit to the data. ASMR also has fewer features to extract than the music this model was made for, so I could potentially have a working model sooner; the methodology for success is very different from traditional music generation models (I just need to make sounds that make the brain go ping).
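
If I do cut it short, something like the generic Keras-style sketch below would do the job: checkpoint every epoch so I can generate from epoch 20, and stop once validation loss flattens out. This is an assumed setup, not the actual prism-samplernn training script.

```python
# Hypothetical early-stopping setup using standard Keras callbacks; the real
# training loop in the repo may expose this differently.
import tensorflow as tf

callbacks = [
    # keep a checkpoint per epoch so generation can start from any point
    tf.keras.callbacks.ModelCheckpoint(
        "checkpoints/epoch_{epoch:03d}.weights.h5", save_weights_only=True),
    # stop once validation loss stops improving, which also limits overfitting
    tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True),
]
# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=callbacks)
```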

As one does, I posted about my predicament on Twitter and got a really good answer!

Above: the oracle has beseeched me….

Through Kevin’s advice I found out that the PrismSampleRNN implementation I was using also has a Google Colab notebook, and with Colab Pro I could gain access to a much beefier GPU, which could solve all of my issues. (And since the implementation is the same, I don’t need to re-chunk my training examples.) I had originally signed up for the Google Cloud Console and got $300 of free credit, but it turns out that I probably won’t need that, since the Colab subscription service handles GPU allocation separately.
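
Once the notebook spins up, a couple of lines are enough to confirm which GPU Colab Pro has actually handed over; this is standard Colab/TensorFlow usage rather than anything specific to the project.

```python
# Check which GPU the Colab runtime assigned before starting training.
import tensorflow as tf

gpu = tf.config.list_physical_devices("GPU")[0]
details = tf.config.experimental.get_device_details(gpu)
print(details.get("device_name"))   # e.g. "Tesla T4"
```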

Above: Now I have access to a Tesla T4 card which is going to make my simulations go VROOOOMMMM!!!!!

As it stands, I am currently uploading 5.5GB of chunked training data to my Google Drive so I can get the Colab notebook working as soon as possible, and I am still running my local training as I think it would be good to compare the results (and I would also have something to fall back on if the Colab notebook doesn’t work).
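
Once the upload finishes, pointing the notebook at the data should just be the standard Drive mount; the folder name below is a placeholder for wherever the chunks end up.

```python
# Standard Colab pattern for reading training data straight from Drive.
from google.colab import drive

drive.mount("/content/drive")
DATA_DIR = "/content/drive/MyDrive/tickle_chunks"   # hypothetical folder name
```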
