---
title: "Here be Dragons: Deep Learning Reproducibility"
date: 2022-01-22T13:02:31Z
description: A harrowing tale of trying to solve the impossible and failing
draft: false
type: post
url: /2022/01/22/deep-learning-reproducibility-here-be-dragons
resources:
- name: feature
  src: images/feature.jpg
tags:
- machine-learning
- work
- phd
---

***A harrowing tale of trying to solve the impossible and failing. Episode 5 in this year's run at [the #100DaysToOffload challenge](https://100daystooffload.com/). See the full series [here](/tags/100daystooffload/)***

{{<figure src="images/feature.jpg" caption="Photo by **[Tim Mossholder](https://www.pexels.com/@timmossholder?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels)** from **[Pexels](https://www.pexels.com/photo/dragon-graffiti-5614516/?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels)**">}}

### That's So Random: Randomness in Machine Learning

Training Machine Learning and in particular Deep Learning models generally involves a lot of random number generation. If we're training a supervised [classifier](https://en.wikipedia.org/wiki/Statistical_classification) or [regressor](https://en.wikipedia.org/wiki/Simple_linear_regression), we tend to [randomly split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) our annotated data into a training set and a test set. Also, if you are training a new neural network, it is fairly standard practice to initialize the connections between the neurons (the weights) with random numbers ([here's why](https://machinelearningmastery.com/why-initialize-a-neural-network-with-random-weights/)).
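
To make that concrete, here's a minimal sketch (with made-up placeholder data, using scikit-learn and PyTorch) of the two places that randomness typically sneaks in:

```python
import numpy as np
import torch
from sklearn.model_selection import train_test_split

# Placeholder data standing in for an annotated dataset.
X = np.random.rand(1000, 16)
y = np.random.randint(0, 2, size=1000)

# 1. Randomly split the annotated data into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# 2. Randomly initialise the weights of a new network layer.
layer = torch.nn.Linear(16, 2)  # weights are drawn from a random distribution
print(layer.weight[0, :4])      # different values on every run... unless you seed
```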

However, all this randomness goes against one of the basic tenets of scientific research: reproducibility. That is, in order to be sure that the results of an experiment weren't just down to luck or chance (or that the person who carried out the work didn't make up their results), it is really important that the results can be reproduced.

### Loading the Dice: Pseudo-Randomness and Seeds

So how do we reconcile these things? Well, by happy coincidence, random operations on computers are not really random at all: they are [pseudo-random](https://en.wikipedia.org/wiki/Pseudorandomness). That means that although the numbers generated may appear random, they are actually completely predictable given an initial starting point called the seed. Most of the time the seed is set to the current time, so you get a different, effectively random sequence of numbers every time you run your program.

Within the machine learning community, it's best practice to fix the random seed to a known value for all experiments. That way the random operations are still statistically random for the purposes of the model, but they are exactly the same on every subsequent run of the program, which means that, in theory, the program should be perfectly reproducible as long as the seed is set.
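
In a typical Python/PyTorch stack, fixing the seed usually means seeding every library's generator up front. This is a sketch of the general pattern rather than the exact code from my project:

```python
import random

import numpy as np
import torch

SEED = 42

random.seed(SEED)                 # Python's built-in RNG
np.random.seed(SEED)              # numpy's global RNG
torch.manual_seed(SEED)           # PyTorch CPU RNG
torch.cuda.manual_seed_all(SEED)  # PyTorch GPU RNGs (no-op without a GPU)

# Two seeded runs now draw identical "random" numbers:
torch.manual_seed(SEED)
a = torch.rand(3)
torch.manual_seed(SEED)
b = torch.rand(3)
assert torch.equal(a, b)
```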

### A Walkthrough of My Fool's Errand

Earlier this week I panicked because an experiment I've been working on was giving fairly different results depending on whether I trained it on my desktop or on a cloud VM. We're talking performance levels within 10% F1 score of each other, but the thing is, I'd set all my random seeds to `42`, so, in theory, the results should have been exactly the same. What was going on?

I sank a lot of time into meticulously checking that the files I was using for training and testing were exactly the same. I use [dvc](https://dvc.org/) to track all of my experiments, including parameters, input files and output files, so it's possible (I wouldn't go as far as trivial) to check that all of these things add up by looking at the file hashes in the DVC lock file.
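
For example, the kind of check I mean looks roughly like this (a sketch that assumes the standard `dvc.lock` layout; the copied file names are made up for illustration): load the lock file from each machine and diff the recorded hashes.

```python
import yaml

def file_hashes(lock_path: str) -> dict:
    """Collect the path -> md5 pairs recorded for each stage's deps and outs."""
    with open(lock_path) as fh:
        lock = yaml.safe_load(fh)
    hashes = {}
    for stage in lock.get("stages", {}).values():
        for entry in stage.get("deps", []) + stage.get("outs", []):
            if "md5" in entry:
                hashes[entry["path"]] = entry["md5"]
    return hashes

# dvc.lock files copied over from the two machines.
desktop = file_hashes("dvc.lock.desktop")
server = file_hashes("dvc.lock.server")
for path in sorted(set(desktop) | set(server)):
    if desktop.get(path) != server.get(path):
        print(f"MISMATCH {path}: {desktop.get(path)} vs {server.get(path)}")
```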

Everything looked the same so then I started stepping through the code line-by-line in interactive debuggers running on both my desktop and the server.

I was able to verify that the exact same batches of text were being passed in with exactly the same mapping onto embeddings:

Desktop inputs in memory: ![](images/inputs_desktop.png)

Server inputs in memory: ![](images/inputs_server.png)

I ran the inputs through a single pass of the [RoBERTa-based model](https://huggingface.co/docs/transformers/model_doc/roberta) and that was when things got a bit weird:

Desktop RoBERTa output: ![](images/desktop_output.png)

Server RoBERTa output: ![](images/server_output.png)

The outputs are in the same ballpark, but they already start to differ within the first 2 significant figures. This should be impossible: I'd set the random seed to identical values on both machines, the input files are the same and the software libraries are the same. What is going on?
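
If you want to run the same sanity check yourself, the single forward pass looked roughly like this (a sketch using the stock Hugging Face `transformers` API rather than my actual project code):

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

torch.manual_seed(42)

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")
model.eval()

inputs = tokenizer("the same sentence on both machines", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Eyeball a few values against the other machine, then compare dumped tensors
# with a tolerance, e.g. torch.allclose(desktop_tensor, server_tensor, atol=1e-5),
# rather than expecting bitwise equality across different GPUs.
print(outputs.last_hidden_state[0, 0, :5])
```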

Well... it turns out that this is a [known problem](https://discuss.pytorch.org/t/different-result-on-different-gpu/102502/2) according to some of the community experts over at the [pytorch forum](https://pytorch.org/). The problem comes down to the way that floating point operations are handled by different GPU architectures. Floating point numbers are approximate representations of real numbers - they can be very precise, but they are approximations all the same, and beyond a certain level of precision it becomes a bit of a wild west. As [alband](https://discuss.pytorch.org/t/reproducibility-over-different-machines/63047/12) says over at the pytorch forum:

> This is a hardware/floating point limitation. The floating point standard specifies only how close from the real value the result should be. But the hardware can return any value that is that close. So different hardware can give different result
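
You don't need two different GPUs to see the underlying effect. Floating point addition isn't associative, so any hardware or library that performs the same operations in a different order is allowed to return a slightly different answer:

```python
import numpy as np

# Floating point addition is not associative: the order of operations matters.
a, b, c = np.float32(1e8), np.float32(-1e8), np.float32(0.1)

print((a + b) + c)  # ~0.1 -- the big values cancel first, so the 0.1 survives
print(a + (b + c))  # 0.0  -- the 0.1 is swallowed by -1e8 in 32-bit precision
```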

The issue is that neural networks multiply the input signal by a given set of weights to some finite degree of precision. Any tiny rounding differences are then multiplied again by each consecutive layer in the network and finally feed into the error signal. This process is repeated thousands or millions of times during training of the network.
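
As a toy illustration of how those per-layer rounding errors build up (made-up layer sizes, nothing to do with RoBERTa specifically), here's a stack of matrix multiplications run in `float32` and compared against a `float64` reference:

```python
import torch

torch.manual_seed(42)

x = torch.randn(1, 512)
weights = [torch.randn(512, 512) / 512**0.5 for _ in range(12)]

a = x.clone()   # float32 all the way through
b = x.double()  # float64, used here as a near-exact reference
for w in weights:
    a = a @ w
    b = b @ w.double()

# The gap is roughly the rounding error that float32 accumulates layer by layer:
# tiny in absolute terms, but far bigger than a single rounding step.
print((a - b.float()).abs().max().item())
```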

{{<figure src="images/neural.drawio.png" caption="Visualisation of how a small difference in the calculations made by 2 different computers (red and blue) can propagate through the network into the output and error signal.">}}

[Gradient descent](https://en.wikipedia.org/wiki/Gradient_descent) is very sensitive to these fluctuations in the gradient. If you nudge the output slightly in one direction you might descend towards one local minimum of the error surface, and in the other direction, towards a completely different one. These small fluctuations can multiply out into relatively large changes to the model over large batches of data, hence the differences I was seeing.

{{<figure src="images/Gradient_ascent.png" caption="Visual representation of gradient descent from [wikipedia](https://commons.wikimedia.org/wiki/File:Gradient_ascent_%28surface%29.png)">}}

### So What Can Be Done?

Well, the folks over at pytorch [don't think much can be done](https://discuss.pytorch.org/t/reproducibility-over-different-machines/63047/14), although they did suggest that it's worth double-checking that my results are reproducible on the same machine. I ran the model again a couple of times on my desktop and got exactly the same outputs, so I'm confident that this is the case.

Conclusions are as follows:

- Deep Learning models are only ***truly*** reproducible on the same machine and hardware
- It seems reasonable to believe that 2 machines with the same CPU and GPU ***should*** behave exactly the same, although I've not tested this since I don't have 2 identical machines to run my experiments on.
- Even on the same machine, results may not be reproducible if you run your model on your GPU and then on your CPU. Furthermore, some GPU operations are non-deterministic (their results change even if you set your random seed), so it is worth reading the [pytorch documentation](https://pytorch.org/docs/stable/notes/randomness.html) on how to give yourself the best chance of reproducing your work - the sketch below shows the relevant settings.
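
Roughly, the settings described in those randomness notes look like this (a hedged sketch; with deterministic algorithms enforced, operations that have no deterministic implementation will raise an error rather than silently vary):

```python
import os
import random

import numpy as np
import torch

SEED = 42
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)

# Ask cuDNN for deterministic kernels and stop it benchmarking for the fastest one.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# Recent CUDA versions also need this environment variable for deterministic cuBLAS.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

# Error out on any operation that has no deterministic implementation.
torch.use_deterministic_algorithms(True)
```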

Therefore:
- If you are making changes to the same model - e.g. changing hyperparameters, input data or even the network architecture - it is important to use the same machine (or possibly identical machines with the same software + hardware stack) to run your experiments. ***If you train on separate machines you may not be getting a like-for-like benchmark of your models***.
- If you plan to deploy the model into a production inference environment, it is probably worth training your model on the same hardware that you plan to run it on. If that's not possible, then you should run the model over your evaluation set in the final prod environment and see a) whether the results are very different and b) whether you are happy to accept those differences.

<a href="https://brid.gy/publish/twitter"></a>
<a href="https://brid.gy/publish/mastodon"></a>