I'm training a gensim skip-gram negative-sampling (SGNS) model, and during training I measure the loss, which I calculate as
loss = model.running_training_loss / model.corpus_count
However, I noticed that changing the number of worker threads changes the loss, with all other parameters kept the same. In particular, with a single worker thread I get a very high loss, and the loss drops as I add threads. For instance:
workers    loss
1          20.40519721
10         2.714875407
16         1.239528453
Up through gensim 3.5.0, the reported loss value may not be very meaningful: the tally is only reset on each call to train(), rather than at each internal epoch. Fixes are forthcoming in this pull request:
https://github.com/RaRe-Technologies/gensim/pull/2135
What version of gensim are you using, and what is your code doing to collect the loss data?
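In the meantime, a common workaround is to record per-epoch deltas of the running tally yourself, via gensim's epoch-end callback hook. The sketch below is self-contained, so a stub object stands in for the Word2Vec model; in real use, EpochLossLogger (a name of my own) would subclass gensim.models.callbacks.CallbackAny2Vec, and you would pass callbacks=[EpochLossLogger()] together with compute_loss=True to train().

```python
class EpochLossLogger:
    """Records per-epoch loss deltas. With gensim, subclass
    gensim.models.callbacks.CallbackAny2Vec instead of a plain class."""
    def __init__(self):
        self.previous = 0.0
        self.epoch_losses = []

    def on_epoch_end(self, model):
        # get_latest_training_loss() is a running tally across epochs
        # within one train() call, so take the delta per epoch.
        total = model.get_latest_training_loss()
        self.epoch_losses.append(total - self.previous)
        self.previous = total

# Stub standing in for a Word2Vec model whose tally grows each epoch.
class _StubModel:
    def __init__(self):
        self._tally = 0.0
    def get_latest_training_loss(self):
        self._tally += 10.0  # pretend each epoch adds 10 units of loss
        return self._tally

logger = EpochLossLogger()
stub = _StubModel()
for _ in range(3):
    logger.on_epoch_end(stub)
print(logger.epoch_losses)  # [10.0, 10.0, 10.0]
```

The per-epoch deltas are what you would then normalize by corpus size, rather than the raw running tally.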
Related
I'm using the AllenNLP (version 2.6) semantic role labeling model to process a large pile of sentences. My Python version is 3.7.9. I'm on MacOS 11.6.1. My goal is to use multiprocessing.Pool to parallelize the work, but the calls via the pool are taking longer than they do in the parent process, sometimes substantially so.
In the parent process, I have explicitly placed the model in shared memory as follows:
from allennlp.predictors import Predictor
from allennlp.models.archival import load_archive
import allennlp_models.structured_prediction.predictors.srl
PREDICTOR_PATH = "...<srl model path>..."
archive = load_archive(PREDICTOR_PATH)
archive.model.share_memory()
PREDICTOR = Predictor.from_archive(archive)
I know the model is only being loaded once, in the parent process. And I place the model in shared memory whether or not I'm going to make use of the pool. I'm using torch.multiprocessing, as many recommend, and I'm using the spawn start method.
I'm calling the predictor in the pool using Pool.apply_async, and I'm timing the calls within the child processes. I know that the pool is using the available CPUs (I have six cores), and I'm nowhere near running out of physical memory, so there's no reason for the child processes to be swapped to disk.
Here's what happens, for a batch of 395 sentences:
Without multiprocessing: 638 total processing seconds (and elapsed time).
With a 4-process pool: 293 seconds elapsed time, 915 total processing seconds.
With a 12-process pool: 263 seconds elapsed time, 2024 total processing seconds.
The more processes, the worse the total AllenNLP processing time - even though the model is explicitly in shared memory, and the only thing that crosses the process boundary during the invocation is the input text and the output JSON.
I've done some profiling, and the first thing that leaps out at me is that the function torch._C._nn.linear is taking significantly longer in the multiprocessing cases. This function takes two tensors as arguments - but there are no tensors being passed across the process boundary, and I'm decoding, not training, so the model should be entirely read-only. It seems like it has to be a problem with locking or competition for the shared model resource, but I don't understand at all why that would be the case. And I'm not a torch programmer, so my understanding of what's happening is limited.
Any pointers or suggestions would be appreciated.
It turns out I wasn't comparing exactly the right things. This thread goes into all the detail: https://github.com/allenai/allennlp/discussions/5471. Briefly: because PyTorch can use additional threads under the hood, my baseline test without multiprocessing wasn't taxing the machine enough. I had to run four independent non-multiprocessing instances in parallel to see the penalty, and in that case the total processing time was essentially the same as one multiprocessing run with four subprocesses.
I have a for loop in my code, and in each iteration I augment some processed data and train my TF model again. After a while, my code takes longer than expected to run. I suspect CPU usage, since I'm running on multiple cores. How can I fix that?
Currently, I can "safely" interrupt Keras neural net training via:
early stopping callback (once accuracy improvements are small)
stopping the execution and restarting from the last saved model
However, I'm looking for a way to have a more robust way to interrupt the training.
Is there a way to create a local dummy (flag) file and check its existence in a callback after each epoch ends? How can this be implemented? Is there another way to interrupt training on a single CPU/GPU architecture, running in PyCharm (Windows 10 x64, Python 3.6, Anaconda 3, Keras 2.1.2, TensorFlow 1.4)?
I suppose I could start in PyCharm's debug mode and pause execution whenever needed, but the debugger is slow, and I'm not sure the pause would propagate through Keras down to the underlying TensorFlow.
You can extend the keras.callbacks.Callback class to create your own callback with your own logic:
You can use code similar to the EarlyStopping callback:
https://github.com/keras-team/keras/blob/master/keras/callbacks.py#L429
You add a listener for the epoch-end event:
from keras.callbacks import Callback

class StopTrainingCallback(Callback):
    def on_epoch_end(self, epoch, logs=None):
        if you_are_tired_of_training_condition:  # replace with your own stop condition
            self.stopped_epoch = epoch
            self.model.stop_training = True
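To answer the flag-file idea directly: yes, the same epoch-end hook can poll for a sentinel file. The sketch below runs standalone, so the class does not extend keras.callbacks.Callback and a stub stands in for the model (the class and file names are mine); in real use, you extend the Callback base class, pass the callback to fit(), and Keras sets self.model for you.

```python
import os

class StopOnFlagFile:
    """Keras-style callback: in real use, extend keras.callbacks.Callback."""
    def __init__(self, flag_path="stop_training.flag"):
        self.flag_path = flag_path
        self.model = None  # Keras assigns this before training starts

    def on_epoch_end(self, epoch, logs=None):
        # If the sentinel file exists, request a stop; Keras honours
        # stop_training at the next epoch boundary.
        if os.path.exists(self.flag_path):
            self.model.stop_training = True

# Demonstration with a stub model object.
class _StubModel:
    stop_training = False

cb = StopOnFlagFile("stop_training.flag")
cb.model = _StubModel()
open("stop_training.flag", "w").close()   # create the sentinel file
cb.on_epoch_end(epoch=0)
print(cb.model.stop_training)  # True
os.remove("stop_training.flag")
```

Deleting the flag file (or never creating it) lets training run to completion; touching it from a shell interrupts cleanly after the current epoch, which is more robust than pausing in the debugger.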
I am using Keras/TensorFlow (GPU) to create a time series forecasting model. I have hundreds of time series and want to train a network for each of them.
Running a few time series in a row is fine, but once I run hundreds or thousands, the training time of each model appears to increase slowly (but surely). Is there a simple reason for this?
Below is code to reproduce the issue (note that it could take a while to run).
https://gist.github.com/mannsi/c5666c4b786c35c3443beea6d13a32fe
On my machine the first iteration takes 10s, iteration #250 takes 16s and iteration #500 takes 25s.
I am new to Neural Networks and Keras/TF so maybe this is totally normal but I did not factor this in when doing my back-of-the-envelope time calculations.
System info:
python 3.5
keras (1.2.2)
tensorflow-gpu (1.0.0)
EDIT: I tested the same code on a TensorFlow CPU backend and I see the exact same behavior there.
It's possible that overhead is building up in the computation graph over the iterations. Use the Keras backend function K.clear_session() to reset the underlying TensorFlow session between runs.
Could it be that your GPU warms up, and its power is then throttled to reduce temperature?
How long does the first iteration take if you relaunch it after having done many iterations?
Since your model architecture doesn't change, you only need to compile the model once; then you can fit it in a loop.
You currently instantiate and compile a model on every iteration, which is why your memory consumption grows continuously.
What is the reason for this issue in joblib?
'Multiprocessing backed parallel loops cannot be nested below threads, setting n_jobs=1'
What should I do to avoid it?
Actually, I need to implement an XML-RPC server which runs heavy computations in a background thread and reports current progress through polling from a UI client. It uses scikit-learn, which is based on joblib.
P.S.:
I've simply changed the name of the thread to "MainThread" to avoid the warning, and everything seems to work fine (it runs in parallel as expected, without issues). Could this workaround cause problems in the future?
I had the same warning while doing predictions with sklearn within a thread, using a model I loaded and which was fitted with n_jobs > 1. It appears when you pickle a model it is saved with its parameters, including n_jobs.
To avoid the warning (and potential serialization cost), set n_jobs to 1 when pickling your models:
clf = joblib.load(model_filename).set_params(n_jobs=1)
This seems to be due to this issue in the joblib library. At the time of writing it appears to be fixed, but not yet released. As noted in the question, a dirty fix would be to rename the current thread back to "MainThread":
threading.current_thread().name = 'MainThread'
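Applied to the XML-RPC-server scenario from the question, the rename goes at the top of the background thread, before any scikit-learn/joblib call. A minimal self-contained demonstration (the function and thread names are mine; the real work would replace the comment):

```python
import threading

def silence_joblib_warning():
    # Older joblib only enables multiprocessing from a thread named
    # "MainThread"; renaming the current thread sidesteps that check.
    threading.current_thread().name = "MainThread"

def _worker():
    silence_joblib_warning()
    # ...run the heavy scikit-learn/joblib computation here...
    return threading.current_thread().name

t_result = []
t = threading.Thread(target=lambda: t_result.append(_worker()),
                     name="XMLRPCWorker")
t.start()
t.join()
print(t_result)  # ["MainThread"]
```

Note this only fools joblib's name check; once you upgrade to a joblib release containing the fix, the rename can be dropped.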