While trying to learn fairseq, I was following the tutorials on the website and implementing this one:
https://fairseq.readthedocs.io/en/latest/tutorial_simple_lstm.html#training-the-model
However, after following all the steps, when I try to train the model using the following:
! fairseq-train data-bin/iwslt14.tokenized.de-en \
    --arch tutorial_simple_lstm \
    --encoder-dropout 0.2 --decoder-dropout 0.2 \
    --optimizer adam --lr 0.005 --lr-shrink 0.5 \
    --max-tokens 12000
I receive an error:
`fairseq-train: error: argument --arch/-a: invalid choice: 'tutorial_simple_lstm' (choose from 'fconv', 'fconv_iwslt_de_en', 'fconv_wmt_en_ro', 'fconv_wmt_en_de', 'fconv_wmt_en_fr', 'fconv_lm', 'fconv_lm_dauphin_wikitext103', 'fconv_lm_dauphin_gbw', 'transformer', 'transformer_iwslt_de_en', 'transformer_wmt_en_de', 'transformer_vaswani_wmt_en_de_big', 'transformer_vaswani_wmt_en_fr_big', 'transformer_wmt_en_de_big', 'transformer_wmt_en_de_big_t2t', 'bart_large', 'bart_base', 'mbart_large', 'mbart_base', 'mbart_base_wmt20', 'nonautoregressive_transformer', 'nonautoregressive_transformer_wmt_en_de', 'nacrf_transformer', 'iterative_nonautoregressive_transformer', 'iterative_nonautoregressive_transformer_wmt_en_de', 'cmlm_transformer', 'cmlm_transformer_wmt_en_de', 'levenshtein_transformer', 'levenshtein_transformer_wmt_en_de', 'levenshtein_transformer_vaswani_wmt_en_de_big',....
Some additional info: I am using Google Colab, and I am writing the entire code up to the train step into a .py file and uploading it to the fairseq/models/... path, as per my interpretation of the instructions. I am following the exact tutorial in the link.
And, before running it on colab, I am installing fairseq using:
!git clone https://github.com/pytorch/fairseq
%cd fairseq
!pip install --editable ./
I think this error happens because the architecture choice created per the tutorial has not been registered properly with the command-line tools.
Can anyone please explain whether I need to do something else at any step?
I would be grateful for your input; for a beginner, such help from the community goes a long way.
It seems you didn't register the SimpleLSTMModel architecture, as shown below. Once the model is registered, you can use it with the existing command-line tools.
@register_model('simple_lstm')
class SimpleLSTMModel(FairseqEncoderDecoderModel):
    ...
Please note that copying .py files doesn't mean you have registered the model. To do so, the .py file that includes the above lines of code must actually be executed (fairseq imports the modules under its models directory at startup). Then you'll be able to run the training process using the existing command-line tools.
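For reference, here is a condensed sketch of the two registration hooks from the tutorial; it is the second one, register_model_architecture, that creates the tutorial_simple_lstm choice that --arch is complaining about (hyperparameter names follow the tutorial):

from fairseq.models import (
    FairseqEncoderDecoderModel,
    register_model,
    register_model_architecture,
)

@register_model('simple_lstm')
class SimpleLSTMModel(FairseqEncoderDecoderModel):
    ...  # encoder/decoder wiring as in the tutorial

# This decorator is what makes 'tutorial_simple_lstm' a valid --arch value.
@register_model_architecture('simple_lstm', 'tutorial_simple_lstm')
def tutorial_simple_lstm(args):
    # Defaults; getattr keeps any values already set on the command line.
    args.encoder_embed_dim = getattr(args, 'encoder_embed_dim', 256)
    args.encoder_hidden_dim = getattr(args, 'encoder_hidden_dim', 256)
    args.decoder_embed_dim = getattr(args, 'decoder_embed_dim', 256)
    args.decoder_hidden_dim = getattr(args, 'decoder_hidden_dim', 256)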
You should put your .py file into:
fairseq/fairseq/models
not fairseq/models
Related
I have successfully installed PyTorch from source on my Windows 11 machine (CPU only), using git clone --recursive https://github.com/pytorch/pytorch.git. But I cannot run a pretrained DL model: it gives an error on the line from caffe2.python import workspace, even though workspace exists at pytorch/caffe2/python/workspace. Please guide me if there is anything else I need to do.
Please enable BUILD_CAFFE2 while building PyTorch from source if not already done.
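For example, a minimal sketch of a rebuild on Windows with the flag enabled (assuming a cmd prompt in your existing pytorch checkout; the exact build invocation may differ on your setup):

:: BUILD_CAFFE2 is read by PyTorch's build scripts (assumption: your checkout supports it)
set BUILD_CAFFE2=1
python setup.py install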
I am trying to run a custom YOLO model on my own dataset on my local machine, following some reference code from the Kaggle platform. This is where I first encountered the wandb framework. I use the following to run the train.py file in my JupyterLab:
!WANDB_MODE="dryrun" python train.py --img 640 --batch 16 --epochs 30 --data D:/Anil/Shawn_Research/Iamge_DataSet/VinBigData/New_Direct/vinbigdata.yaml --weights yolov5x.pt --cache
This works fine on the Kaggle platform, but on my local machine it shows the following:
'WANDB_MODE' is not recognized as an internal or external command, operable program or batch file.
While reading similar threads, I realized I might be making a mistake related to the PATH or environment variables.
I even tried to find a solution in the official documentation but couldn't figure it out.
Thanks in advance.
Can you share the kernel that you're following? The official kernel has been updated, and you can now easily authenticate using the prompt. If you'd still like to stop wandb from syncing data to the cloud, you can do either of these:
Use an environment variable. In a kernel, execute this:
import os
os.environ['WANDB_MODE'] = 'offline'
Or run !wandb offline in a kernel cell.
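As for why the original command fails on Windows: the WANDB_MODE="dryrun" python ... prefix is POSIX shell syntax, which cmd.exe does not understand. In a Jupyter cell on Windows you can set the variable with the %env magic first, for example:

%env WANDB_MODE=dryrun
!python train.py --img 640 --batch 16 --epochs 30 --data D:/Anil/Shawn_Research/Iamge_DataSet/VinBigData/New_Direct/vinbigdata.yaml --weights yolov5x.pt --cache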
I wanted to build TensorFlow Serving from source, optimized for my CPU, and I followed the instructions given on the TensorFlow Serving page.
I felt like the instructions were incomplete. I was only able to find these three lines of instructions, and I have completed them:
git clone -b r2.3 https://github.com/tensorflow/serving.git
cd serving
tools/run_in_docker.sh -d tensorflow/serving:2.3.0-devel \
bazel build --config=nativeopt tensorflow_serving/...
So I'm wondering: what do I do after the last step? How can I install it on my Ubuntu machine so that I can access it from the terminal using a command like tensorflow_model_server --port=8500...?
After building TensorFlow Serving you can start testing it; a good starting point is the Serving Basics tutorial on the TensorFlow website.
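As for installing it: with the standard Serving build layout, the server binary ends up under bazel-bin (when building via run_in_docker.sh, check that the output is visible on the host, or copy it out of the container). A sketch, where my_model and the model path are placeholders:

sudo cp bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server /usr/local/bin/
tensorflow_model_server --port=8500 --rest_api_port=8501 \
    --model_name=my_model --model_base_path=/path/to/my_model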
I am using PocketSphinx for offline speech recognition. I used lmtool to get a language model and dictionary, but the language model has the extension .lm, while PocketSphinx requires a .lm.bin file. How can I convert it?
You just need to:
1. Download http://sourceforge.net/projects/cmusphinx/files/sphinxbase/0.8/sphinxbase-0.8-win32.zip
2. Unpack sphinxbase-0.8-win32.zip. The folder will be PATH\ (in my case that's C:\Users\carope9\Desktop\)
3. Move the .lm file to PATH\sphinxbase-0.8-win32\bin\Release
4. Open CMD and run cd PATH\sphinxbase-0.8-win32\bin\Release
5. Run sphinx_lm_convert -i YOUR_LM_FILE -o YOUR_LM.BIN_FILE
   Example: sphinx_lm_convert -i es_ES.lm -o es_ES.lm.bin
Your new .lm.bin file will be in PATH\sphinxbase-0.8-win32\bin\Release.
If you don't use Windows, you need to download the source files from http://sourceforge.net/projects/cmusphinx/files/sphinxbase/0.8/sphinxbase-0.8.tar.gz, but I don't know how to install them; I'm reading https://sourceforge.net/p/cmusphinx/discussion/help/thread/c67930c0/?limit=25 (a rough sketch follows after the note below).
P/D: According to some people this doesn't work; it worked for me, and I don't know how to correct their error. Hope it helps you.
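Regarding the non-Windows case: sphinxbase-0.8 ships a standard autotools build, so on Linux the whole conversion would look roughly like this (a sketch I haven't verified on that exact release):

tar xzf sphinxbase-0.8.tar.gz
cd sphinxbase-0.8
./configure
make
sudo make install
sphinx_lm_convert -i es_ES.lm -o es_ES.lm.bin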
I have run the tutorials and created my own neural network implementation in TensorFlow successfully. I then decided to go one step further and add my own op, because I needed to do some of my own preprocessing on the data. I followed the tutorial on the TensorFlow site for adding an op. I successfully built TensorFlow after writing my own C++ file. Then, when I try to use it from my code, I get:
'module' object has no attribute 'sec_since_midnight'
My code does get reflected in bazel-genfiles/tensorflow/python/ops/gen_user_ops.py, so the wrapper is generated correctly. It just looks like I can't see tensorflow/python/user_ops/user_ops.py, which is what imports that file.
Now, when I go through the testing of this module, I get the following odd behavior. It should not pass, because the expected vector I give it does not match what the result should be. But maybe the test never gets executed, despite saying PASSED?
INFO: Found 1 test target...
Target //tensorflow/python:sec_since_midnight_op_test up-to-date:
bazel-bin/tensorflow/python/sec_since_midnight_op_test
INFO: Elapsed time: 6.131s, Critical Path: 5.36s
//tensorflow/python:sec_since_midnight_op_test (1/0 cached) PASSED
Executed 0 out of 1 tests: 1 test passes.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
Hmmm. Well, I uninstalled TensorFlow and then reinstalled from what I had just built, and what I wrote was suddenly recognized. I have now seen this behavior twice in a row, where an uninstall is necessary. So to sum up, the steps after adding my own op are:
$ pip uninstall tensorflow
$ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
# To build with GPU support:
$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
# The name of the .whl file will depend on your platform.
$ pip install /tmp/tensorflow_pkg/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
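As for the test that "passed" without running: the "(1/0 cached)" and "Executed 0 out of 1 tests" lines mean Bazel replayed a cached test result rather than executing the test again. You can force a rerun with:

$ bazel test --cache_test_results=no //tensorflow/python:sec_since_midnight_op_test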