I have trained Kaldi models (tri1b ... tri3b) and I am getting WERs. I have also successfully installed sclite inside kaldi/tools.
I have read through the few pages of sclite documentation available. The only useful information I have been able to gather is that I need a ref.txt and a hyp.txt.
Can anyone please guide me step by step on how to run the sclite tool?
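One practical detail: sclite's "trn" transcript format puts the words of each utterance first and the utterance ID last, in parentheses, while Kaldi's per-utterance text files put the ID first. A minimal conversion sketch, assuming Kaldi-style "utt-id word word ..." lines (the file names and the sample invocation in the comment are illustrative; check sclite's own usage message for the exact flags):

```python
def kaldi_to_trn(kaldi_lines):
    """Convert Kaldi 'utt-id w1 w2 ...' lines into sclite trn lines
    of the form 'w1 w2 ... (utt-id)'. Empty lines are skipped."""
    trn = []
    for line in kaldi_lines:
        parts = line.strip().split()
        if not parts:
            continue
        utt_id, words = parts[0], parts[1:]
        trn.append("{} ({})".format(" ".join(words), utt_id))
    return trn

# After writing ref.trn and hyp.trn this way, a typical scoring call is:
#   sclite -r ref.trn trn -h hyp.trn trn -i rm -o all stdout
print(kaldi_to_trn(["utt1 hello world"]))  # ['hello world (utt1)']
```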
I am new to ML and W&B, and I am trying to use W&B to do a hyperparameter sweep. I created a few sweeps and when I run them I get a bunch of new runs in my project (as I would expect):
Image: New runs being created
However, all of the new runs say "no metrics logged yet" (Image); instead, all of their metrics are going into a single run (the one with the green dot in the photo above). This makes the sweep unusable, of course, since the metrics, images, and graph data for many different runs are all being crammed into one run.
Does anyone have experience with W&B? I feel like this should be relatively straightforward to solve - probably something in the W&B config that I need to change.
Any help would be appreciated. I didn't give too many details because I am hoping this is relatively straightforward, but if there are any specific questions I'd be happy to provide more info. The basics:
Using Google Colab for training
Project is a PyTorch-YOLOv3 object detection model that is based on this: https://github.com/ultralytics/yolov3
Thanks! 😊
Update: I think I figured it out.
I was using the train.py code from the repository I linked in the question, and part of that code specifies the id of the run (used for resuming).
I removed the part where it specifies the ID, and it is now working :)
Old code:
wandb_run = wandb.init(config=opt, resume="allow",
                       project='YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem,
                       name=save_dir.stem,
                       id=ckpt.get('wandb_id') if 'ckpt' in locals() else None)
New code:
wandb_run = wandb.init(config=opt, resume="allow",
                       project='YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem,
                       name=save_dir.stem)
I am trying to use some pre-trained models from the Intel pre-trained model zoo; the site is at https://docs.openvinotoolkit.org/latest/_models_intel_index.html. Is there a specific command for downloading these models on a Linux system?
downloader.py (the model downloader) downloads model files from online sources and, if necessary, patches them to make them more usable with the Model Optimizer.
USAGE
The basic usage is to run the script like this:
./downloader.py --all
This will download all models into a directory tree rooted in the current directory. To download into a different directory, use the -o/--output_dir option:
./downloader.py --all --output_dir my/download/directory
The --all option can be replaced with other filter options to download only a subset of models. See the "Shared options" section.
You may use the --precisions flag to specify a comma-separated list of weight precisions to download.
./downloader.py --name face-detection-retail-0004 --precisions FP16,INT8
By default, the script will attempt each download only once. You can use the --num_attempts option to change that and increase the robustness of the download process:
./downloader.py --all --num_attempts 5 # attempt each download five times
You can use the --cache_dir option to make the script use the specified directory as a cache. The script will place a copy of each downloaded file in the cache, or, if it is already there, retrieve it from the cache instead of downloading it again.
./downloader.py --all --cache_dir my/cache/directory
The cache format is intended to remain compatible in future Open Model Zoo versions, so you can use a cache to avoid redownloading most files when updating Open Model Zoo.
By default, the script outputs progress information as unstructured, human-readable text. If you want to consume progress information programmatically, use the --progress_format option:
./downloader.py --all --progress_format=json
When this option is set to json, the script's standard output is replaced by a machine-readable progress report, whose format is documented in the "JSON progress report format" section. This option does not affect errors and warnings, which will still be printed to the standard error stream in a human-readable format.
You can also set this option to text to explicitly request the default text format.
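If you do consume the progress output programmatically, a minimal parsing sketch follows, assuming each stdout line is a single JSON object (the actual event schema is documented in the "JSON progress report format" section of the downloader README; the "$type" and "model" fields in the sample line are assumptions, not guaranteed field names):

```python
import json

def parse_progress(stdout_lines):
    """Parse downloader --progress_format=json output, treating each
    non-empty line as one JSON event. Non-JSON lines are skipped."""
    events = []
    for line in stdout_lines:
        line = line.strip()
        if not line:
            continue
        try:
            events.append(json.loads(line))
        except ValueError:
            continue  # not a JSON event line
    return events

# Hypothetical sample line (see the downloader README for the real schema):
sample = ['{"$type": "model_download_begin", "model": "face-detection-retail-0004"}']
print(parse_progress(sample)[0]["model"])  # face-detection-retail-0004
```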
See the "Shared options" section for information on other options accepted by the script.
More details about model downloader can be found from the following url: https://docs.openvinotoolkit.org/latest/_tools_downloader_README.html
As mentioned at https://docs.openvinotoolkit.org/latest/_models_intel_index.html, you can download the pretrained models using the Model Downloader (/deployment_tools/open_model_zoo/tools/downloader).
I have been working on a way to export models from Simulink to an FMU, which we will open source once we have a not-so-buggy version. A colleague and I finally got a working version and extracted our first FMU from just a zip.
As it turns out, we must be doing something wrong in the program. Our FMU works fine except for inputs: none of the inputs seem to be working. This has been tested multiple times, for example by feeding a constant to an output, which works. I have also tested working FMUs made with our other, non-open-source software, and they work; I just cannot find what is different between their FMUs and ours.
Here is a dropbox link if anyone wants the source of the test FMU. The model is simple: one input going straight to an output, and a second output fed from a constant. Currently, I can read the output fed by the constant, but not the one fed by the input; it is always 0. The dropbox folder includes the zip file generated from the model, the model.slx file, the generated FMU, and a folder containing everything inside the FMU. I know we aren't including all sources inside the FMU just yet, but I will fix that once we find out what the issue with our FMUs is. The sources exist inside the zip, so nothing is left out.
If anyone with FMI experience has had this issue before, or has a clue what we could be doing wrong, I would be very grateful if you could share your experience.
I fixed my issue by changing the FMUSDK fmuTemplate.c file to call functions and handle my own inputs and outputs instead.
I am trying to use the Sphinx4 library for speech recognition, but I cannot seem to figure out the correct combination of acoustic model-dictionary-language model. I have tried out various combinations and I get a different error every time.
I am trying to follow the tutorial at http://cmusphinx.sourceforge.net/wiki/tutorialsphinx4. I do not have a config.xml as I would if I were using ConfigurationManager instead of Configuration, because there is no perceivable way of passing the location of the config file to the Configuration itself (ConfigurationManager takes it as a constructor argument); that might be my problem right there. I just do not know how to point to one, and since the tutorial says "It is possible to configure low-level components of the application through XML file although you should do that ONLY IF you understand what is going on.", I assume having a config.xml file is not compulsory.
Combining the latest dictionary (7b - obtained from SourceForge) with the latest acoustic model (cmusphinx-en-us-5.2.tar.gz - from SF again) and the language model (cmusphinx-5.0-en-us.lm.gz - from SF again) results in a NullPointerException in startRecognition. The issue is similar to the problem here: sphinx-4 NullPointerException at startRecognition, but the link given in the answer no longer works. I obtained 0.7a from SF (since that is the dict the link seems to point at), but with that one I get an even earlier error during execution: Error loading word: ;;;. I also tried downloading the latest models and dictionary from the GitHub repo; that results in java.lang.IndexOutOfBoundsException: Index: 16128, Size: 16128.
Any help is much appreciated!
You need to use the latest code from GitHub:
http://github.com/cmusphinx/sphinx4
as described in the tutorial:
http://cmusphinx.sourceforge.net/wiki/tutorialsphinx4
Correct models (en-us) are already included, you should not replace anything. You should not configure any XML files, use samples as provided in the sources.
I am new to Stanford CoreNLP and trying to use it. I was able to run the sentiment analysis pipeline and the CoreNLP software, but when I try to run the evaluate tool, it asks for the model sentiment.ser.gz:
java edu.stanford.nlp.sentiment.Evaluate edu/stanford/nlp/models/sentiment/sentiment.ser.gz test.txt
I could not find the model in the software I downloaded from the Stanford site, or anywhere on the internet.
Can someone please advise whether I can create my own model, or where I can find one on the internet?
Appreciate your help.
The file stanford-corenlp-full-2014-01-04.zip contains another file called stanford-corenlp-3.3.1-models.jar. The latter file is a ZIP archive that contains the model file you are looking for.
CoreNLP is able to load the model file from the classpath if you add the stanford-corenlp-3.3.1-models.jar to your Java classpath, so you do not have to do anything.
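Since a .jar is an ordinary ZIP archive, you can also locate the model file inside it programmatically with Python's zipfile module. A small sketch (the helper name is ours, not part of CoreNLP, and the commented usage assumes the models jar sits in the current directory):

```python
import zipfile

def find_in_jar(jar_path, suffix):
    """Return the names of entries in a jar (a plain ZIP archive)
    whose path ends with the given suffix."""
    with zipfile.ZipFile(jar_path) as jar:
        return [name for name in jar.namelist() if name.endswith(suffix)]

# Usage (hypothetical location of the models jar):
# find_in_jar("stanford-corenlp-3.3.1-models.jar", "sentiment.ser.gz")
```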
It also appears the documentation on running the Evaluate tool is slightly outdated.
The correct call goes like this (tested with CoreNLP 3.3.1 and the test data downloaded from the sentiment homepage):
java -cp "*" edu.stanford.nlp.sentiment.Evaluate -model edu/stanford/nlp/models/sentiment/sentiment.ser.gz -treebank test.txt
The -cp "*" adds everything in the current directory to the classpath. Thus, the command above must be executed in the directory to which you extracted CoreNLP; otherwise it will not work.
If you do not add the -model and -treebank flags to the call, you'll get an error message like this:
Unknown argument test.txt
If you do not supply a treebank and a model at all, you get another error message:
Exception in thread "main" java.lang.NullPointerException
at java.io.File.<init>(File.java:277)