How to train YOLOv7 on a single class (person) from the COCO dataset - PyTorch

I am working with YOLOv7's train.py file.
I want to use the COCO dataset but train on only one of its 80 classes: person.
Can I control this from train.py?
train.py has the option
parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
but I have no idea how to use it.
Also, the training log says:
tensorboard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
but the page shows nothing.

If you want to detect all 80 classes, you have to use detect.py. With train.py you train on your own dataset to detect your own objects.
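One way to build that person-only dataset is to filter YOLO-format COCO labels down to a single class before training. A minimal sketch, assuming labels/ holds one .txt file per image and that class 0 is person (as in the usual COCO-to-YOLO export); the paths are placeholders:
from pathlib import Path

SRC = Path("coco/labels/train2017")
DST = Path("coco_person/labels/train2017")
DST.mkdir(parents=True, exist_ok=True)

for label_file in SRC.glob("*.txt"):
    # keep only annotations whose class id is 0 (person)
    person_lines = [ln for ln in label_file.read_text().splitlines()
                    if ln.strip() and ln.split()[0] == "0"]
    if person_lines:  # skip images with no person annotations
        (DST / label_file.name).write_text("\n".join(person_lines) + "\n")
After filtering, point the dataset .yaml at the filtered copy with nc: 1 and names: ['person'].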
TensorBoard: first, install it with
$ pip install tensorboard
If you have trained with train.py, you can run TensorBoard with this command:
$ tensorboard --logdir=runs/train
and view the results at http://localhost:6006/

Related

I have created a custom NER using spaCy and I want to train it with additional data, but what should I change in the config.cfg file?

I have created a spaCy NER model for named entity recognition; it has tok2vec and ner as components in the pipeline. Now I want to add more data to it, so I am using the model-best directory, from which I can load my trained model for predictions. If I follow the documentation without changing anything in the config.cfg file, the newly created model-best has no information about its previously trained data.
! python -m spacy convert one.json ./ -t spacy
! python -m spacy init fill-config base_config.cfg config.cfg
! python -m spacy train config.cfg --output ./ --paths.train ./one.spacy --paths.dev ./one.spacy
After running these commands, two folders were created (model-best and model-last).
Now, to train with new data, I tried this:
import spacy
from spacy.tokens import DocBin
from tqdm import tqdm
import json

nlp = spacy.load('model-best')
f = open('two.json')
TRAIN_DATA = json.load(f)

db = DocBin()
for text, annot in tqdm(TRAIN_DATA['annotations']):
    doc = nlp.make_doc(text)
    ents = []
    for start, end, label in annot["entities"]:
        span = doc.char_span(start, end, label=label, alignment_mode="contract")
        if span is None:
            print("Skipping entity")
        else:
            ents.append(span)
    doc.ents = ents
    db.add(doc)
db.to_disk("./training_data.spacy")
! python -m spacy init fill-config base_config.cfg config.cfg
! python -m spacy train config.cfg --output ./ --paths.train ./training_data.spacy --paths.dev ./training_data.spacy
After running them, my model-best folder was replaced with a new one, and it can only recognise the new data now.
What changes should I make in my config.cfg in order to train it properly, so that it remembers both the old data and the new data?
To train on top of an existing model, you can define the source component in your base_config.cfg:
[components.ner]
source = "<path_to_model-best>"
component = "ner"
This information is available on the spaCy documentation website here:
https://spacy.io/usage/processing-pipelines#sourced-components
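With that sourced component in place, re-running the same commands as above should start the new training run from the existing model-best weights rather than from scratch:
! python -m spacy init fill-config base_config.cfg config.cfg
! python -m spacy train config.cfg --output ./ --paths.train ./training_data.spacy --paths.dev ./training_data.spacy
Note that sourcing only preserves the starting weights; to keep the old entities from degrading, it still helps to mix some of the original examples into training_data.spacy.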

Spacy ValueError: [E002] Can't find factory for 'relation_extractor' for language English (en)

I want to train a "relation extractor" component as in this tutorial. I have 3 .spacy files (train.spacy, dev.spacy, test.spacy).
I run:
python3 -m spacy init fill-config config.cfg config.cfg
followed by
python3 -m spacy train --output ./model config.cfg --paths.train train.spacy --paths.dev dev.spacy
Output:
ValueError: [E002] Can't find factory for 'relation_extractor' for language English (en). This usually happens when spaCy calls `nlp.create_pipe` with a custom component name that's not registered on the current language class. If you're using a Transformer, make sure to install 'spacy-transformers'. If you're using a custom component, make sure you've added the decorator `@Language.component` (for function components) or `@Language.factory` (for class components).
Available factories: attribute_ruler, tok2vec, merge_noun_chunks, merge_entities, merge_subtokens, token_splitter, doc_cleaner, parser, beam_parser, lemmatizer, trainable_lemmatizer, entity_linker, ner, beam_ner, entity_ruler, tagger, morphologizer, senter, sentencizer, textcat, spancat, future_entity_ruler, span_ruler, textcat_multilabel, en.lemmatizer
I have tried the two config files here but the output is the same.
To enable Transformers, I have installed spacy-transformers and downloaded en_core_web_trf via
python3 -m spacy download en_core_web_trf
A similar issue was mentioned on GitHub, but that solution is for a different context. Somebody raised the same issue on GitHub with no solution, and here too it was not solved.
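There is no confirmed fix here, but the error message itself describes the mechanism: the 'relation_extractor' factory must be registered and importable when the CLI runs. A hypothetical sketch of that registration (the tutorial's real component is a trainable pipe; this no-op stub only shows the mechanism, and rel_components.py is an assumed file name):
# rel_components.py (hypothetical module name)
from spacy.language import Language

@Language.factory("relation_extractor")
def create_relation_extractor(nlp, name):
    # The tutorial's real component is a trainable pipe; this stub
    # exists only to show how the factory name gets registered.
    def relation_extractor(doc):
        return doc
    return relation_extractor
spacy train accepts a --code argument that imports such a module before the config is resolved, e.g. python3 -m spacy train config.cfg --code rel_components.py --output ./model --paths.train train.spacy --paths.dev dev.spacy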

How to convert a TensorRT engine into an ONNX file

I have a TensorRT engine file, built on a Jetson NX2, but my ONNX file is missing. How can I convert the engine back to an ONNX file?
e.g. a.engine file -> a.onnx
Please give me a suggestion, thanks.
All I could find with a search engine were guides going from ONNX to a TensorRT engine.
You will need Python 3.7-3.10.
Install tf2onnx using pip:
pip install -U tf2onnx
Then use the following command. You will need to provide:
the path to your TensorFlow model (where the model is in saved-model format)
a name for the ONNX output file
python -m tf2onnx.convert --saved-model tensorflow-model-path --output model.onnx
The above command uses a default ONNX opset of 13. If you need a newer opset, or want to limit your model to an older opset, you can provide the --opset argument to the command.
python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 17 --output model.onnx
For checkpoint format:
python -m tf2onnx.convert --checkpoint tensorflow-model-meta-file-path --output model.onnx --inputs input0:0,input1:0 --outputs output0:0
Follow the official tf2onnx repository to learn more: https://github.com/onnx/tensorflow-onnx
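The same conversion is also available through tf2onnx's Python API; a sketch, assuming the saved model loads as a Keras model (tensorflow-model-path is the same placeholder as in the commands above):
import tensorflow as tf
import tf2onnx

# Load the saved model and describe its input signature
model = tf.keras.models.load_model("tensorflow-model-path")
spec = (tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name="input"),)

# Convert and write the ONNX file in one call
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="model.onnx")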

Calling TensorBoard from Keras to inspect diagnostics

I am trying to use TensorBoard with Keras, following the instructions in this short tutorial: http://fizzylogic.nl/2017/05/08/monitor-progress-of-your-keras-based-neural-network-using-tensorboard/
My code is:
from time import time
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.callbacks import TensorBoard

# (model, X and y are defined earlier in the script)
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
tensorboard = TensorBoard(log_dir="logs/{}".format(time()))

# Fit the model
model.fit(X, y, validation_split=0.3, epochs=30, callbacks=[tensorboard])
The code executes without any problem.
Following the advice of the tutorial:
Monitoring progress: Now that you have a TensorBoard instance hooked up, you can start to monitor the program by executing the following command in a separate terminal:
tensorboard --logdir=logs/
I open a terminal and execute the aforementioned command. This is what I get:
(base) C:\Users\Alienware\Documents>tensorboard --logdir=logs/
2019-01-07 22:02:56.109894: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
TensorBoard 1.8.0 at http://ALIENWARE-PC:6006 (Press CTRL+C to quit)
W0107 22:04:55.763794 Thread-1 application.py:274] path /[[_dataImageSrc]] not found, sending 404
W0107 22:04:55.779416 Thread-1 application.py:274] path /[[_imageURL]] not found, sending 404
I then open the webpage, but no data is shown.
How can I sort this out?
TensorBoard needs to be pointed at the exact folder the data is saved into. You format the directory with log_dir="logs/{}".format(time()), i.e. you append the current timestamp, which is obviously not the same directory as logs/. By default the logs directory is ./logs (keras.callbacks.TensorBoard(log_dir='./logs', ...)).
You need to either remove the time formatting or start TensorBoard on the correct directory.
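A minimal sketch of that fix, based on the code in the question: drop the timestamp so the callback writes to the directory TensorBoard is watching.
from keras.callbacks import TensorBoard

# Write logs directly to logs/ instead of a timestamped subdirectory
tensorboard = TensorBoard(log_dir="logs")
# then, in a separate terminal:
#   tensorboard --logdir=logs/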

How to use DenseNet in Keras

I notice DenseNet has been added to Keras (https://github.com/keras-team/keras/tree/master/keras/applications) and I want to apply it in my project, but when I tried to import it in Jupyter (Anaconda), I got an error saying:
module 'keras.applications' has no attribute 'densenet'
It seems like DenseNet has not been incorporated into the current version of Keras.
Any idea how I can add it myself?
DenseNet was added in Keras version 2.1.3. What version of Keras are you running?
Have you tried updating Keras with pip install keras --upgrade since January?
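For reference, once Keras is at 2.1.3 or newer, the import looks like this (a sketch; the ImageNet weights download on first use):
import keras
print(keras.__version__)  # should be >= 2.1.3

from keras.applications.densenet import DenseNet121
model = DenseNet121(weights='imagenet')
model.summary()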
