import os
import cv2

# OpenCV 3.x contrib face module
recognizer = cv2.face.createLBPHFaceRecognizer()
if os.path.exists("recognizer\\trainingData_LBPHF.yml"):
    recognizer.load("recognizer\\trainingData_LBPHF.yml")
IDs, faces = retrainer(directory)   # loads the new face images and their labels
recognizer.train(faces, IDs)
When I run this code, my recognizer retrains on the new pictures but loses everything that had been learned before. Is there a way to train my recognizer on new, additional pictures without retraining on the old ones, in order to speed up processing?
You need to call update:
recognizer.update(faces, IDs)
This method updates a (probably trained) FaceRecognizer, but only if the algorithm supports it. The Local Binary Patterns Histograms (LBPH) recognizer (see createLBPHFaceRecognizer) can be updated.
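For completeness, here is a minimal sketch of how the pieces fit together, reusing the same OpenCV 3.x cv2.face API and the asker's own retrainer() helper from the question:

import os
import cv2

recognizer = cv2.face.createLBPHFaceRecognizer()
model_path = "recognizer\\trainingData_LBPHF.yml"

IDs, faces = retrainer(directory)        # the new images and labels only

if os.path.exists(model_path):
    recognizer.load(model_path)          # restore the previously learned model
    recognizer.update(faces, IDs)        # incremental: keeps the old data, adds the new
else:
    recognizer.train(faces, IDs)         # first run: nothing to update yet
recognizer.save(model_path)              # persist the combined model for next time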
I just finished training a custom Azure Translator model with a set of 10,000 sentences. I now have the option to review the results and test the data. While I already get a good result score, I would like to continue training the same model with additional datasets before publishing. I can't find any information about this in the documentation.
The only remotely close option I can see is to duplicate the first model and add the new datasets, but this would create a new model rather than advance the original one.
Once the project is created, you can train different models on different datasets. However, once a dataset has been uploaded and a model has been trained, you cannot modify the contents of the dataset or update the trained model.
https://learn.microsoft.com/en-us/azure/cognitive-services/translator/custom-translator/quickstart-build-deploy-custom-model
The above document can help you.
We use the Azure Custom Vision service to detect products on store shelves. This works pretty well, but we can't understand why each subsequent training iteration makes the predictions worse. Does the service create a model from scratch in each iteration, or does it retrain the same model?
Whether you're using the classifier or the object detector, each time you train, you create a new iteration with its own updated performance metrics.
That means that each iteration in a project is independent, built on different sets of training images.
To maintain high performance, don't delete the existing images from the previous iteration before retraining: by retraining you're essentially creating a new model based on the images you currently have tagged.
It is also stated in the documentation here for the classifier, and here for the object detector.
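As a rough illustration (not an official sample), using the azure-cognitiveservices-vision-customvision training SDK: every call to train_project creates a brand-new iteration trained on whatever images are currently tagged in the project, which is why removing old images changes what the next iteration learns. The endpoint, key and project ID below are placeholders.

from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient

credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient("<endpoint>", credentials)
project_id = "<project-id>"

# Each call produces a new, independent iteration; it does not continue the previous one.
iteration = trainer.train_project(project_id)
print(iteration.id, iteration.status)

# All iterations live side by side in the project, each with its own metrics.
for it in trainer.get_iterations(project_id):
    print(it.name, it.status)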
I have been using the Form Recognizer service and the form labeling tool, with version 2 of the API, to train models to read a set of forms. However, I need to handle more than one form (PDF) layout, without knowing in advance which layout is being uploaded.
Is it as simple as labeling the different layouts within the same model, or is there another way to identify which model should be used with which form?
Any help greatly appreciated.
This is a common request. For now, if the two form styles are not that different, you could try training one model and see whether it can correctly extract the key/value pairs. Another option is to train two different models, one per layout, and write a simple classification program to decide which model to use (see the sketch below this answer).
The Form Recognizer team is working on a feature that lets users simply submit the document and have the service pick the most appropriate model to analyze it. Please stay tuned for our updates.
thanks
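A minimal sketch of the "simple classification program" idea mentioned above, assuming the azure-ai-formrecognizer Python SDK (which targets the v2 API); the endpoint, key, model IDs and the keyword used to tell the layouts apart are all placeholders you would replace with your own:

from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import FormRecognizerClient

client = FormRecognizerClient("<endpoint>", AzureKeyCredential("<key>"))

MODEL_LAYOUT_A = "<model-id-for-layout-A>"
MODEL_LAYOUT_B = "<model-id-for-layout-B>"

def pick_model(pdf_bytes):
    # Run the generic layout analysis to get the raw text, then use a simple
    # keyword rule (placeholder) to decide which layout this form looks like.
    pages = client.begin_recognize_content(pdf_bytes).result()
    text = " ".join(line.text for page in pages for line in page.lines).lower()
    return MODEL_LAYOUT_A if "invoice" in text else MODEL_LAYOUT_B

with open("form.pdf", "rb") as f:
    pdf_bytes = f.read()

result = client.begin_recognize_custom_forms(model_id=pick_model(pdf_bytes), form=pdf_bytes).result()
for recognized in result:
    for name, field in recognized.fields.items():
        print(name, "=", field.value)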
Is it possible to add new face features to a trained face recognition model without retraining it on the previous faces?
I am currently using the FaceNet architecture.
Take a look at Siamese Neural Networks.
With that approach you don't need to retrain the model at all.
Basically, you train a model to generate an embedding (a vector) that maps similar images close together and different ones far apart.
Once this model is trained, when you add a new face its embedding will be far from the other identities but close to the other samples of the same person.
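To make the idea concrete, here is a rough sketch (not FaceNet-specific code): with a fixed embedding model, enrolling a new person means storing embeddings and matching by nearest neighbour, so the network itself is never retrained. embed() below is a placeholder for whatever pretrained embedding network you actually use.

import numpy as np

def embed(face_image):
    # Placeholder: in practice, run your pretrained FaceNet-style network here
    # and return its embedding vector for the aligned face image.
    return np.asarray(face_image, dtype="float32").ravel()[:128]

gallery = {}  # person name -> list of embedding vectors

def enroll(name, face_images):
    # Adding a new identity touches only this lookup table, not the model.
    gallery.setdefault(name, []).extend(embed(img) for img in face_images)

def identify(face_image, threshold=0.9):
    # Nearest-neighbour search in embedding space (Euclidean distance).
    query = embed(face_image)
    best_name, best_dist = None, float("inf")
    for name, embeddings in gallery.items():
        for e in embeddings:
            dist = np.linalg.norm(query - e)
            if dist < best_dist:
                best_name, best_dist = name, dist
    return best_name if best_dist < threshold else "unknown"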
In theory, going by the mathematics behind machine learning models, you just need to run another training iteration with only the new data...
but in practice these models, especially the sophisticated ones, rely on many training iterations and on various techniques such as shuffling and noise reduction.
A good approach can be to continue training the model from its previous state on a subset of the data that includes the new data, for a couple of iterations.
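For example, a rough sketch with Keras/TensorFlow (the file name, shapes and data below are placeholders, not the asker's actual setup): resume from the saved weights and fine-tune for a couple of epochs on the new faces plus a small replay subset of the old data.

import numpy as np
import tensorflow as tf

# Placeholder data: in practice these come from your face pipeline.
new_faces = np.random.rand(32, 160, 160, 3).astype("float32")     # newly added identities
new_labels = np.random.randint(0, 10, size=32)
replay_faces = np.random.rand(32, 160, 160, 3).astype("float32")  # small sample of old data
replay_labels = np.random.randint(0, 10, size=32)

model = tf.keras.models.load_model("face_model.h5")  # resume from the previous state

x = np.concatenate([new_faces, replay_faces])
y = np.concatenate([new_labels, replay_labels])

model.fit(x, y, epochs=2, shuffle=True)  # just a couple of passes, not training from scratch
model.save("face_model.h5")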
I am wondering how, if it is possible at all, one might train a new SyntaxNet model that uses the training data from the original, out-of-the-box "ready to parse" model included on the GitHub page. What I want to do is add new training data to make a new model, but I don't want to create an entirely new, and therefore entirely distinct, model from the original Parsey McParseface. So my new model would be trained on the data the included model was trained on (Penn Treebank, OntoNotes, English Web Treebank), plus my new data. I don't have the money to buy the treebanks the original model is trained on from the LDC. Has anyone attempted this? Thanks very much.