Change the threshold in "IBM Watson Visual Recognition service custom classifier" - node.js

I created a custom classifier by using this demo. Although I trained it on my two-class dataset, when testing (trying the classifier) on some images (test images, not present in the training set) I get the error "The score for this image is not above the threshold of 0.5 based on the training data provided". How can I change this threshold in the scripts (JavaScript)?
For example, I would be fine with getting classification data for images with scores above 0.2.

To help you, first I recommend reading the best practices written by an IBM professional, so you know how to get better results and accuracy with Visual Recognition.
As for your question, this error comes from a condition inside the project by the IBM developers; you can simply change the value at line #L270:
// Change this value
params.threshold = 0.5; // so the classifier only shows images with a confidence level of 0.5 or higher
Guidelines for training your Visual Recognition Classifiers.
API Reference for Visual Recognition using Node.js
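If you call the service from your own Node.js code rather than through the demo app, the same threshold can be passed directly to the classify call. Below is a minimal sketch assuming the watson-developer-cloud Node.js SDK; the API key, classifier ID and image path are placeholders you would replace with your own values.

// Minimal sketch: classify an image against a custom classifier with a lower threshold.
// Assumes the watson-developer-cloud Node.js SDK; credentials and IDs are placeholders.
var VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');
var fs = require('fs');

var visualRecognition = new VisualRecognitionV3({
  api_key: '{your-api-key}',
  version_date: '2016-05-20'
});

var params = {
  images_file: fs.createReadStream('./test-image.jpg'),
  classifier_ids: ['{your_classifier_id}'],
  threshold: 0.2 // return classes scoring 0.2 or higher instead of the default 0.5
};

visualRecognition.classify(params, function(err, response) {
  if (err) {
    console.log(err);
  } else {
    console.log(JSON.stringify(response, null, 2));
  }
});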

Related

Does the pattern of sentence edits affect the performance of a sentence correction seq2seq model

I am trying to train a seq2seq model using the T5 transformer for a sentence correction task. I am using a StackOverflow dataset for the training and evaluation process. The dataset contains original and edited sentences extracted from StackOverflow posts.
Below are some samples:
Original: is it possible to print all reudctions in Haskell - using WinHugs
Edited:   Is it possible to print all reductions in Haskell - using WinHugs

Original: How do I pass a String into a fucntion in an NVelocty Template
Edited:   How do I pass a String into a function in an NVelocity Template

Original: Caconical term for something that can only occur once
Edited:   Canonical term for something that can only occur once
When trained on samples that have high similarity (determined using the longest common subsequence) and whose edits are spelling corrections, verb changes, and preposition changes, the model predicts good recommendations. But when I use samples that do not have high similarity, the model does not predict very accurate results. Below are some samples:
Original: For what do API providers use API keys, such as the UPS API Key
Edited:   Why do some API providers require an API key

Original: NET - Programmatic Cell Edit
Edited:   NET - working with GridView Programmatically

Original: How to use http api (pseudo REST) in C#
Edited:   How to fire a GET request over a pseudo REST service in C#
I am using simpletransformers to train a T5 model based on t5-base.
Can anyone confirm whether it is a limitation of seq2seq models that they cannot learn much when the input and target sequences do not follow a pattern?

How to increase number of tested images in MS Azure Custom Vision?

I've created a project in Azure Custom Vision (Object Detection, General Compact, Tier S0). I uploaded about 70 images, 35 images per tag, then started training my model.
I checked the tags in the Iterations screen after training (Quick Training) was done. To my surprise, only 7 images were tested per tag.
I tried running Advanced Training for 1 hour. Nothing changed; only 7 images per tag were tested.
Am I doing something wrong?
Is there a way to use all images for object detection training so it can give me a better accuracy?
Thanks,
+ftex
What you are seeing in the test interface after training is only a portion of the total images, because these metrics are calculated using k-fold cross-validation.
You are not doing anything wrong. It would not be logical to test all the images, because that would mean testing with your training images.
To get better accuracy there is no magic: add more images relevant to your use case.
https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/getting-started-build-a-classifier#evaluate-the-classifier
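As a rough illustration of why only a subset of your images shows up as tested, here is a small sketch of the k-fold arithmetic. The fold count of 5 is an assumption chosen only because it matches the numbers in the question; the actual split used by the service is not documented here.

// Rough illustration (not the Custom Vision internals): with k-fold cross-validation,
// each fold holds out imagesPerTag / k images for evaluation and trains on the rest.
var imagesPerTag = 35; // from the question
var k = 5;             // assumed fold count, used only to match the observed numbers

var heldOutPerFold = imagesPerTag / k;
console.log('Images evaluated per tag in one fold:', heldOutPerFold); // 7, as seen in the Iterations screen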

The data set being scored must contain all features used during training, missing feature(s)

I built a model in Azure ML, then published this model as a web service. I would like to customize the input fields of the web service by updating the inputs needed for the model prediction web service.
The model has been trained on a set of features to predict a price value on a given date. I want the customer to provide only a date to predict the price, without needing to enter the feature values that I supplied when I trained the model.
This is the error message I get when I customize the web service inputs by removing the unneeded columns in the predictive experiment (by adding a Select Columns module before the Score Model module):
Error 1000: AFx Library library exception: table: The data set being scored must contain all features used during training, missing feature(s).
How would I fix this issue?
I had this same problem, with the below error.
AFx Library library exception: table: The data set being scored must contain all features used during training, missing feature(s).
This happened when I changed my classification algorithm to a regression algorithm in the same project. I got it cleared by creating a new project with the same steps, and everything worked perfectly fine.
I think the problem is that when we change the type of algorithm, ML Studio gets confused.
The Score Model module needs the same input features that were used to train the model. That's a basic property of machine learning algorithms.
Could you clarify where the feature values would come from, if not from the customer?
-Roope
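One practical consequence: whatever the customer enters in your front end, the request your own service layer sends to the scoring endpoint still has to carry every column the model was trained on, so you can fill the columns the customer does not provide with default values. Below is a hedged sketch assuming the classic Azure ML request/response JSON format; the host, workspace ID, service ID, API key and column names are placeholders for illustration only.

// Sketch: call a classic Azure ML Studio web service, supplying ALL training columns.
// Endpoint, API key and column names are placeholders; replace them with your own.
var https = require('https');

function scorePrice(dateValue, callback) {
  var body = JSON.stringify({
    Inputs: {
      input1: {
        ColumnNames: ['date', 'feature1', 'feature2', 'price'], // every column used during training
        Values: [[dateValue, '0', '0', '0']] // the customer supplies the date; the rest are defaulted
      }
    },
    GlobalParameters: {}
  });

  var req = https.request({
    hostname: 'ussouthcentral.services.azureml.net', // placeholder host/region
    path: '/workspaces/{workspace-id}/services/{service-id}/execute?api-version=2.0&details=true',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer {api-key}'
    }
  }, function(res) {
    var data = '';
    res.on('data', function(chunk) { data += chunk; });
    res.on('end', function() { callback(null, JSON.parse(data)); });
  });

  req.on('error', callback);
  req.write(body);
  req.end();
}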

Not getting proper results in model training while using Azure Machine Learning Studio with the Two-Class Bayes Point Machine algorithm

We are using Azure Machine Learning Studio to build a trained model, and for that we have used the Two-Class Bayes Point Machine algorithm.
For sample data, we imported a .CSV file that contains columns such as Tweets and Label.
After deploying the web service, we got improper output.
We want our algorithm to predict the Label as 0 or 1 on the basis of the different types of tweets that are already stored in the dataset.
When testing it with tweets that are in the dataset it gives the proper result, but the problem occurs when testing it with other tweets (that are not in the dataset).
You can view our experiment over here:
Experiment
Are you planning to do binary classification based on the textual data in the tweets? If so, you should try feature hashing before the classification step; a small sketch of the idea follows.
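For context on what feature hashing does, here is a minimal sketch of the hashing trick in JavaScript. It is only an illustration of the idea (free text becomes a fixed-length numeric vector), not the Azure ML Feature Hashing module itself; the bucket count of 16 is arbitrary.

// Minimal illustration of the hashing trick: map each token of a tweet to one of
// nBuckets numeric features, so free text becomes a fixed-length numeric vector.
function hashToken(token, nBuckets) {
  var h = 0;
  for (var i = 0; i < token.length; i++) {
    h = (h * 31 + token.charCodeAt(i)) | 0; // simple 32-bit rolling hash
  }
  return Math.abs(h) % nBuckets;
}

function hashFeatures(text, nBuckets) {
  var vector = new Array(nBuckets).fill(0);
  text.toLowerCase().split(/\W+/).filter(Boolean).forEach(function(token) {
    vector[hashToken(token, nBuckets)] += 1; // count the tokens falling into each bucket
  });
  return vector;
}

// Example: turn a tweet into a 16-dimensional numeric feature vector.
console.log(hashFeatures('This product is great, love it!', 16));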

How to manipulate Azure ML recommendations in a published web service by changing the model's threshold

The model
I have designed, trained and published an Azure ML experiment (using a two-class decision jungle) as a web service and can call it fine, and it returns the expected result (based on a threshold of 0.5).
The problem
However I want to manipulate the result returned to provide a result closer to my desired accuracy, precision and recall which don't happen to coincide with the default threshold of 0.5. I can easily do this via the ML studio by visualizing the evaluation results and moving the threshold slider from the center (0.5) to the left or right.
I have googled and read many Azure ML documents and tutorials but so far cannot work out how to alter the threshold and return a different scored probability in my trained and published experiment.
The Score Model module also returns the result with scored probabilities. I think you can add a simple math operation to compare the scored probability and add a new column, or write a simple R script; for example, use the "Apply Math Operation" module to generate output based on the probability exceeding 0.6 instead of 0.5.
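If you would rather leave the experiment itself unchanged, the same effect can be achieved on the caller's side: read the scored probability from the web service response and derive your own label with whatever cutoff you want. A minimal sketch follows; the 0.6 cutoff and the "Scored Probabilities" column name are assumptions based on the defaults described above, so check them against your own output schema.

// Sketch: derive a label from the scored probability with a custom cutoff,
// instead of the default 0.5 used when the model assigns Scored Labels.
function applyThreshold(scoredProbability, threshold) {
  return scoredProbability >= threshold ? 1 : 0;
}

// Example: a response row with a scored probability of 0.55.
var row = { 'Scored Probabilities': 0.55 };
console.log(applyThreshold(row['Scored Probabilities'], 0.5)); // 1 with the default threshold
console.log(applyThreshold(row['Scored Probabilities'], 0.6)); // 0 with the stricter threshold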
