I'm working on a tool for TF Hub models and would like to test it on a representative sample of models. Is there a way to programmatically download a full list of TF Hub models along with their download numbers (to judge popularity), other than scraping the website?
Also, I've noticed that the download numbers for some models (e.g., https://tfhub.dev/tensorflow/efficientnet/lite0/classification/2) are missing, and on a day-to-day basis the download numbers sometimes go down. Is there any explanation for this?
There is no API to download all models or model stats. The bug regarding the decreasing download counts should be fixed by now.
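For what it's worth, individual models can still be fetched programmatically once you know their handles. A minimal sketch using the tensorflow_hub library (the handle below is the model from the question; assembling the full list of handles would still require scraping):

```python
import tensorflow_hub as hub

# Download and load one model by its tfhub.dev handle.
# This is the EfficientNet-Lite0 classifier mentioned in the question.
handle = "https://tfhub.dev/tensorflow/efficientnet/lite0/classification/2"
model = hub.load(handle)

# The returned object is a restored SavedModel; its signatures show
# what inputs and outputs the model exposes.
print(list(model.signatures.keys()))
```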
I'm developing a time series model to analyze the download traffic inside my organization. Now I'm trying to find a way to run this code automatically every day and create alerts whenever I find anomalies (high download volumes), so that it's not necessary to do it manually. I'd also like to create a dashboard or an easy way to visualize the plots I'm getting in this case.
It'd be something similar to workbooks but with a deeper analysis.
Thanks!
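Not an answer to the scheduling part, but as a rough illustration of the daily check described above, here is a minimal sketch. It assumes a pandas DataFrame with a `downloads` column indexed by date; the 3-standard-deviation threshold and the `send_alert` hook are placeholders:

```python
import pandas as pd

def find_anomalies(df: pd.DataFrame, window: int = 30, threshold: float = 3.0) -> pd.DataFrame:
    """Flag days whose download volume deviates strongly from a rolling baseline."""
    rolling = df["downloads"].rolling(window, min_periods=window)
    # z-score of each day's volume against the trailing window
    z = (df["downloads"] - rolling.mean()) / rolling.std()
    return df[z.abs() > threshold]

# df = pd.read_csv("download_traffic.csv", parse_dates=["date"], index_col="date")
# anomalies = find_anomalies(df)
# if not anomalies.empty:
#     send_alert(anomalies)  # hypothetical alerting hook
```

A job scheduler (cron, an Azure Function on a timer, etc.) can run a script like this once a day.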
I am using the Azure Face API to tell two different people's faces apart.
It was easy to use thanks to the good documentation on the Microsoft Azure API website.
But I get a different confidence score from my API call than from the demo on the website: https://azure.microsoft.com/en-us/services/cognitive-services/face/#demo
My code is simple.
First I get the face IDs of the two uploaded images using the face detection API.
Then I just send the two face IDs to the face verify API and get back a confidence score that represents the similarity of the two faces.
I always get a lower confidence score from my API call than from the demo on the Azure website, about 20% lower.
For example, I get 0.65123 from the API call while the demo gives a higher number like 0.85121.
This is the Azure Face API specification to verify two faces:
https://learn.microsoft.com/en-us/rest/api/cognitiveservices/face/face/verifyfacetoface
I have no clue why this happens. I don't resize or crop the images when uploading, and I use exactly the same images for this test.
Is it possible for MS Azure to manipulate the values for their own interests?
I wonder if anyone has had the same issue? If so, please share your experience with me.
Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to How to specify a detection model.
When you use the Face - Detect API, you can assign the model version with the detectionModel parameter. The available values are:
detection_01
detection_02
'detection_02' is the detection model released in May 2019, with improved accuracy compared to detection_01. The demo may be using detection_02 while the API defaults to detection_01, which could explain the difference you see.
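For reference, here is a minimal sketch of the detect-then-verify flow with detectionModel pinned explicitly, so both calls are compared on the same model. The endpoint, key, and image URLs are placeholders:

```python
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<your-subscription-key>",  # placeholder
    "Content-Type": "application/json",
}

def detect_face_id(image_url: str, detection_model: str = "detection_02") -> str:
    """Detect a face and return its faceId, pinning the detection model."""
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"returnFaceId": "true", "detectionModel": detection_model},
        headers=HEADERS,
        json={"url": image_url},
    )
    resp.raise_for_status()
    return resp.json()[0]["faceId"]

face_id_1 = detect_face_id("https://example.com/face1.jpg")  # placeholder URLs
face_id_2 = detect_face_id("https://example.com/face2.jpg")

# Verify whether the two detected faces belong to the same person.
verify = requests.post(
    f"{ENDPOINT}/face/v1.0/verify",
    headers=HEADERS,
    json={"faceId1": face_id_1, "faceId2": face_id_2},
)
print(verify.json())  # e.g. {"isIdentical": true, "confidence": ...}
```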
I'm using Custom Vision from Microsoft to classify images. Since the model will have to be retrained a few times a year, I would like to know if I can save the current version of the Azure Custom Vision model and retrain my new model on that same version. I guess Microsoft will try to improve the performance of its service over time, so the model used by this tool will probably change...
You can export the model after each run, but you cannot use an existing model as a starting point for another training run.
So yes, as it is a managed service, Microsoft might optimize or somehow change the algorithms to train in the background. It is on you to decide if that works for you. If not, a managed service like this is probably generally not something you should use, but instead train your own models entirely.
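For completeness, a minimal sketch of exporting the current iteration with the Custom Vision Python SDK, so you at least keep an artifact of each trained version (the endpoint, key, and IDs are placeholders):

```python
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

# Placeholder credentials and IDs.
credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient("<endpoint>", credentials)

project_id = "<project-id>"
iteration_id = "<iteration-id>"

# Request an export of this iteration, then poll for the download URI.
trainer.export_iteration(project_id, iteration_id, platform="TensorFlow")
for export in trainer.get_exports(project_id, iteration_id):
    print(export.platform, export.status, export.download_uri)
```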
We are researching Stream.io and Stream Framework.
We want to build a high-volume feed with many producers (sources) that includes highly personal messages (private messages?).
For building this feed and making it relevant for all subscribers, we will need to use our own ML model for the feed personalisation.
We found this as their solution for personalisation, but it might scale badly if we run and develop our own ML model:
https://go.getstream.io/knowledge/volumes-and-pricing/can-i
Questions:
1. How do we integrate/add our own ML model into a Getstream.io feed?
2. Should we move more toward the Stream Framework, and how do we connect our own ML model to that feed solution?
Thanks for pointing us in the right direction!
We have the ability to work with your team to incorporate ML models into Stream. The model has to be close to the data, otherwise lag becomes an issue. If you use the Stream Framework, you're working with Python and your own instance of Cassandra, which we stopped using because of performance and scalability issues. If you'd like to discuss options, you can reach out via a form on our site.
I am using Azure Cognitive Services, aka the CustomVision website, to create, train and test models. I understand the main goal of this site is to create an API which can be called to run your model in production. I should mention I am using this to do object detection.
There are times when you have to support running offline (meaning you don't have a connection to Azure, etc...). I believe Microsoft knows and understands this because they have a feature which allows you to export your model in many different formats (such as TensorFlow, ONNX, etc...).
The issue I am having is that when you export to TensorFlow, which is what I need, it will only download the frozen model graph (model.pb). However, there are times when you need either the .pbtxt file that goes along with the model or the config file. I know you can generate a pbtxt file, but for that you need the .config.
Also, there is little to no information about your model once you export it, such as what the input image size should be. I would like to see this better documented somewhere; for example, is it 300x300, etc.? Without getting the config or pbtxt along with the model, you have to figure this out by loading the model into TensorBoard or something similar to work out the input information (size, name, etc.). Furthermore, we don't even know what the baseline of the model is: is it ResNet, SSD, etc.?
So, does anybody know how I can get these missing files when I export a model? Or how to generate a pbtxt when all you have is the frozen graph .pb file?
If not, I would recommend these as improvements for the Azure Cognitive Services team. With all of this information missing, it is really hard to consume the exported model.
Thanks!
Many model architectures allow you to change the network input size, such as YOLO, which is the architecture exported from Custom Vision. Including a fixed input size somewhere does not make sense in this case.
Netron will be your good friend; it is pretty easy to use for figuring out the details of a model.
Custom Vision Service only exports compact domains. For object detection exports, the downloaded zip file contains the model (model.pb, labels.txt) along with Python code to load and exercise it.
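If Netron is not an option, here is a minimal sketch of inspecting the frozen graph programmatically and writing a .pbtxt from it. Note this only dumps the GraphDef as text; it cannot recover a training pipeline .config:

```python
import tensorflow as tf

# Load the frozen graph (model.pb) exported by Custom Vision.
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Placeholder nodes are the graph inputs; their attributes carry
# dtype and shape hints (the input size the question asks about).
for node in graph_def.node:
    if node.op == "Placeholder":
        print("input:", node.name, node.attr["shape"])

# Write the same graph out as text; a .pbtxt is just the GraphDef
# serialized in text form rather than binary.
tf.io.write_graph(graph_def, ".", "model.pbtxt", as_text=True)
```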