I am trying to train a reinforcement learning model for collision avoidance using Stable Baselines, with a custom policy network. I have trained several models and they have converged to a stable reward, but the policy seems to be independent of the input (observation space), meaning it takes a fixed action irrespective of what observation it gets. I need to debug this issue; I guess checking how the parameters of the policy network change during training might help. Does anyone know how I can debug this, or how I can check the change in the parameters of the custom policy?
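For reference, this is roughly the kind of check I had in mind. It is only a sketch and assumes stable-baselines3 with a PyTorch policy; PPO, the stand-in environment, and the timestep count are placeholders, since my actual environment and policy are custom:

```python
import copy

import gymnasium as gym
import torch
from stable_baselines3 import PPO

# Stand-in environment; replace with the custom collision-avoidance env
env = gym.make("Pendulum-v1")
model = PPO("MlpPolicy", env, verbose=1)

# Snapshot the policy parameters before training
params_before = copy.deepcopy(model.policy.state_dict())

model.learn(total_timesteps=10_000)

# How much did each parameter tensor actually change?
for name, p_after in model.policy.state_dict().items():
    delta = torch.norm(p_after.float() - params_before[name].float()).item()
    print(f"{name}: change = {delta:.6f}")

# Does the chosen action actually depend on the observation?
for _ in range(5):
    obs = env.observation_space.sample()
    action, _ = model.predict(obs, deterministic=True)
    print(obs, "->", action)
```

If the parameter deltas are essentially zero, or very different sampled observations all map to the same action, that would confirm the policy has collapsed to observation-independent behaviour.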
I'm using Microsoft's Custom Vision service to classify images. Since the model will have to be retrained a few times a year, I would like to know if I can save the current version of the Azure Custom Vision model and retrain my new model on that same version. I ask because I guess Microsoft will try to improve the performance of its service over time, so the model used by this tool will probably change...
You can export the model after each run, but you cannot use an existing model as a starting point for another training run.
So yes, as it is a managed service, Microsoft might optimize or otherwise change the training algorithms in the background. It is up to you to decide whether that works for you. If not, a managed service like this is probably not something you should use; instead, train your own models entirely.
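If you want to automate the "export after each run" part, something along these lines should work with the Custom Vision training SDK for Python. Treat it as a sketch: the client construction and export calls have changed between SDK versions, and the endpoint, key, and project ID below are placeholders.

```python
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient("<endpoint>", credentials)

project_id = "<project-id>"
# Pick the iteration you want to keep a copy of
iteration = trainer.get_iterations(project_id)[0]

# Kick off an export of this iteration (the platform string depends on what you need)
trainer.export_iteration(project_id, iteration.id, platform="TensorFlow")

# The export is asynchronous; check its status and grab the download URI when done
for export in trainer.get_exports(project_id, iteration.id):
    print(export.platform, export.status, export.download_uri)
```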
I am using Azure Cognitive Services, aka the CustomVision website, to create, train and test models. I understand the main goal of this site is to create an API which can be called to run your model in production. I should mention I am using this for object detection.
There are times when you have to support running offline (meaning you don't have a connection to Azure, etc...). I believe Microsoft knows and understands this because they have a feature which allows you to export your model in many different formats (such as TensorFlow, ONNX, etc...).
The issue I am having is that when you export to TensorFlow, which is what I need, it only downloads the frozen model graph (model.pb). However, there are times when you need either the .pbtxt file that goes along with the model or the config file. I know you can generate a pbtxt file, but for that you need the .config.
Also, there is little to no information about your model once you export it, such as what the input image size should be. I would like to see this better documented somewhere. For example, is it 300x300, etc.? Without getting the config or pbtxt along with the model, you have to figure this out by loading your model into TensorBoard or something similar to work out the input information (size, name, etc.). Furthermore, we don't even know what the base architecture of the model is: is it ResNet, SSD, etc.?
So, does anybody know how I can get these missing files when I export a model? Or does anybody know how you can generate a pbtxt when all you have is the frozen graph .pb file?
If not, I would recommend these as improvements for the Azure Cognitive services team. With all of this missing data or information, it is really hard to consume the exported model.
Thanks!
Many model architectures allow you to change the network input size, such as YOLO, which is the architecture exported from Custom Vision. Including a fixed input size somewhere does not make sense in this case.
Netron will be your good friend here, and it is pretty easy to use to figure out the details of the model.
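If you prefer to do the same thing programmatically, a frozen graph can be inspected (and dumped as a text .pbtxt) with a few lines of the TF 1.x graph API. This is only a sketch against tf.compat.v1, and note that the text dump is just the GraphDef, not the Object Detection API pipeline .config:

```python
import tensorflow as tf

# Load the frozen graph exported by Custom Vision
graph_def = tf.compat.v1.GraphDef()
with open("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name="")

# Placeholder ops reveal the expected input name, shape and dtype
for op in graph.get_operations():
    if op.type == "Placeholder":
        print(op.name, op.outputs[0].shape, op.outputs[0].dtype)

# Dump a human-readable text version of the graph (a .pbtxt of the GraphDef)
tf.io.write_graph(graph_def, ".", "model.pbtxt", as_text=True)
```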
Custom Vision Service only exports compact domains. For object detection exports, the downloaded zip file contains the model (model.pb, labels.txt), and along with the exported model you will find Python code to exercise it.
We are building a platform that aims to deliver machine learning solutions for large enterprises with significant data security concerns. All model training is done on premise, with restrictions on the nature of the data used for training. Once the model is complete, I am looking to deploy it on the cloud with standard security/audit controls (IP whitelists, access tokens, logs).
I believe the features can be completely anonymized (normalization, PCA, etc.) to provide an additional layer of security. Is there any way the data sent to the cloud-based ML model can lead back to the original data?
While I have reviewed other questions around model deployment, this aspect of security isn't handled specifically.
https://dzone.com/articles/security-attacks-analysis-of-machine-learning-mode
(The concern is not about availability or model distortion, but more about confidential data.)
Again, the idea is to retain training and data on premise, and put only the deployment on the cloud for speed, flexibility and availability.
Is there any way the data sent to the cloud-based ML model can lead back to the original data?
Any function that has an inverse can lead back to the original data. The risk is not just from a random person viewing the data, but also from an insider threat within the team. Here is an example:
How to reverse PCA and reconstruct original variables from several principal components?
Depending on the number of principal components, it may also be possible to brute-force guess the eigenvectors.
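As a small illustration of the first point, here is a hypothetical scikit-learn sketch (the data is random and only stands in for the "anonymized" features): anyone who holds the fitted PCA components can map the reduced features back to an approximation of the originals.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))      # stand-in for the original, sensitive features

pca = PCA(n_components=8)
X_cloud = pca.fit_transform(X)       # the "anonymized" features sent to the cloud

# With access to the fitted components, the transformation is (approximately) invertible
X_recovered = pca.inverse_transform(X_cloud)
print("mean absolute reconstruction error:", np.abs(X - X_recovered).mean())
```

The reconstruction is only approximate when components are dropped, but the more variance you retain (which you usually want for model quality), the closer the recovered values are to the originals.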
What's the difference between the Jalo layer and the service layer in the Hybris commerce suite? I would really appreciate it if someone could give an example as well. I know the Jalo layer has been deprecated, but if I still have to specify which layer to use in my platform, where or how do I tell Hybris to use a specific layer?
I think it's best if you read up on the quite good hybris wiki regarding both:
Jalo: https://wiki.hybris.com/display/release5/Jalo+Layer
Service layer: https://wiki.hybris.com/display/release5/ServiceLayer
You won't have to specify which one you use (they are both always running). If you start a new project, you basically must (or at least really, really should!) use the service layer exclusively, as Jalo will go away (so they have been saying for quite some time) in one of the next major releases.
In a nutshell, Jalo is the old persistence mechanism, while the service layer was introduced to address various problems the Jalo layer had (performance/caching, extensibility, etc.).
So if you will only or mostly be working on new projects, you probably won't have to acquire too much knowledge of the Jalo layer, but if you plan on becoming a Hybris consultant or working on legacy Hybris code, you will have to deal with Jalo more.
A small example:
In your items.xml files (where you declare your data model) you can specify a jaloclass attribute, which will make the platform create a Java class for you.
E.g.: core-items.xml has Product declared with jaloclass="de.hybris.platform.jalo.product.Product".
The platform also automatically creates the respective service-layer class (always called *Model.java, so e.g. de.hybris.platform.core.model.product.ProductModel).
One limitation of the Jalo layer is, for example, that if you want to extend the Product item type in one of your own extensions with some attribute, the newly created attribute will not be on the Product Jalo class (as it resides in the platform and is created only once); instead it will be available on your extension's Manager class, which is a bit unintuitive and cumbersome. The service layer creates all of its model classes only after analyzing and merging all registered extensions, and is therefore able to add that attribute to the actual ProductModel class.
There are many more differences, so if you have more concrete questions feel free to ask them :)
In the past, persistence and business logic were written in the Jalo Layer. After introducing the Service Layer, the existing business logic in the Jalo Layer is being moved to the Service Layer. With this, the first goal of the migration to the Service Layer is that Jalo-related classes should not contain any code anymore.
As the Jalo Layer should not contain business logic anymore, the public API will be much smaller in the future. It will mainly consist of the means to query flexible searches and a generic way to save and remove data. This functionality is already provided in the Service Layer by adapter services like FlexibleSearchService and ModelService. In this case, any access to the Jalo Layer is no longer encouraged. The second goal is to eliminate all Jalo access in existing classes of the Service Layer.
Source: https://wiki.hybris.com/pages/viewpage.action?spaceKey=release5&title=Transitioning+to+the+ServiceLayer
In the first Hybris versions, logic was attached to the generated item type classes through the Jalo (Jakarta Logic) layer. In order to be more flexible, Hybris is now moving everything to the service layer approach (not finished yet; promotions are a good example of legacy Jalo layer code).
After reading the above answers and doing one practice exercise based on the first answer, my conclusion is the following:
Yes, Jalo's non-abstract class implementations are moved to *Model.java classes for writing more specific business logic, in line with the good explanation in the first two answers.
Cheers,
This is probably the wrong way of doing it, but I was exploring this option only because I do not know how to implement the right solution.
We have a layer to which features are added using WFS-T. We have configured GeoServer to authenticate and authorize via LDAP.
While querying for features, we would like Geoserver to return features based on the user/role.
Since I do not know how to set up feature-based security (row-level security), my thought was to see if we can make the layer write-only and not allow any read operations.
The read will be done through a SQL parametric view layer, which will add a WHERE clause to filter by a unique value.
To do that, I have the following settings in layers.properties:
workspace.layer.w=ROLE_USER
workspace.layer.r=SUPERUSER
However, this doesn't seem to work, and I am not able to do any WFS-T operations on the layer even though the user has the correct role.
What would be the right strategy to implement this? Thanks in advance.
I didn't try it myself, but if you are using one of the latest versions of GeoServer, you should be able to use the GeoFence plugin from GeoSolutions.
AFAIK you can configure CQL filters on the layer to limit the set of rows returned to the user based on the user's authorization.