I am able to insert values into ProductFeature, but they do not come under classification features; instead the product features end up in the unclassified features list when using the impex below.
INSERT_UPDATE ProductFeature;classificationAttributeAssignment; product(code)[unique=true];qualifier;
value[translator=de.hybris.platform.catalog.jalo.classification.impex.ProductFeatureValueTranslator];
;product number;1008525794;product number;product number,HPE ProLiant ML10 Gen9 E3-1225;
The classificationAttributeAssignment header expects a PK, so I don't see how your impex would work. You should create a ClassAttributeAssignment; it defines a classificationClass field in which you define the categorization for your feature (so your feature belongs somewhere and does not float around).
For instance:
insert_update ClassAttributeAssignment;attributeType(itemtype(code),code)[unique=true];classificationAttribute(code,systemVersion(catalog(id),version))[forceWrite=true,allownull=true,unique=true];classificationClass(catalogVersion(catalog(id),version),code)[forceWrite=true,allownull=true,unique=true];comparable[allownull=true];description[lang=en];formatDefinition;listable[allownull=true];localized[allownull=true];mandatory[allownull=true];multiValued[allownull=true];position;range[allownull=true];searchable[allownull=true];systemVersion(catalog(id),version)[forceWrite=true,unique=true];unit(code,systemVersion(catalog(id),version))[unique=true];visibility(itemtype(code),code)
;ClassificationAttributeTypeEnum:boolean;IsCool:ExampleClassification:1.0;ExampleClassification:1.0:Search;TRUE;;;FALSE;FALSE;FALSE;FALSE;1;FALSE;FALSE;ExampleClassification:1.0;;ClassificationAttributeVisibilityEnum:visible
ClassificationAttribute (the feature itself): IsCool
SystemVersion (catalog for your classification classes): ExampleClassification:1.0
ClassificationClass (category bundling the features): Search
So, provided you created the ExampleClassification catalog with version 1.0 (catalog -> classification system), for instance in the hMC, the IsCool feature descriptor (feature list -> feature descriptor), and the Search classification class inside ExampleClassification: if you then assign the classification class to your product (supercategories), you should see the IsCool feature displayed for your product.
And to assign the value to the product with SKU 100:
insert_update Product;code[unique=true,allownull=true];catalog(id)[allownull=true];catalogVersion(catalog(id),version)[unique=true];#IsCool[system='ExampleClassification',version='1.0',translator=de.hybris.platform.catalog.jalo.classification.impex.ClassificationAttributeTranslator]
;100;Default;Default:Staged;TRUE
Or you can also assign the value using the ProductFeature:
INSERT_UPDATE ProductFeature;classificationAttributeAssignment(classificationAttribute(code,systemVersion(catalog(id),version)),systemVersion(catalog(id),version),classificationClass(catalogVersion(catalog(id),version),code))[unique=true];product(catalogVersion(catalog(id),version),code)[unique=true];value[translator=de.hybris.platform.catalog.jalo.classification.impex.ProductFeatureValueTranslator]
;IsCool:ExampleClassification:1.0:ExampleClassification:1.0:ExampleClassification:1.0:Search;Default:Staged:100;boolean,TRUE
Azure Machine Learning Service's Model Artifact has the ability to store references to the Datasets associated with the model. We can use azureml.core.model.Model.add_dataset_references([('relation-as-a-string', Dataset)]) to add these dataset references.
How do we retrieve a Dataset from the references stored in this Model class, given a reference to the Model?
get_by_name(workspace, name, version='latest')
Parameters:
- workspace: The existing AzureML workspace in which the Dataset was registered.
- name: The registration name.
- version: The registration version. Defaults to 'latest'.
Returns: The registered dataset object.
Suppose a Dataset was added as a reference to a Model under the name 'training_dataset'.
To get a reference to this Dataset we use:
from azureml.core import Dataset
from azureml.core.model import Model

model = Model(workspace, name)
# model.serialize()['datasets'] holds the dataset references as dicts with 'name' and 'id' keys
dataset_id = next(d['id'] for d in model.serialize()['datasets'] if d['name'] == 'training_dataset')
dataset_reference = Dataset.get_by_id(workspace, dataset_id)
After this step we can use dataset_reference like any other AzureML Dataset object.
I have a Cosmos DB with a container that contains document with varying structure.
I am using the Java SQL API for reading the documents from this container.
The issue I am having is that the API methods for querying/reading the container expect a model class as an input parameter and return instances of that model class. Because my container contains documents with varying fields and depth, it is not possible for me to create a single model class to represent them.
I need to be able to read/query the documents and then parse it myself and extract the values that I am looking for.
Any ideas? I have used "Object" in the API methods, e.g. queryItems, and then it returns a LinkedHashMap that I can parse myself. Is this the way to do it? It looks a bit "raw", but I have not found a better way.
Below is a typical example from the SDK doc. I cannot create a "Family" model class in my code, because the structure can vary from document to document - both which fields are stored and the depth.
private void queryItems() {
    CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
    queryOptions.setQueryMetricsEnabled(true);

    CosmosPagedIterable<Family> familiesPagedIterable = container.queryItems(
        "SELECT * FROM Family WHERE Family.lastName IN ('Andersen', 'Wakefield', 'Johnson')",
        queryOptions, Family.class);

    familiesPagedIterable.iterableByPage(10).forEach(cosmosItemPropertiesFeedResponse -> {
        logger.info("Got a page of query result with {} item(s) and request charge of {}",
            cosmosItemPropertiesFeedResponse.getResults().size(),
            cosmosItemPropertiesFeedResponse.getRequestCharge());
        logger.info("Item Ids {}", cosmosItemPropertiesFeedResponse
            .getResults()
            .stream()
            .map(Family::getId)
            .collect(Collectors.toList()));
    });
}
As I understand it, this is determined by the SDK function's input parameters and output data type, and indeed both the Java and Spring sample code depend on a data model. So using Object in your code is a reasonable choice given the varying documents.
It's true that you can't design one data model to contain every property in every document, but it can still be a good idea to define a model that contains only the properties a given query requires; if a property is not needed in a query, exclude it from that query's model.
I think I found the proper solution:
Create a model class and define the members with unknown depth and structure as JsonNode.
The model class can then be used as usual, and the values inside the JsonNode accessed through its convenient accessor methods.
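A minimal sketch of this idea, assuming Jackson for the mapping (which the Cosmos Java SDK uses internally); the class and field names below are made up for illustration:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Known, stable fields get concrete types; everything with unknown
// structure or depth is captured as a raw JsonNode.
public class FlexibleDocument {
    public String id;
    public String lastName;
    public JsonNode details; // varying part of the document lands here

    public static void main(String[] args) throws Exception {
        String json = "{\"id\":\"1\",\"lastName\":\"Andersen\","
                + "\"details\":{\"address\":{\"city\":\"Seattle\"},\"tags\":[\"a\",\"b\"]}}";
        FlexibleDocument doc = new ObjectMapper().readValue(json, FlexibleDocument.class);
        // Navigate the unknown part with JsonNode's accessors.
        System.out.println(doc.details.path("address").path("city").asText()); // Seattle
        System.out.println(doc.details.path("tags").size());                   // 2
    }
}
```

With Cosmos you would pass FlexibleDocument.class to queryItems instead of Object, and drill into the JsonNode member per document.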
I have been adding some collections to a Flask app using mongoengine models and the code works fine.
I was inspecting the data using MongoCompass and just noticed that one of the collections is named notify_destination which is NOT the name I used or its lowercase version.
My model class is NotifyDestination and there is no meta tag - so why the underscore in the middle of the collection name?
class NotifyDestination(me.Document):
owner_id = me.ObjectIdField()
username = me.StringField()
Mongoengine documentation (2.3.4) just says
The name of the collection is by default the name of the class, converted to lowercase.
Is insertion of the underscore normal behavior of MongoEngine because of my using UpperCamelCase?
I still have time to specify & force a name using a meta = {} tag in the model if this behavior is not officially documented somewhere else.
(mongoengine contributor here) Yes, this is the default behavior. The doc is imprecise; it is basically converting from UpperCamelCase to snake_case.
This means that
class User(Document):
# will have "user" as default collection name
class MyCompanyUser(Document):
# will have "my_company_user" as default collection name
class USACompany(Document):
# will have "u_s_a_company" as default collection name
Note that the doc was fixed.
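The conversion described above can be reproduced with a small regex; this is a sketch of the observable behavior, not mongoengine's actual internal code:

```python
import re

def camel_to_snake(name: str) -> str:
    # Insert an underscore before every uppercase letter (except a
    # leading one), then lowercase: each capital starts a new "word".
    return re.sub(r'(?<!^)([A-Z])', r'_\1', name).lower()

print(camel_to_snake("User"))               # user
print(camel_to_snake("NotifyDestination"))  # notify_destination
print(camel_to_snake("MyCompanyUser"))      # my_company_user
print(camel_to_snake("USACompany"))         # u_s_a_company
```

If you want a different collection name, you can always pin it explicitly with meta = {'collection': '...'} on the Document class, as mentioned in the question.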
In my project, I have 2 asset namespaces
namespace org.example.grid
namespace org.example.workload
Both of them use an abstract structure called metrics. I want to create one concept in a separate file and have both assets use this concept.
So I made a file like this:
namespace org.example.concepts
concept Metrics {
o Integer metric1
o Integer metric2
o Integer metric3
}
Then I try to include the Metrics concept in the asset like so:
namespace org.example.grid
import org.example.concepts.Metrics
asset Grid identified by gridId {
o String gridId
o Metrics capacity
}
However, when trying to create a new grid asset, I get this error:
Error: transaction returned with failure: TypeNotFoundException: Type Metrics is not defined in namespace org.example.grid
Are concept imports not supported? Or is there a proper way to do this?
As per my understanding, I ran your code and it successfully gave me an output:
1) 1st model file: org.example.concepts
2) 2nd model file: org.example.workload
3) 3rd model file: org.example.grid, which contains the Grid asset and imports the org.example.concepts file that holds the Metrics concept.
4) Successfully created a Grid asset.
Hope you will find the error in your structure. :)
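For reference, the workload model file from step 2 can follow the same pattern as the grid file in the question; the asset name and field below are illustrative, not from the original project:

```
namespace org.example.workload

import org.example.concepts.Metrics

asset Workload identified by workloadId {
  o String workloadId
  o Metrics usage
}
```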
I have the following model:
class publication_type(models.Model):
name = models.CharField(max_length=200)
description = models.TextField(blank=True,null=True,default=None)
And I would like to add a few "default" entries to this model that I can always refer to, without having to assume whether they exist yet or not, for example:
publication_type.objects.create(id=1,name="article",description="...")
Where in the Django code would be the best place to put this code?