How to remove a feature from a product? - sap-commerce-cloud

So I am still learning this amazingly complex system folks call Hybris, or SAP Commerce, along with lots of other names :) I ran into a problem and am looking to learn how to get out of it. I have added four new classification attributes (Utilization, Fit, Material, Function). When I went to add them to the products, I put a space between each attribute name and the numeric code that follows it:
$feature1=#Utilization, 445 [$clAttrModifiers]; # Style
$feature2=#Fit, 446 [$clAttrModifiers]; # Colour
$feature3=#Material, 447 [$clAttrModifiers]; # Connections
$feature4=#Function, 448 [$clAttrModifiers]; # Function
INSERT_UPDATE Product;code[unique=true];$feature1;$feature2;$feature3;$feature4;$catalogVersion;
;300413166;my;feature;has;a space
The problem is that I want to take the space out, as seen in the following code:
$feature1=#Utilization,445 [$clAttrModifiers];# Style
$feature2=#Fit,446 [$clAttrModifiers];# Colour
$feature3=#Material,447 [$clAttrModifiers];# Connections
$feature4=#Function,448 [$clAttrModifiers];# Function
INSERT_UPDATE Product;code[unique=true];$feature1;$feature2;$feature3;$feature4;$catalogVersion;
;300413166;Bottom;Loose;Yam type;Sportswear
When I run both of these scripts together, I get 8 features, since each attribute now exists twice: once with the space and once without.
So how do I go about actually removing the first set of features, the four that have a space in them?

The ImpEx below removes the ClassAttributeAssignment records, which also removes the ProductFeature entries assigned to all products. However, you need to find the correct classification category code (e.g. clasificationCategory1) that the attribute (e.g. Utilization, 445) belongs to. The classification category is the grouping/header you will find in the Attributes section of the product editor.
$classificationCatalog=ElectronicsClassification
$classificationSystemVersion=systemVersion(catalog(id[default=$classificationCatalog]),version[default='1.0'])[unique=true,default=$classificationCatalog:1.0]
$classificationCatalogVersion=catalogversion(catalog(id[default=$classificationCatalog]),version[default='1.0'])[unique=true]
$class=classificationClass(code,$classificationCatalogVersion)[unique=true]
$attribute=classificationAttribute(code,$classificationSystemVersion)[unique=true]
REMOVE ClassAttributeAssignment[batchmode=true];$class;$attribute
;clasificationCategory1;Utilization, 445
;clasificationCategory1;Fit, 446
;clasificationCategory1;Material, 447
;clasificationCategory1;Function, 448
You can also remove the ProductFeature entries (for all products) like this, but it doesn't remove the ClassAttributeAssignment:
REMOVE ProductFeature[batchmode=true];qualifier[unique=true]
;ElectronicsClassification/1.0/clasificationCategory1.Utilization, 445
Other Reference:
Classification System API: https://help.sap.com/viewer/d0224eca81e249cb821f2cdf45a82ace/1905/en-US/8b7ad17c86691014aa0ee2d228c56dd1.html

Related

Table values moved after being added to old QGIS version: behaves normally on current version

I am teaching a man the basics of QGIS for a project he needs to do at his work. He has very little computer knowledge and would like to standardise the work as much as possible (specific workarounds would complicate it too much for him). His QGIS version is 3.16 "Hannover", and as this is a work laptop he does not have permission to install a newer version.
We have been having problems with one specific table. The first few rows are below, written exactly as they appear in the original.
Baum-Nr. Baumart BHD Alter Y X Biotopbaum Klassifizierung Bemerkungen
1 Buche 86 120 49.1356 11.0488 A Altbaum Freistellen !!!
2 Kiefer 45 100 49.13561 11.04883 Hlb,Bs,Th Höhlenbaum
3 Kiefer 32 100 49.13571 11.04579 Hlb,Sw,Th Höhlenbaum
4 Kiefer 74 120 49.13513 11.0495 A Altbaum
After adding it from Excel to QGIS through "add vector layer", the header "Klassifizierung" becomes one of the coordinates, and I believe one of the columns is switched (unfortunately, I can't remember the specifics; this is a small side job and I haven't had time to look into it for days. I should have taken a photo, but that isn't possible anymore). We attempted to copy the table into a new Excel document and transfer it to QGIS again, and this time the headers were shoved one cell to the right, such that "Y" was placed over "X" and "Biotopbaum" over "Klassifizierung", for example.
I could not find a way to fix the import problem on his laptop. He e-mailed me the problematic table and I opened it successfully in my QGIS 3.26 "Buenos Aires".
I believe this may be a problem with his QGis version, but it is curious that we only encountered it with this one table. All other tables we have worked with have the same headers and the same kind of data on their respective rows.
Is this a known problem, or have other people faced similar situations? Could someone explain what could be causing it? Would there be a way to fix it such that we can successfully import the table without having to edit it in QGis? This is not a solution the man would accept.
Thank you in advance.
Remove the commas in the Biotopbaum field or replace them with a less common delimiter. In fact, remove all punctuation (e.g., for "Baum-Nr.", remove the period ".").
Also, save the table in CSV format and try importing that.
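If editing by hand is too error-prone, a short script can do the cleanup before import. A minimal sketch in Python with pandas (the file names are hypothetical; column names are taken from the table above):
import pandas as pd

df = pd.read_excel("baeume.xlsx")  # hypothetical file name

# Replace the commas inside Biotopbaum with a less common delimiter
df["Biotopbaum"] = df["Biotopbaum"].str.replace(",", "|")

# Strip trailing periods from the headers, e.g. "Baum-Nr." -> "Baum-Nr"
df.columns = [c.rstrip(".") for c in df.columns]

# Write a semicolon-delimited CSV for the QGIS import
df.to_csv("baeume.csv", index=False, sep=";")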

Extracting labels from owl ontologies when the label isn't in the ontology but can be found at the URI

Please bear with me as I am new to semantic technologies.
I am trying to use the package rdflib to extract labels from classes in ontologies. However, some ontologies don't contain the labels themselves but only the URIs of classes from other ontologies. How does one extract the labels from the URIs of the external ontologies?
The intuition behind my attempts centers on identifying classes that don't contain labels locally (if that is the right way of putting it) and then "following" their URIs to the external ontologies to extract the labels. However, the way I have implemented it does not work.
import rdflib

g = rdflib.Graph()
# I have no trouble extracting labels from this ontology:
# g.load("http://purl.obolibrary.org/obo/po.owl#")
# However, this ontology contains no labels locally:
g.load("http://www.bioassayontology.org/bao/bao_complete.owl#")

owlClass = rdflib.namespace.OWL.Class
rdfType = rdflib.namespace.RDF.type

for s in g.subjects(predicate=rdfType, object=owlClass):
    # Where label is present...
    if g.label(s) != '':
        # Do something with label...
        print(g.label(s))
    # This is what I have added to try to follow the URI to the external ontology.
    elif g.label(s) == '':
        g2 = rdflib.Graph()
        g2.parse(location=s)
        # Do something with label...
        print(g.label(s))
Am I taking completely the wrong approach? All help is appreciated! Thank you.
I think you can be much more efficient than this. You are trying to do a web request, remote ontology download, and search every time you encounter a URI that doesn't have a label given in http://www.bioassayontology.org/bao/bao_complete.owl, which is most of them, and that is a very large number. So your script will take forever and thrash the web servers delivering those remote ontologies.
Looking at http://www.bioassayontology.org/bao/bao_complete.owl, I see that most of the URIs without labels there are from OBO, and perhaps a couple of other ontologies, but mostly OBO.
What you should do is download OBO once and load it with RDFlib. Then, if you run your script above on the joined (union) graph of http://www.bioassayontology.org/bao/bao_complete.owl and OBO, you'll have all of OBO's content at your fingertips, so g.label(s) will find a much higher proportion of labels.
Perhaps there are a couple of other source ontologies providing labels for http://www.bioassayontology.org/bao/bao_complete.owl that you may need as well, but my quick browsing sees only OBO.
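A minimal sketch of that union-graph approach (the local OBO file name is hypothetical; download it once beforehand):
import rdflib
from rdflib.namespace import OWL, RDF, RDFS

g = rdflib.Graph()
g.parse("http://www.bioassayontology.org/bao/bao_complete.owl", format="xml")
# Hypothetical local copy of the relevant OBO ontology, parsed into the same
# graph so its rdfs:label triples become part of the union
g.parse("obo_subset.owl", format="xml")

for s in g.subjects(predicate=RDF.type, object=OWL.Class):
    label = g.value(s, RDFS.label)  # now resolved against the union graph
    if label:
        print(s, label)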

Create automated report from web data

I have a set of multiple APIs I need to source data from, and I need four different data categories. This data is then used for reporting purposes in Excel.
I initially created web queries in Excel, but my laptop just crashes because there are too many queries to update. Do you guys know a smart workaround?
This is an example of the API I will source data from (40 different ones in total):
https://api.similarweb.com/SimilarWebAddon/id.priceprice.com/all
The data points I need are:
EstimatedMonthlyVisits, TopOrganicKeywords, OrganicSearchShare, TrafficSources
Any ideas how I can create an automated report which queries the above data on request?
Thanks so much.
If Excel is crashing under the load, which doesn't surprise me, you should consider using Python or R for this task. Below is an R walkthrough for parsing an XML file into a dataframe. First, install and load the required packages:
install.packages("XML")
install.packages("plyr")
install.packages("ggplot2")
install.packages("gridExtra")
require("XML")
require("plyr")
require("ggplot2")
require("gridExtra")
Next we need to set our working directory and parse the XML file as a matter of practice, so we're sure that R can access the data within the file. This is basically reading the file into R. Then, just to confirm that R knows our file is XML, we check the class. Indeed, R is aware that it's XML.
setwd("C:/Users/Tobi/Documents/R/InformIT") #you will need to change the filepath on your machine
xmlfile=xmlParse("pubmed_sample.xml")
class(xmlfile) #"XMLInternalDocument" "XMLAbstractDocument"
Now we can begin to explore our XML. Perhaps we want to confirm that our HTTP query on Entrez pulled the correct results, just as when we query PubMed's website. We start by looking at the contents of the first node or root, PubmedArticleSet. We can also find out how many child nodes the root has and their names. This process corresponds to checking how many entries are in the XML file. The root's child nodes are all named PubmedArticle.
xmltop = xmlRoot(xmlfile) #gives content of root
class(xmltop)#"XMLInternalElementNode" "XMLInternalNode" "XMLAbstractNode"
xmlName(xmltop) #give name of node, PubmedArticleSet
xmlSize(xmltop) #how many children in node, 19
xmlName(xmltop[[1]]) #name of root's children
To see the first two entries, we can do the following.
# have a look at the content of the first child entry
xmltop[[1]]
# have a look at the content of the 2nd child entry
xmltop[[2]]
Our exploration continues by looking at subnodes of the root. As with the root node, we can list the name and size of the subnodes as well as their attributes. In this case, the subnodes are MedlineCitation and PubmedData.
#Root Node's children
xmlSize(xmltop[[1]]) #number of nodes in each child
xmlSApply(xmltop[[1]], xmlName) #name(s)
xmlSApply(xmltop[[1]], xmlAttrs) #attribute(s)
xmlSApply(xmltop[[1]], xmlSize) #size
We can also separate each of the 19 entries by these subnodes. Here we do so for the first and second entries:
#take a look at the MedlineCitation subnode of 1st child
xmltop[[1]][[1]]
#take a look at the PubmedData subnode of 1st child
xmltop[[1]][[2]]
#subnodes of 2nd child
xmltop[[2]][[1]]
xmltop[[2]][[2]]
The separation of entries is really just us indexing into the tree structure of the XML. We can continue to do this until we exhaust a path, or, in XML terminology, reach the end of a branch. We can do this via the numbers of the child nodes or their actual names:
#we can keep going till we reach the end of a branch
xmltop[[1]][[1]][[5]][[2]] #title of first article
xmltop[['PubmedArticle']][['MedlineCitation']][['Article']][['ArticleTitle']] #same command, but more readable
Finally, we can transform the XML into a more familiar structure—a dataframe. Our command completes with errors due to non-uniform formatting of data and nodes. So we must check that all the data from the XML is properly inputted into our dataframe. Indeed, there are duplicate rows, due to the creation of separate rows for tag attributes. For instance, the ELocationID node has two attributes, ValidYN and EIDType. Take the time to note how the duplicates arise from this separation.
#Turning XML into a dataframe
Madhu2012=ldply(xmlToList("pubmed_sample.xml"), data.frame) #completes with errors: "row names were found from a short variable and have been discarded"
View(Madhu2012) #for easy checking that the data is properly formatted
Madhu2012.Clean=Madhu2012[Madhu2012[25]=='Y',] #gets rid of duplicated rows
Here is a link that should help you get started.
http://www.informit.com/articles/article.aspx?p=2215520
If you have never used R before, it will take a little getting used to, but it's worth it. I've been using it for a few years now, and compared to Excel, I have seen R perform anywhere from a couple hundred percent to many thousands of percent faster. Good luck.
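If you go the Python route instead, a minimal sketch for one of the endpoints from the question might look like this (the JSON structure of the response is an assumption; adjust the keys to whatever the API actually returns):
import json
import urllib.request

url = "https://api.similarweb.com/SimilarWebAddon/id.priceprice.com/all"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Pull out the four data categories named in the question, if present
for key in ("EstimatedMonthlyVisits", "TopOrganicKeywords",
            "OrganicSearchShare", "TrafficSources"):
    print(key, data.get(key))
Looping that over all 40 endpoints and writing the results to a CSV (or to a workbook with a library such as openpyxl) keeps the heavy lifting out of Excel entirely.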

Finding Related Topics using Google Knowledge Graph API

I'm currently working on a behavioral targeting application and I need a considerably large keyword database/tool/provider that enables applications to reach similar keywords via a given keyword. I recently found Freebase, which had been providing a similar service before Google acquired it and integrated it into the Knowledge Graph. I was wondering if it's possible to get a list of related topics/keywords for a given entity.
import json
import urllib.parse
import urllib.request

api_key = 'API_KEY_HERE'
query = 'Yoga'
service_url = 'https://kgsearch.googleapis.com/v1/entities:search'
params = {
    'query': query,
    'limit': 10,
    'indent': True,
    'key': api_key,
}
url = service_url + '?' + urllib.parse.urlencode(params)
response = json.loads(urllib.request.urlopen(url).read())
for element in response['itemListElement']:
    print(element['result']['name'] + ' (' + str(element['resultScore']) + ')')
The script above returns the results below, though I'd like to receive topics related to yoga, such as health, fitness, gym, and so on, rather than things that have the word "Yoga" in their name.
Yoga Sutras of Patanjali (71.245544)
Yōga, Tokyo (28.808222)
Sri Aurobindo (28.727333)
Yoga Vasistha (28.637642)
Yoga Hosers (28.253984)
Yoga Lin (27.524054)
Patanjali (27.061115)
Yoga Journal (26.635073)
Kripalu Center (26.074436)
Yōga Station (25.10318)
I'd really appreciate any suggestions, and I'm also open to using any other API if there is any that I could make use of. Cheers.
I see your point :) So here's the script I use for that, using Serpstat's API. Here's how it works:
The script collects keywords from Serpstat's database
Then, it collects search suggestions from Serpstat's database
Finally, it collects search suggestions from Google's suggestions
Note that for the script to work correctly, it's preferable to fill in all input boxes, though not all of them are required.
Keyword — the required seed keyword
Search Engine — the search engine for which the analysis will be carried out. For example, for US Google you need to set g_us. The entire list of available search engines can be found here.
Limit — the maximum number of phrases from organic results that will take part in the analysis. You cannot set more than 1000 here.
Default keys — a list of keywords with initial weights. You should give each of them some "weight" to receive some kind of result if something goes wrong.
Format: type; keyword; weight. Every keyword should be written on a new line.
Types:
w — one word
p — two words
Examples:
"w; bottle; 50" — initial weight of word bottle is 50.
"p; plastic bottle; 30" — initial weight of phrase plastic bottle is 30.
"w; plastic bottle; 20" — incorrect. You cannot use a two-word phrase for the "w" type.
Bad words — a comma-separated list of words you want the script to exclude from the results.
Token — here you need to enter your token for API access. It can be found on your profile page.
You can download the source code for the script here.
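For illustration, a minimal sketch of how such default-key lines could be parsed (the format is taken from the description above; the helper name is hypothetical):
def parse_default_keys(text):
    # Parse lines of the form "type; keyword; weight"
    keys = []
    for line in text.strip().splitlines():
        typ, keyword, weight = [part.strip() for part in line.split(";")]
        if typ == "w" and " " in keyword:
            raise ValueError("'w' entries must be a single word: %r" % keyword)
        keys.append((typ, keyword, int(weight)))
    return keys

print(parse_default_keys("w; bottle; 50\np; plastic bottle; 30"))
# [('w', 'bottle', 50), ('p', 'plastic bottle', 30)]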

2 Sequential Transactions, setting Detail Number (Revit API / Python)

Currently, I have a tool that renames view numbers (“Detail Number”) on a sheet based on their location on the sheet. Where this is breaking is the transactions. I'm trying to do two transactions sequentially in Revit Python Shell. I also did this originally in Dynamo, and that failed in a similar way, so I know it's something to do with transactions.
Transaction #1: Add a suffix (“-x”) to each detail number to ensure the new numbers won’t conflict (1 will be 1-x, 4 will be 4-x, etc)
Transaction #2: Change detail numbers with calculated new number based on viewport location (1-x will be 3, 4-x will be 2, etc)
Better visual explanation here: https://www.docdroid.net/EP1K9Di/161115-viewport-diagram-.pdf.html
Py File here: http://pastebin.com/7PyWA0gV
Attached is the Python file, but essentially what I'm trying to do is:
# <---- Make unique numbers
t = Transaction(doc, 'Rename Detail Numbers')
t.Start()
for i, viewport in enumerate(viewports):
    setParam(viewport, "Detail Number", getParam(viewport, "Detail Number") + "x")
t.Commit()

# <---- Do the thang
t2 = Transaction(doc, 'Rename Detail Numbers')
t2.Start()
for i, viewport in enumerate(viewports):
    setParam(viewport, "Detail Number", detailViewNumberData[i])
t2.Commit()
As I explained in my answer to your comment in the Revit API discussion forum, the behaviour you describe may well be caused by a need to regenerate between the transactions. The first modification does something, and the model needs to be regenerated before the modifications take full effect and are reflected in the parameter values that you query in the second transaction. You are accessing stale data. The Building Coder provides all the nitty gritty details and numerous examples on the need to regenerate.
Summary of this entire thread including both problems addressed:
http://thebuildingcoder.typepad.com/blog/2016/12/need-for-regen-and-parameter-display-name-confusion.html
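For reference, one way to test the regeneration hypothesis is to force a regeneration before reading values back in the second transaction. A sketch, assuming doc, viewports, setParam and detailViewNumberData from the original script:
t2 = Transaction(doc, 'Rename Detail Numbers')
t2.Start()
doc.Regenerate()  # flush pending updates so transaction 1's values are visible
for i, viewport in enumerate(viewports):
    setParam(viewport, "Detail Number", detailViewNumberData[i])
t2.Commit()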
So this issue actually had nothing to do with transactions or document regeneration. I discovered (with some help :) ) that the problem lay in how I was setting/getting the parameter. "Detail Number", like a lot of parameters, has duplicate versions that share the same descriptive parameter name in a viewport element.
Apparently the reason for this might be legacy issues, though I'm not sure. Thus, when I was trying to get/set the detail number, it was occasionally grabbing the incorrect read-only parameter, the one whose built-in enumeration is "VIEWER_DETAIL_NUMBER". The correct one is "VIEWPORT_DETAIL_NUMBER". This was happening because I was trying to get the parameter just by passing the descriptive name "Detail Number". Revising how I get/set parameters via the built-in enum resolved the issue; see the sketch below.
Please see the PDF for a visual explanation: https://www.docdroid.net/WbAHBGj/161206-detail-number.pdf.html
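A minimal sketch of the fix, fetching the parameter by its built-in enum instead of by display name (doc, viewports and detailViewNumberData as in the original script):
from Autodesk.Revit.DB import BuiltInParameter, Transaction

t = Transaction(doc, 'Rename Detail Numbers')
t.Start()
for i, viewport in enumerate(viewports):
    # Fetch the writable parameter explicitly; looking it up by the display
    # name "Detail Number" can also match the read-only VIEWER_DETAIL_NUMBER
    param = viewport.get_Parameter(BuiltInParameter.VIEWPORT_DETAIL_NUMBER)
    param.Set(detailViewNumberData[i])
t.Commit()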
