Help needed configuring CruiseControl.NET statistics merge - cruisecontrol.net

I am looking for a pointer as to how to get statistics out of a merged XML file. The file structure looks like this:
<CyclometricComplexity>
<module name="Srvr" type="unit" total="14" low="14" medium="0" high="0" ultra="0"/>
</CyclometricComplexity>
I have created a merge publisher to pick up this file, but cannot configure the statistics publisher to pick up values for total, low, medium, high and ultra.
Does anyone have an example they can point me at to help me out?
Thanks

I think I found how to do this. First I needed to understand how XPath works! Then I changed my output tool to create a summary of the entire project, rather than trying to get CCNET to aggregate them together, so the output now has total, low, medium, etc. for the entire project. Then I changed my statistics section to the following:
<firstMatch name='Total Methods' generateGraph='true' xpath='//CyclometricComplexity/@total'/>
<firstMatch name='Low Complexity' generateGraph='true' xpath='//CyclometricComplexity/@low'/>
<firstMatch name='Medium Complexity' generateGraph='true' xpath='//CyclometricComplexity/@medium'/>
<firstMatch name='High Complexity' generateGraph='true' xpath='//CyclometricComplexity/@high'/>
<firstMatch name='Ultra Complexity' generateGraph='true' xpath='//CyclometricComplexity/@ultra'/>
The stats are now showing in the detailed statistics, and now I need to start asking questions about how to do bespoke graphs!
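For reference, a minimal publishers block that ties the merge and statistics pieces together might look roughly like this. Treat it as a sketch: the merge file path is just an example, and it's worth double-checking the publisher ordering against the CCNET docs.
<publishers>
  <merge>
    <files>
      <file>C:\Builds\MyProject\complexity-summary.xml</file>
    </files>
  </merge>
  <statistics>
    <statisticList>
      <firstMatch name='Total Methods' generateGraph='true' xpath='//CyclometricComplexity/@total'/>
      <firstMatch name='Low Complexity' generateGraph='true' xpath='//CyclometricComplexity/@low'/>
      <!-- medium, high and ultra follow the same pattern -->
    </statisticList>
  </statistics>
  <xmllogger/>
</publishers>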

Related

Create automated report from web data

I have a set of multiple APIs I need to source data from, and I need four different data categories. This data is then used for reporting purposes in Excel.
I initially created web queries in Excel, but my laptop just crashes because there are too many queries which have to be updated. Do you guys know a smart workaround?
This is an example of the API I will source data from (40 different ones in total):
https://api.similarweb.com/SimilarWebAddon/id.priceprice.com/all
The data points I need are:
EstimatedMonthlyVisits, TopOrganicKeywords, OrganicSearchShare, TrafficSources
Any ideas how I can create an automated report which queries the above data on request?
Thanks so much.
If Excel is crashing due to the demand (and that doesn't surprise me), you should consider using Python or R for this task. First, install and load the packages we'll need:
install.packages("XML")
install.packages("plyr")
install.packages("ggplot2")
install.packages("gridExtra")
require("XML")
require("plyr")
require("ggplot2")
require("gridExtra")
Next we need to set our working directory and parse the XML file as a matter of practice, so we're sure that R can access the data within the file. This is basically reading the file into R. Then, just to confirm that R knows our file is in XML, we check the class. Indeed, R is aware that it's XML.
setwd("C:/Users/Tobi/Documents/R/InformIT") #you will need to change the filepath on your machine
xmlfile=xmlParse("pubmed_sample.xml")
class(xmlfile) #"XMLInternalDocument" "XMLAbstractDocument"
Now we can begin to explore our XML. Perhaps we want to confirm that our HTTP query on Entrez pulled the correct results, just as when we query PubMed's website. We start by looking at the contents of the first node or root, PubmedArticleSet. We can also find out how many child nodes the root has and their names. This process corresponds to checking how many entries are in the XML file. The root's child nodes are all named PubmedArticle.
xmltop = xmlRoot(xmlfile) #gives content of root
class(xmltop)#"XMLInternalElementNode" "XMLInternalNode" "XMLAbstractNode"
xmlName(xmltop) #give name of node, PubmedArticleSet
xmlSize(xmltop) #how many children in node, 19
xmlName(xmltop[[1]]) #name of root's children
To see the first two entries, we can do the following.
# have a look at the content of the first child entry
xmltop[[1]]
# have a look at the content of the 2nd child entry
xmltop[[2]]
Our exploration continues by looking at subnodes of the root. As with the root node, we can list the name and size of the subnodes as well as their attributes. In this case, the subnodes are MedlineCitation and PubmedData.
#Root Node's children
xmlSize(xmltop[[1]]) #number of nodes in each child
xmlSApply(xmltop[[1]], xmlName) #name(s)
xmlSApply(xmltop[[1]], xmlAttrs) #attribute(s)
xmlSApply(xmltop[[1]], xmlSize) #size
We can also separate each of the 19 entries by these subnodes. Here we do so for the first and second entries:
#take a look at the MedlineCitation subnode of 1st child
xmltop[[1]][[1]]
#take a look at the PubmedData subnode of 1st child
xmltop[[1]][[2]]
#subnodes of 2nd child
xmltop[[2]][[1]]
xmltop[[2]][[2]]
The separation of entries is really just us indexing into the tree structure of the XML. We can continue to do this until we exhaust a path, or, in XML terminology, reach the end of the branch. We can do this via the numbers of the child nodes or their actual names:
#we can keep going till we reach the end of a branch
xmltop[[1]][[1]][[5]][[2]] #title of first article
xmltop[['PubmedArticle']][['MedlineCitation']][['Article']][['ArticleTitle']] #same command, but more readable
Finally, we can transform the XML into a more familiar structure—a dataframe. Our command completes with errors due to non-uniform formatting of data and nodes. So we must check that all the data from the XML is properly inputted into our dataframe. Indeed, there are duplicate rows, due to the creation of separate rows for tag attributes. For instance, the ELocationID node has two attributes, ValidYN and EIDType. Take the time to note how the duplicates arise from this separation.
#Turning XML into a dataframe
Madhu2012=ldply(xmlToList("pubmed_sample.xml"), data.frame) #completes with errors: "row names were found from a short variable and have been discarded"
View(Madhu2012) #for easy checking that the data is properly formatted
Madhu2012.Clean=Madhu2012[Madhu2012[25]=='Y',] #gets rid of duplicated rows
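Once that workflow makes sense, pulling your SimilarWeb endpoints follows the same pattern. The sketch below is only an outline: the URL list is a placeholder, and it assumes the endpoints return XML (if they actually return JSON, use a JSON package such as jsonlite instead of XML).
# rough sketch: download each endpoint and parse it; not tested against the real API
require("RCurl")
urls <- c(
  "https://api.similarweb.com/SimilarWebAddon/id.priceprice.com/all"
  # ...add the other ~40 endpoints here
)
fetchOne <- function(url) {
  raw <- getURL(url)                  # fetch the response over HTTPS
  doc <- xmlParse(raw, asText=TRUE)   # parse the response text as XML
  xmlToList(doc)                      # flatten to an R list for later shaping
}
results <- lapply(urls, fetchOne)     # one list element per endpoint
From there you can shape each list into a data frame with ldply(), exactly as in the PubMed example above.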
Here is a link that should help you get started.
http://www.informit.com/articles/article.aspx?p=2215520
If you have never used R before, it will take a little getting used to, but it's worth it. I've been using it for a few years now, and compared to Excel I have seen R perform anywhere from a couple of hundred percent to many thousands of percent faster. Good luck.

2 Sequential Transactions, setting Detail Number (Revit API / Python)

I made a tool to rename view numbers (“Detail Number”) on a sheet based on their location on the sheet. Where this breaks is the transactions. I'm trying to do two transactions sequentially in Revit Python Shell. I originally did this in Dynamo, and that failed in a similar way, so I know it's something to do with transactions.
Transaction #1: Add a suffix (“-x”) to each detail number to ensure the new numbers won’t conflict (1 will be 1-x, 4 will be 4-x, etc)
Transaction #2: Change detail numbers with calculated new number based on viewport location (1-x will be 3, 4-x will be 2, etc)
Better visual explanation here: https://www.docdroid.net/EP1K9Di/161115-viewport-diagram-.pdf.html
Py File here: http://pastebin.com/7PyWA0gV
Attached is the Python file, but essentially what I'm trying to do is:
# <---- Make unique numbers
t = Transaction(doc, 'Rename Detail Numbers')
t.Start()
for i, viewport in enumerate(viewports):
    setParam(viewport, "Detail Number", getParam(viewport, "Detail Number") + "x")
t.Commit()
# <---- Do the thang
t2 = Transaction(doc, 'Rename Detail Numbers')
t2.Start()
for i, viewport in enumerate(viewports):
    setParam(viewport, "Detail Number", detailViewNumberData[i])
t2.Commit()
As I explained in my answer to your comment in the Revit API discussion forum, the behaviour you describe may well be caused by a need to regenerate between the transactions. The first modification does something, and the model needs to be regenerated before the modifications take full effect and are reflected in the parameter values that you query in the second transaction. You are accessing stale data. The Building Coder provides all the nitty gritty details and numerous examples on the need to regenerate.
Summary of this entire thread including both problems addressed:
http://thebuildingcoder.typepad.com/blog/2016/12/need-for-regen-and-parameter-display-name-confusion.html
So this issue actually had nothing to do with transactions or doc regeneration. I discovered (with some help :) ) that the problem lay in how I was setting/getting the parameter. "Detail Number", like a lot of parameters, has duplicate versions that share the same descriptive param name on a viewport element.
Apparently the reason for this might be legacy issues, though I'm not sure. Thus, when I was trying to get/set the detail number, it was occasionally grabbing the incorrect read-only parameter, the one whose built-in enumeration is "VIEWER_DETAIL_NUMBER". The correct one is "VIEWPORT_DETAIL_NUMBER". This was happening because I was trying to get the param just by passing the descriptive param name "Detail Number". Revising how I get/set parameters via the built-in enum resolved this issue. See the images in the PDF below.
Please see pdf for visual explanation: https://www.docdroid.net/WbAHBGj/161206-detail-number.pdf.html
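For completeness, here is the shape of that fix in Revit Python Shell. It is only a sketch of the approach described above (going through the BuiltInParameter enum rather than the display name); the viewport objects and the transactions are the same as in the earlier snippet.
from Autodesk.Revit.DB import BuiltInParameter

def get_detail_number(viewport):
    # VIEWPORT_DETAIL_NUMBER is the writable parameter; the similarly named
    # VIEWER_DETAIL_NUMBER is the read-only one that was being picked up by mistake
    return viewport.get_Parameter(BuiltInParameter.VIEWPORT_DETAIL_NUMBER).AsString()

def set_detail_number(viewport, value):
    viewport.get_Parameter(BuiltInParameter.VIEWPORT_DETAIL_NUMBER).Set(value)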

Cross Object References Workfront text editor

Hello, I am attempting to link data that is connected to a task on a project report, using the text editor.
So far I have this as my code:
displayname=Recvd Medical Rates
linkedname=project:tasks
namekey=DE:Documentation Received Date
querysort=project:tasks:Document - Medical Rates:Documentation Received Date
textmode=true
valuefield=Documentation Received Date
valueformat=customDateAsString
I need to display data from a specific task within a specific custom form on a project report. I know there is no standard method of linking a project with a task, but the relationship is there, and from my research it seems possible. I believe that I do not have the correct syntax.
Can somebody please help me with this? I have tried all types of combinations; I even tried adding the aggregator:
aggregator.displayformat=customDateAsString
aggregator.function=MIN
aggregator.namekey=Documentation Received Date
aggregator.valuefield=DE:Documentation Received Date
aggregator.valueformat=customDateAsAtDate
Either way I try to link the information, the actual entered data will not display. So far the report knows that it is a date field; I know this because I am able to click into the field on the project report and choose a date, but the date will not remain selected once I have chosen it, leading me to believe that the field is somehow linked, but done incorrectly.
Please help.
The answer is below:
displayname=Plans and Benefits Received
listdelimiter=
listmethod=nested(tasks).lists
textmode=true
type=iterate
valueexpression=IF(CONTAINS("Plans and Benefits",{name}),{actualCompletionDate})
valueformat=HTML
So how does it work? See below.
displayname=Plans and Benefits Received
^display name that you want^
listdelimiter=
^decides delimiter^
listmethod=nested(tasks).lists
^calls the nested or sub-tasks^
textmode=true
^allows text editor mode to function^
type=iterate
^makes the data display as an iteration of the original^
valueexpression=IF(CONTAINS("Plans and Benefits",{name}),{actualCompletionDate})
^determines where the data is being pulled from^
valueformat=HTML
^Format set to HTML^

how to add a digital signature to an xlsx file

I am trying to add a digital signature to an xlsx file... Can't seem to find any resources for this (other than adding signatures to literal/regular xml files). Is this possible with docx4j? I see it includes jaxb-xmldsig but there are no samples that I could find. Perhaps someone could point me in the right direction?
EDIT: Per Jason, I looked at the differences via the demo webapp....
There are two new entries in [Content_Types].xml:
<Default Extension="sigs"
ContentType="application/vnd.openxmlformats-package.digital-signature-origin"/>
<Override ContentType="application/vnd.openxmlformats-package.digital-signature-xmlsignature+xml" PartName="/_xmlsignatures/sig1.xml"/>
Two new parts within a new top level directory (_xmlsignatures):
/_xmlsignatures/origin.sigs
/_xmlsignatures/sig1.xml
There is also a _rels directory within _xmlsignatures which contains a single file origin.sigs.rels. I can post more info if that will be helpful.
Is it not the DigSig from the extended properties?
See: http://www.schemacentral.com/sc/ooxml/e-extended-properties_Properties.html
See: http://msdn.microsoft.com/en-us/library/documentformat.openxml.extendedproperties.properties_members(v=office.14).aspx (DigitalSignature)
See: docx4j\xsd\docProps\shared-documentPropertiesExtended.xsd
The DigitalSignature element contains a binary blob.
If it is, you can add the DigSig to the property by editing the extended properties:
// fetch the extended properties part (docProps/app.xml)
DocPropsExtendedPart docPropsExtPart = wordMLPackage.getDocPropsExtendedPart();
// read its current contents
Properties extProp = docPropsExtPart.getContents();
// apply the modified properties ('props' and the ExtendedProperties helper come from the calling code)
ExtendedProperties.modifyProp(props.getExtendedProperties(), extProp);
// update the package's shortcut to this part
wordMLPackage.setPartShortcut(docPropsExtPart, Namespaces.PROPERTIES_EXTENDED);

How to create XML in Google custom search Autocomplete?

I tried to use the structure at https://developers.google.com/custom-search/docs/queries but it won't upload successfully.
<Autocompletions>
<Autocompletion term="cake" type="1" language=""/>
<Autocompletion term="strawberry.*" type="2" match="2" language=""/>
<Autocompletion term="vanilla" type="2" language=""/>
<Autocompletion term="apple" type="3" language="">
<Promotion id="1" queries="dessert" title="Apple pie for dessert!" url="http://www.example.com/applepieforsale"
start_date="" end_date="" image_url="" description="Apple pie is the best dessert ever!"/>
<Promotion id="2" queries="apple" title="Buy Apple pie" url="http://www.example.com/applepieforsale"
start_date="" end_date="" image_url="" description="We stock the best apple pie in the world, right here."/>
</Autocompletion>
</Autocompletions>
My website uses WordPress; I get the titles of posts and want to use them as query strings for Google Custom Search.
Please help me figure out how to create the XML for autocomplete.
It's not liking the description attributes. If you remove those, then it will work.
Leave all terms in the XML like this:
<Autocompletion term="%term-title%" type="%type%" language=""/>
Be careful with the length of the terms: I found that if a term is 3 or fewer words, Google will accept it fine. There is also no big restriction on the amount of entries you're adding (I added 9,000 entries in one go). And I added them in Russian, so there are no problems with encoding either.
The final format should be like this:
<Autocompletions>
<Autocompletion term="%term-title%" type="%type%" language=""/>
...
<Autocompletion term="%term-title%" type="%type%" language=""/>
</Autocompletions>
Not terribly well documented by Google, so you end up wasting time on things like this. Hope it helps :)
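If you are generating the file from WordPress post titles, any small script will do. Here is a rough sketch in Python: the title list is a placeholder (pull yours from the WP REST API or a database export), and the type value simply mirrors the plain entries shown above.
import xml.etree.ElementTree as ET

# placeholder titles; in practice these come from your WordPress posts
titles = ["Apple pie recipe", "Strawberry cake", "Vanilla ice cream"]

root = ET.Element("Autocompletions")
for title in titles:
    # keep terms to roughly 3 words or fewer, as noted above
    ET.SubElement(root, "Autocompletion",
                  {"term": title, "type": "1", "language": ""})

ET.ElementTree(root).write("autocompletions.xml",
                           encoding="utf-8", xml_declaration=True)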
