In my Mule flow I have one value that needs to be looked up in an Excel file, and the corresponding values related to that value also need to be fetched.
It's a kind of multiple-value retrieval in Mule, based on one value, from an Excel file.
If you are running a recent version of Mule you can take advantage of DataWeave; compared to the old DataMapper it offers far more power when it comes to performing particular logic within your mappings.
One example you can find in the Mule documentation is extracting records based on a condition while mapping:
%dw 1.0
%output application/xml
---
users: payload.users.name[?($ == "Mariano")]
If instead you are still on an older version, then with the DataMapper you don't have as many options: basically the only thing you can do is convert all the records to another format, for example a Groovy map, and then perform your search there using a small piece of Groovy or Java.
Alternatively, you can always convert to XML and use an XPath expression.
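The map-and-search approach described above can be sketched in Python; the row structure and field names here are hypothetical, standing in for whatever your Excel transformer actually produces:

```python
# Sketch of the lookup logic: given rows parsed from the Excel file
# (e.g. as a list of dicts), find the row(s) matching one value and
# pull the related columns. All field names below are made up.
rows = [
    {"name": "Mariano", "dept": "IT", "city": "Buenos Aires"},
    {"name": "Ana", "dept": "HR", "city": "Madrid"},
]

# Filter on the lookup value, then project the related fields:
matches = [r for r in rows if r["name"] == "Mariano"]
related = [(r["dept"], r["city"]) for r in matches]
print(related)  # [('IT', 'Buenos Aires')]
```

The same filter-then-project shape is what the DataWeave `[?(...)]` selector above expresses in one line.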
Hope this helps.
In case I want to change the text or add an element in XML files, I can just directly convert the file to a string, replace or add elements as a string, then convert it back to XML.
In what use cases is that approach bad? Why do we need to manipulate it using libraries such as XML DOM or XPath?
The disadvantage of manipulating XML via string operators is that achieving a parsing-dependent goal for even one particular XML document is already harder than using a proven XML parser. Achieving the goal for equivalent XML document variations will be nearly impossible, especially for anyone naive enough to be considering such an approach in the first place.
Not convinced?
Scan the table of contents of the Extensible Markup Language (XML) 1.0 (Fifth Edition), W3C Recommendation, 26 November 2008. If you do not understand everything, your hand-written, poor imitation of an XML parser will fail, if not on your first test case, then on future variations which you're obligated to handle if you wish to claim your code works with XML. To mention just a few challenges, your program should:
Report if its input XML is not well-formed.
Handle character and entity references.
Handle comments and CDATA sections.
Tempted to parse XML via string operators, including regex? Don't do it.
Use a real XML parser.
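To make the point concrete, here is a small Python sketch using the standard-library `xml.etree.ElementTree`: entity references and CDATA sections mean the raw markup is not the text, so string matching and a real parser see different documents.

```python
import xml.etree.ElementTree as ET

# A tiny document with an entity reference and a CDATA section. A naive
# string search on the raw markup misses the decoded content; a real
# parser handles both transparently.
doc = '<msg><body>Tom &amp; Jerry<![CDATA[ <not a tag> ]]></body></msg>'

assert "Tom & Jerry" not in doc          # string matching sees raw markup only
body_text = ET.fromstring(doc).find("body").text
assert "Tom & Jerry" in body_text        # entity reference decoded
assert "<not a tag>" in body_text        # CDATA merged into the text

# A real parser also reports malformed input instead of guessing:
try:
    ET.fromstring("<a><b></a>")
except ET.ParseError as err:
    print("not well-formed:", err)
```

Every one of the challenges listed above (well-formedness, entity references, comments and CDATA) is handled for free here; a regex-based approach has to reimplement each of them.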
I am trying to create nested tables in my Asciidoctor pdf output but I cannot find the syntax.
If I understand it right, nested tables should be supported in Asciidoctor as of 1.5.0. I am running a Docker container that has 1.5.5 (https://github.com/asciidoctor/docker-asciidoctor).
I've tried the example in Table 11 here: http://www.methods.co.nz/asciidoc/newtables.html but to no avail.
Note that AsciiDoc and Asciidoctor are not the same thing, so make sure you are looking at the correct documentation.
I have not tried it, but if a nested table is going to work, the cell containing it will have to use the asciidoc style. You will then most likely have to put the table in a block and escape all the pipe symbols (using \| instead of |) or use some other delimiter.
A web search turned up this open issue in the Asciidoctor tracker requesting (improvements to) nested table support. So this seems not to be implemented yet, at least in some backends. The first comment contains an example of how to specify a nested table.
Are you sure you cannot use something other than nested tables? They are usually not the most readable thing.
In order to make it work, you need to delete two unintended newlines. Here's the modified content.
[width="75%",cols="1,2a"]
|==============================================
|Normal cell |Cell with nested table
[cols="2,1"]
!==============================================
!Nested table cell 1 !Nested table cell 2
!==============================================
|==============================================
I must say I used asciidoctor-pdf for the first time, and although the process has been streamlined as much as possible with the Docker image, there is a much quicker way to get rendered feedback: Asciidoctor.js, a Chrome extension that converts your .adoc file to HTML and reloads when you save the file.
Asciidoctor.js comes from the same great team that created and maintains Asciidoctor, so it has the latest Asciidoctor under the hood.
We are using SonarQube 4.5.1 for our projects and are planning to provide end users with a list of activated/deactivated rules.
What is the best way to export/import this within SonarQube to Excel?
There is a backup option in Quality Profiles, but it does not export the rule descriptions.
I looked directly at the rules table in the database, but because of HTML tags in the descriptions, a semicolon-delimited export does not work.
I would also like to know how we can add customized rules to the existing set of rules. What is the procedure?
The SonarQube interface is really going to be the best referential for your users. Based on the info in your comment, I'd suggest a simple web form rather than trying to construct a spreadsheet.
It may help to know that you can construct the URL to any rule using the repositoryKey and key returned in the XML profile backup:
http://[server]/coding_rules#rule_key=[repositoryKey]:[key]
e.g. https://sonarcloud.io/api/rules/search?rule_key=csharpsquid%3AS907
The API supports many parameters that are documented here: https://sonarcloud.io/web_api/api/rules/search (click the Parameters header above the horizontal line to open the descriptions).
For example, the languages parameter makes it possible to search for rules that apply to one or more languages (a comma-separated list). To get the list of all C# rules, you can use https://sonarcloud.io/api/rules/search?languages=cs
To export the rules in JSON format:
For C++ rules you can use the URL:
http://<server>:<port>/api/rules/search?languages=c%2B%2B
For C rules you can use the URL:
http://<server>:<port>/api/rules/search?languages=c
After saving the result of the search API in a JSON file, to cover the question entirely, the JSON result can be imported into Excel with https://github.com/VBA-tools/VBA-JSON.
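If Excel/VBA is not a requirement, the same flattening can be sketched in Python instead: produce a CSV that Excel opens directly. The field names below ("key", "name", "severity", "lang") follow a typical rules/search payload and may need adjusting for your SonarQube version; the sample record is a made-up excerpt.

```python
import csv
import io
import json

# Hypothetical excerpt of a rules/search response saved earlier:
payload = json.loads('''{"rules": [
    {"key": "csharpsquid:S907", "name": "goto should not be used",
     "severity": "MAJOR", "lang": "cs"}
]}''')

# Flatten the selected fields into CSV; extra JSON fields are ignored.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["key", "name", "severity", "lang"],
                        extrasaction="ignore")
writer.writeheader()
writer.writerows(payload["rules"])
print(out.getvalue())
```

In practice you would read the saved JSON file and write to a real .csv file instead of the in-memory buffer used here.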
The XML returned from direct REST calls to Connections 4.0 returns dates like so, from a File:
<published>2013-08-06T15:00:08.390Z</published>
<updated>2013-08-15T15:30:20.367Z</updated>
<td:created>2013-08-06T15:00:08.390Z</td:created>
<td:modified>2013-08-15T13:16:59.151Z</td:modified>
<td:lastAccessed></td:lastAccessed>
and from a File Comment:
<published>2013-08-08T18:04:44.949Z</published>
<updated>2013-08-08T18:05:39.566Z</updated>
<td:modified xmlns:td="urn:ibm.com/td">2013-08-08T18:05:39.566Z</td:modified>
<td:created xmlns:td="urn:ibm.com/td">2013-08-08T18:04:44.949Z</td:created>
The API documentation is vague about the conditions under which these dates are set:
<td:created> Creation timestamp in Atom format.
<td:modified> The date that the comment was last updated. Timestamp in Atom format.
<updated> The date that the comment was last updated, as defined in the Atom specification.
<published> The date the comment was initially published, as defined in the Atom specification.
Can one assume that <published> == <td:created> and that <updated> == <td:modified>, as the data seems to indicate, or are there circumstances under which these dates would have different values? Does the answer to this question vary by application (Files, Blogs, etc.)?
Edit
<updated> and <published> are Atom-defined properties. The <td:...> ones are IBM's extensions.
Another way to ask my question might be, What descriptions or definitions would I use to explain each of these dates to a user?
Whilst td:created and published are generally identical, the foremost exception being content created as a draft and published later, applications use td:modified and updated with slightly different semantics. In Wikis, for instance, updated reflects the time the page contents or metadata last changed, while td:modified is only updated when the page contents, i.e. title or text, are changed. I expect the API documentation to clarify these subtle details; if it doesn't, please post comments and ask for improvements.
Is there a quick way to find all the mandatory fields in an XSD file?
I need to quickly see all the mandatory fields in the schema.
Thanks.
Not sure if you're looking to do this through code. If not, Altova XMLSpy, for example, provides an option to "Generate Sample XML File" - with options to generate only mandatory fields.
Otherwise, if you're working with Java, for example, you can use something like the Eclipse XSD project for programmatic access to the XSD. (It even works without Eclipse.) Some additional details at "Are there any other frameworks that parse XSD other than XSOM?".
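For very simple schemas you can also get a rough answer without a schema framework: an XSD is itself XML, so you can scan it for `xs:element` declarations whose `minOccurs` is not "0" (the default `minOccurs` is 1, i.e. mandatory). This is only a sketch on an inline toy schema; it deliberately ignores choices, substitution groups, and imported schemas, which is where a real schema model earns its keep.

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

# Toy schema: "note" is optional (minOccurs="0"), the rest are mandatory.
xsd = '''<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="order">
    <xs:complexType><xs:sequence>
      <xs:element name="id" type="xs:string"/>
      <xs:element name="note" type="xs:string" minOccurs="0"/>
    </xs:sequence></xs:complexType>
  </xs:element>
</xs:schema>'''

root = ET.fromstring(xsd)
# Keep named element declarations whose minOccurs defaults to or exceeds 1:
mandatory = [el.get("name") for el in root.iter(XS + "element")
             if el.get("name") and el.get("minOccurs", "1") != "0"]
print(mandatory)  # ['order', 'id']
```

To run this against a file, replace `ET.fromstring(xsd)` with `ET.parse("schema.xsd").getroot()`.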
Take a look at this post; instead of exporting all fields, there's also an option to get only the mandatory ones... One significant difference compared with the answer you accepted is that you can also generate an Excel or CSV file in addition to the XML file; not to mention that the sample-XML approach is deficient by definition... I would pay attention to how mandatory choices, abstract typed elements, or abstract elements with substitution groups play out in your case.