Lua: extract XML node from a string with a pattern

An application is sending my script a Stream like this one:
<?xml version="1.0" encoding="UTF-8"?>
<root>
<aRootChildNode>
<anotherChildNode>
<?xml version="1.0">
<TheNodeImLookingFor>
... content ...
</TheNodeImLookingFor>
</anotherChildNode>
</aRootChildNode>
</root>
I want to extract the TheNodeImLookingFor section.
So far, I've got:
data = string.match(Stream, "^.+\<TheNodeImLookingFor\>.+\<\/TheNodeImLookingFor\>.+$")
The pattern is recognized in the Stream, but it doesn't extract the node and its content.

In general, it's not a good idea to use pattern matching (whether Lua patterns or regex) to extract XML. Use an XML parser.
For this problem, you don't need to escape < or / at all (and even if you did, Lua patterns use % to escape magic characters, not \). And use parentheses to capture the node and its content:
data = string.match(Stream, "^.+(<TheNodeImLookingFor>.+</TheNodeImLookingFor>).+$")
Or to get only the content:
data = string.match(Stream, "^.+<TheNodeImLookingFor>(.+)</TheNodeImLookingFor>.+$")
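As a side note, the anchors and the surrounding .+ aren't needed here, and the non-greedy .- is safer if the Stream could ever contain more than one such node, since it stops at the first closing tag:
data = string.match(Stream, "<TheNodeImLookingFor>(.-)</TheNodeImLookingFor>")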

Related

XSLT 3: convert XML to JSON

When I try to convert XML to JSON using XSLT 3 with
<xsl:copy-of select="xml-to-json($finalOutPut, map { 'indent' : false() })"/>
I get the error below:
net.sf.saxon.s9api.SaxonApiException: xml-to-json: element found in wrong namespace: Q{}wrapper
Basically I am converting one XML to another XML, renaming certain fields, and passing that XML as the input to xml-to-json().
Any suggestions?
The XML format that xml-to-json consumes is specified both in the XSLT 3.0 specification (https://www.w3.org/TR/xslt-30/#json-to-xml-mapping) and in the XPath and XQuery 3.1 functions specification: https://www.w3.org/TR/xpath-functions/#json.
Basically, all elements need to be in the namespace http://www.w3.org/2005/xpath-functions and are named map, array, string, boolean, number, etc., to reflect the JSON data types.
The error message suggests your input contains an element named wrapper in no namespace, so that is certainly not the right format for that function. You will need to use additional transformation steps to transform your XML to the one the function expects.
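For illustration, a minimal input in the format the function expects would look like this:
<map xmlns="http://www.w3.org/2005/xpath-functions">
   <string key="name">xyz</string>
   <number key="count">2</number>
</map>
which xml-to-json would serialize as {"name":"xyz","count":2}.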

How to build text from mixed xml content using Python?

I have a situation in which an XML document has information at varying depths (according to S1000D schemas), and I'm looking for a generic method to extract correct sentences.
I need to interpret a simple element containing text as one individual part/sentence, and when an element containing text has child elements that in turn contain text, I need to flatten/concatenate it into one string/sentence. The nested elements shall not be visited again once this is done.
Using Python's lxml library and applying the tostring function works OK if the source XML is pretty-printed, since I can then split the concatenated string on newlines to get each sentence. If the source isn't pretty-printed and sits on one single line, there are no newlines to split on.
I have tried the iter function and applying XPaths to each node, but this often gives different results in Python than the same XPath gives in XMLSpy.
I have started down some of the following paths, and my question is whether you have input on which ones to continue with, or whether you have other solutions.
I think I could use XSLT to preprocess the XML file, and then use a simpler Python script to divide the content into a list of sentences for further processing. Using Saxon with Python is now doable, but here I run into problems if the XML source contains entities that I cannot redirect Saxon to resolve (such as &nbsp;). I have no problem parsing files with lxml, so I tend to lean towards a cleaner Python solution.
lxml doesn't seem to have XPath support that can give me all nodes with text that contain one or more children containing text, plus all nodes that are simple elements with no parents containing text nodes. Is there a way to preprocess the parsed tree so that I can ensure it is pretty-printed in memory, so that tostring works the same way for every XML file? Otherwise, my logic gives me one string for a document with no whitespace, and multiple sentences/strings if the source had been pretty-printed. This doesn't feel OK.
What are my options? Use XSLT 1.0 in Python, other parsers to get a better handle on where I am in the tree, ...
Just to reiterate the issue: I am looking for a generic way to extract text, and the only rules for the XML source are that a sentence may be built from an element with child elements containing text, with no additional levels below that. The other possibility is the simple element, which cannot itself sit inside a parent element with text, since that case is covered by the first rule.
Help/thoughts are appreciated.
This is downright ugly code, a hasty hack with no real thought on form, beauty or finesse. All I am after is one way of doing this in Python. I'll tidy things up when I find a good solution that I want to keep. It is one possible solution, so I figured I'd post it to see if someone can be kind enough to show me how to do this instead.
The problem has been to find XPath expressions that get me all elements with text content, and then to act on them depending on their context. All my XPath expressions have given me the correct nodes, but also a root or ancestor that pulled in a more or less complete string at the beginning, so I gave up on those. My XPath works as it should in XSLT, but not in Python; I don't know why...
I had to resort to regex to find nodes containing strings that are not whitespace-only.
Using lxml with xpath and tostring gives different results depending on how the source XML is formatted, so I had to get around that.
The following formats have been tested:
<?xml version="1.0" encoding="UTF-8"?>
<root>
<subroot>
<a>Intro, element a: <b>Nested b to be included in a, <c>and yet another nested c-element</c> and back to b.</b></a>
<!-- Comment -->
<a>Simple element.</a>
<a>Text with<b> 1st nested b</b>, back in a, <b>and yet another b-element</b>, before ending in a.</a>
</subroot>
</root>
<?xml version="1.0" encoding="UTF-8"?>
<root>
<subroot>
<a>Intro, element a: <b>Nested b to be included in a, <c>and yet another nested c-element,
</c> and back to b.</b>
</a>
<!-- Comment -->
<a>Simple element.</a>
<a>Text with<b> 1st nested b</b>, back in a, <b>and yet another b-element</b>, before ending in a.</a>
</subroot>
</root>
<?xml version="1.0" encoding="UTF-8"?><root><subroot><a>Intro, element a: <b>Nested b to be included in a, <c>and yet another nested c-element</c> and back to b.</b></a><!-- Comment --><a>Simple element.</a><a>Text with<b> 1st nested b</b>, back in a, <b>and yet another b-element</b>, before ending in a.</a></subroot></root>
Python code:
from lxml import etree as ET
import re

REGEX_NS = {"re": "http://exslt.org/regular-expressions"}

dmParser = ET.XMLParser(resolve_entities=False, recover=True)
xml_doc = r'C:/Temp/xml-testdoc.xml'
parsed = ET.parse(xml_doc, dmParser)

# Visit every element whose own text has at least one non-whitespace character
for elem in parsed.xpath(r"//*[re:match(text(), '\S')]", namespaces=REGEX_NS):
    # The first two checks can yield None; the third rejects whitespace-only text
    tmp = elem.xpath(r"parent::*[re:match(text(), '\S')]", namespaces=REGEX_NS)
    if tmp and tmp[0].text and tmp[0].text.strip():
        continue  # The parent carries text too, so this node is already covered
    elif elem.xpath(r"./*[re:match(text(), '\S')]", namespaces=REGEX_NS):  # A child also contains text
        # Flatten the subtree to text and collapse all unwanted whitespace
        line = re.sub(r'\s+', ' ', ET.tostring(elem, encoding='unicode', method='text').strip())
        if line:
            print(line)
    else:  # Simple element
        print(elem.text.strip())
Always yields:
Intro, element a: Nested b to be included in a, and yet another nested c-element, and back to b.
Simple element.
Text with 1st nested b, back in a, and yet another b-element, before ending in a.

Base64-decode XML values using a Groovy script

I will be receiving the following XML data in a variable.
<order>
<name>xyz</name>
<city>abc</city>
<string>aGVsbG8gd29ybGQgMQ==</string>
<string>aGVsbG8gd29ybGQgMg==</string>
<string>aGVsbG8gd29ybGQgMw==</string>
</order>
Output:
<order>
<name>xyz</name>
<city>abc</city>
<string>hello world 1</string>
<string>hello world 2</string>
<string>hello world 3</string>
</order>
I know how to decode from base64, but the problem is that some of the values are already decoded and some are encoded. What is the best approach to decode this data using Groovy so that I get the output as shown?
The <string> tag values will always be encoded; all other tags and values will already be decoded.
Since there's no uncertainty about which nodes come encoded and which don't, there's no need to detect base64 encoding, and the way to do it is pretty simple:
Parse it. There are two preferable ways to do that in Groovy: XmlSlurper & XmlParser. They differ in computation & memory consumption, but both provide an object/structure representation in the end.
Work with that object structure: traverse all required elements and decode the content/attributes you need to decode.
Proceed further with the data and/or serialize it back to XML text, as in the sketch below.
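A minimal sketch with XmlParser (assuming Groovy 3+ package names, and that only the <string> elements ever need decoding) could look like this:
import groovy.xml.XmlParser   // groovy.util.XmlParser on Groovy 2.x
import groovy.xml.XmlUtil

def xml = '''<order>
    <name>xyz</name>
    <city>abc</city>
    <string>aGVsbG8gd29ybGQgMQ==</string>
    <string>aGVsbG8gd29ybGQgMg==</string>
    <string>aGVsbG8gd29ybGQgMw==</string>
</order>'''

def order = new XmlParser().parseText(xml)
// Only the <string> elements are known to be encoded, so decode just those
order.'string'.each { node ->
    node.setValue(new String(node.text().decodeBase64(), 'UTF-8'))
}
println XmlUtil.serialize(order)   // <string>hello world 1</string>, and so on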
Articles to look at:
Load, modify, and write an XML document in Groovy
https://www.baeldung.com/groovy-xml
https://groovy-lang.org/processing-xml.html
and many, many more.
Another cheat sheet always useful for Groovy noobs: http://groovy-lang.org/groovy-dev-kit.html
Check out how to traverse the structures there, for instance.

Trying to get a variable out of Sax JS

I'm using SAX JS to parse an XML file in Node. I want it to produce an object from the parsed file, but the best I seem to be able to do is console.log my parsed data.
I'm really new to streams in Node. I've googled and tried some things, but my fundamental problem seems to be that I can't get a grasp on where to begin with streams and how they relate to SAX JS.
How do I output the parsed XML file from SAX to a JS object?
Addendum
Ideally I'd like a JS object in a variable, but I'd also be happy getting JSON text out, which I could deserialize into a variable.
With SAX JS, I tried this.write(JSON.stringify(val)); from the closetag event handler, and it produces countless "error! Error: Invalid characters in closing tag" messages. I really have no idea what I'm doing here.
I've already tried xml2js (didn't do what I need), and xml4js (not maintained). The big problem I had with xml2js is that my xml file's text includes essential data in self-closing tags that ended up in a different key, completely separate from the text.
Here's an XML structure somewhat like what I need it to handle:
<p>The quick brown fox <del>jumps</del>
over the <lb n="15"/> lazy dog.</p>
I need all the text, and I need some way to insert the attribute of the lb tag into the text in a custom format.
Addendum 2
Here's a better example, along with an ideal result:
<p>The quick brown fox <del>jumps</del>
over the <lb n="15"/> lazy
<note type="marginal">325a</note> dog.</p>
Result:
The quick brown fox jumps over the [line 15] lazy [B:325a] dog.
From the sax npm package description we can see:
You can use it to build an object model out of XML, but it doesn't do
that out of the box.
Perhaps you might want to rethink your choice and take a look at one of the available alternatives, unless you really need streams because the XML file is huge and doesn't fit into machine memory.
As an example, here is how we can construct an object representation of an XML file using fast-xml-parser:
const parser = require('fast-xml-parser');
const data = `<?xml version="1.0" encoding="UTF-8"?>
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Don't forget me this weekend! <pb n="1"/> And have a plenty of sleep!</body>
</note>`;
const xmlObj = parser.parse(data, {
ignoreAttributes: false,
allowBooleanAttributes: true,
parseNodeValue: true,
parseAttributeValue: true
});
console.log('XML object: ', JSON.stringify(xmlObj));
The output will be:
XML object: {"note":{"to":"Tove","from":"Jani","heading":"Reminder","body":{"#text":"Don't forget me this weekend!And have a plenty of sleep!","pb":{"@_n":1}}}}
I've prepared a working demo on Repl.it.
If the file is big but still fits into memory, you might want to spin up a child process to offload the main thread.
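That said, if streaming with sax really is required, here is a minimal, hypothetical sketch using the sax parser events that collects the text into an ordinary variable, hard-coding the [line …] and [B:…] markers from the question's desired format:
const sax = require('sax');

function xmlToText(xml) {
  const parser = sax.parser(true); // strict mode
  let out = '';
  parser.ontext = (text) => { out += text; };
  parser.onopentag = (node) => {
    // Self-closing tags fire opentag too, so inject the custom markers here
    if (node.name === 'lb') out += `[line ${node.attributes.n}] `;
    if (node.name === 'note') out += '[B:';
  };
  parser.onclosetag = (name) => {
    if (name === 'note') out += '] ';
  };
  parser.write(xml).close();
  return out.replace(/\s+/g, ' ').trim();
}

// Prints: The quick brown fox jumps over the [line 15] lazy [B:325a] dog.
console.log(xmlToText('<p>The quick brown fox <del>jumps</del> over the <lb n="15"/> lazy <note type="marginal">325a</note> dog.</p>'));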

Removing html tags from a string in R

I'm trying to read a web page's source into R and process it as strings. I want to take the paragraphs out and remove the HTML tags from the paragraph text. I'm running into the following problem:
I tried implementing a function to remove the html tags:
cleanFun=function(fullStr)
{
#find location of tags and citations
tagLoc=cbind(str_locate_all(fullStr,"<")[[1]][,2],str_locate_all(fullStr,">")[[1]][,1]);
#create storage for tag strings
tagStrings=list()
#extract and store tag strings
for(i in 1:dim(tagLoc)[1])
{
tagStrings[i]=substr(fullStr,tagLoc[i,1],tagLoc[i,2]);
}
#remove tag strings from paragraph
newStr=fullStr
for(i in 1:length(tagStrings))
{
newStr=str_replace_all(newStr,tagStrings[[i]][1],"")
}
return(newStr)
};
This works for some tags but not all; an example where it fails is the following string:
test="junk junk<a href=\"/wiki/abstraction_(mathematics)\" title=\"abstraction (mathematics)\"> junk junk"
The goal would be to obtain:
cleanFun(test)="junk junk junk junk"
However, this doesn't seem to work. I thought it might be something to do with string length or escape characters, but I couldn't find a solution involving those.
This can be achieved simply through regular expressions and the grep family:
cleanFun <- function(htmlString) {
return(gsub("<.*?>", "", htmlString))
}
This will also work with multiple html tags in the same string!
This finds any instances of the pattern <.*?> in the htmlString and replaces them with the empty string "". The ? in .*? makes it non-greedy, so if you have multiple tags (e.g., <a> junk </a>) it will match <a> and </a> separately instead of the whole string.
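For example, with the test string defined in the question:
> cleanFun(test)
[1] "junk junk junk junk"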
You can also do this with two functions in the rvest package:
library(rvest)
strip_html <- function(s) {
html_text(read_html(s))
}
Example output:
> strip_html("junk junk<a href=\"/wiki/abstraction_(mathematics)\" title=\"abstraction (mathematics)\"> junk junk")
[1] "junk junk junk junk"
Note that you should not use regexes to parse HTML.
Another approach, using tm.plugin.webmining, which uses XML internally.
> library(tm.plugin.webmining)
> extractHTMLStrip("junk junk<a href=\"/wiki/abstraction_(mathematics)\" title=\"abstraction (mathematics)\"> junk junk")
[1] "junk junk junk junk"
An approach using the qdap package:
library(qdap)
bracketX(test, "angle")
## > bracketX(test, "angle")
## [1] "junk junk junk junk"
It is best not to parse HTML using regular expressions; see RegEx match open tags except XHTML self-contained tags.
Use a package like XML: read the HTML code in, parse it using, for example, htmlParse, and use XPaths to find the quantities relevant to you.
UPDATE:
To answer the OP's question
require(XML)
xData <- htmlParse('yourfile.html')
xpathSApply(xData, 'appropriate xpath', xmlValue)
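For instance, applied to the test string from the question (hypothetical usage: asText = TRUE tells htmlParse to treat its argument as HTML content rather than a file name, and the //body XPath grabs all text regardless of how libxml2 wraps the fragment):
> xData <- htmlParse(test, asText = TRUE)
> xpathSApply(xData, "//body", xmlValue)
[1] "junk junk junk junk"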
It may be easier with sub or gsub (note that the greedy <.*> only works here because the string contains a single tag; with several tags, prefer the non-greedy <.*?> shown above):
> test <- "junk junk<a href=\"/wiki/abstraction_(mathematics)\" title=\"abstraction (mathematics)\"> junk junk"
> gsub(pattern = "<.*>", replacement = "", x = test)
[1] "junk junk junk junk"
First, your subject line is misleading; there are no backslashes in the string you posted. You've fallen victim to one of the classic blunders: not as bad as getting involved in a land war in Asia, but notable all the same. You're mistaking R's use of \ to denote escaped characters for literal backslashes. In this case, \" means the double quote mark, not the two literal characters \ and ". You can use cat to see what the string would actually look like if escaped characters were treated literally.
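For example:
> cat(test)
junk junk<a href="/wiki/abstraction_(mathematics)" title="abstraction (mathematics)"> junk junk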
Second, you're using regular expressions to parse HTML. (They don't appear in your code, but they are used under the hood in str_locate_all and str_replace_all.) This is another of the classic blunders; see here for more exposition.
Third, you should have mentioned in your post that you're using the stringr package, but this is only a minor blunder by comparison.
