How to load a text file from within XQuery?

Is there an XQuery command to load a text file?
I can load an XML document by doing the following:
declare variable $text := doc("test.xml");
But it only seems to work if test.xml is a well-formed XML document. What I want is to load a plain test.txt file into a string variable. Something like this:
declare variable $str as xs:string := fn:loadfile("test.txt");
Can it be done?
I'm using the Saxon engine, but I can't find an answer in the Saxon documentation.

Indeed you can find one implementation of the file functions in Zorba: http://www.zorba-xquery.com/doc/zorba-1.4.0/zorba/xqdoc/xhtml/www.zorba-xquery.com_modules_file.html

XQuery 3.0 has the function fn:unparsed-text (which was originally defined in XSLT), which does exactly what you want. XQuery 3.0 is still a work in progress, and there are not many full XQuery 3.0 processors available yet, but many XQuery processors already support this function (including Saxon).
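For example, a minimal sketch assuming your Saxon version exposes fn:unparsed-text and that test.txt resolves against the static base URI:
declare variable $str as xs:string := fn:unparsed-text("test.txt");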

There is a standardization effort for this on EXPath. A spec already exists for an XQuery File module that is capable of doing what you describe: EXPath File Module Spec.
Yet, I don't know how many implementations are out there. Saxon doesn't seem to implement it, unfortunately (or please point me to it if it does). An example implementation is shipped with Zorba (see the XQDoc site of Zorba). If you want to know how to get started with Zorba, you can check out this tutorial: Get Started with XQuery and Zorba.
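For reference, with a processor that implements the EXPath File Module, reading a text file looks roughly like this (a sketch only; the exact module URI and whether an explicit import is required vary by processor):
import module namespace file = "http://expath.org/ns/file";
declare variable $str as xs:string := file:read-text("test.txt");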

XQuery by default (that is, the fn: namespace) doesn't have any file-access functions, but several processors provide their own extensions:
MarkLogic (see the example after this list):
xdmp:filesystem-file()
xdmp:filesystem-directory()
Zorba:
already mentioned by user457056
eXist:
eXist File Module
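For example, in MarkLogic the contents of a text file can be read roughly like this (a sketch; it assumes the path is readable by the MarkLogic server process):
declare variable $str := xdmp:filesystem-file("/tmp/test.txt");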

Saxon since version 9.2 has an extension of fn:collection that can be used to read unparsed text. Here is an example:
collection('file:///c:/TEMP?select=text.txt;unparsed=yes')
This is described under "Changes in this Release" for 9.2. Apparently it is not mentioned in the function library documentation. However it works well and I have been using it a lot.

Related

Import schema from XSD to OpenAPI/swagger YAML

I have a schema definition in an XSD file which is provided by ISO 20022. This schema needs to be used in a Swagger/OpenAPI definition (in YAML format). Since the XSD file has about 1000 lines, manual work is impractical. This old thread mentions a solution, but it is not straightforward.
Does anyone know of any tool which provides an easy way to import the schema definitions from an XSD file into a Swagger/OpenAPI YAML file?
You could try xsd2json from the npm module jgexml. It was written to do precisely this for a large API specified in XSD.
I could not escape from manual work in this task. What I did was use "xsd2json" to convert the XSD schema to JSON. Then I used the website www.json2yaml.com to get it as YAML. Afterwards, I created a Swagger file myself and merged the YAML into it.
Thanks for your responses!

Read and operate on a TS file using node js

I am creating an npm library where I need to read the files in the folder from which my library function was invoked on the command line, and then operate on those files.
By operate I mean checking whether a variable or a function exists, modifying variables and functions, etc.
The files will be TypeScript files.
Any help on how to proceed would be great.
Seems like you need some kind of AST parser like Esprima or babel-parser. These tools can parse the content of JS/TS files and build an abstract syntax tree that can be traversed, modified, and converted back to source code.
There are a lot of useful tools in the Babel toolset that simplify these operations. For example, babel-traverse simplifies searching for the target statement or expression, babel-types helps to match the types of AST nodes, and babel-generator generates source code from the AST.
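A minimal sketch along those lines, assuming @babel/parser and @babel/traverse are installed (the input file name and the function name being looked up are hypothetical):
import { parse } from "@babel/parser";
import traverse from "@babel/traverse";
import { readFileSync } from "fs";

// Parse a TypeScript file into a Babel AST without executing it.
const code = readFileSync("example.ts", "utf8"); // hypothetical input file
const ast = parse(code, { sourceType: "module", plugins: ["typescript"] });

// Check whether a top-level function declaration named "myFunc" exists.
let found = false;
traverse(ast, {
  FunctionDeclaration(path) {
    if (path.node.id && path.node.id.name === "myFunc") {
      found = true;
    }
  },
});
console.log("myFunc declared:", found);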
It's going to be very difficult to get these answers without running the files.
So the best approach is probably to just import the files as usual and see what side-effects running the files had. For example, you can check if a file exported anything.
If this doesn't solve your problem, you will have to parse the files. The best way to do that might be to use the TypeScript compiler itself:
https://github.com/microsoft/TypeScript/wiki/Using-the-Compiler-API
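A rough sketch of that approach (the input file name is hypothetical): parse a single file into an AST and walk it to collect top-level function and variable names, without running the file.
import * as ts from "typescript";
import { readFileSync } from "fs";

const fileName = "example.ts"; // hypothetical input file
const sourceFile = ts.createSourceFile(
  fileName,
  readFileSync(fileName, "utf8"),
  ts.ScriptTarget.Latest,
  /* setParentNodes */ true
);

// Collect the names of function and variable declarations found in the file.
const functionNames: string[] = [];
const variableNames: string[] = [];

function visit(node: ts.Node): void {
  if (ts.isFunctionDeclaration(node) && node.name) {
    functionNames.push(node.name.text);
  } else if (ts.isVariableDeclaration(node) && ts.isIdentifier(node.name)) {
    variableNames.push(node.name.text);
  }
  ts.forEachChild(node, visit);
}
visit(sourceFile);

console.log("functions:", functionNames);
console.log("variables:", variableNames);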

How to use helm-semantic-or-imenu for code navigation with type annotated python code

I would like to use the helm-semantic-or-imenu command to navigate components of type-annotated Python code, but whatever code analyzer is used to identify the components doesn't seem to recognize type-annotated Python. Functions with a return type annotation don't get recognized at all, and functions with annotated arguments show the types instead of the argument names in the signatures.
The main problem I have is that I do not properly understand the components that are involved in making this work (when it does work). Obviously it might help to somehow update the code analyzer, but in which project do I find that? helm? semantic? imenu? Or, as someone mentioned elsewhere with regard to code analysis, python.el? I could really use some help getting started on solving this. If the code analyzer is found in python.el, can I then modify it and make Emacs use a local version preferentially over the installed one?
EDIT:
After making the initial post I finally made a breakthrough in trying to figure out where the components come from. I searched for python*.el in all of the file system and discovered these:
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python.elc
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python-wy.elc
I found the source for Emacs 26.2 and discovered that this python.el does indeed seem to be responsible for parsing Python files for semantic. It also internally uses python-wy for recognizing a large portion of the language components. But unfortunately that is where I hit a brick wall. I was hoping to be able to monkey-patch the function that recognizes a function definition via a regexp or something, but semantic actually solves the problem the right way: python-wy is auto-generated from a formal grammar definition file (in the Emacs git repository, admin/grammars/python.wy), and figuring out how to modify that is unfortunately well beyond my abilities.
The semantic python backend doesn't appear to parse type annotations correctly (and there hasn't been much recent development on those libraries as far as I can tell). Since helm-semantic-or-imenu favors semantic when it is active, you can disable semantic altogether for python buffers unless you use its other features (personally I only use it for C/C++).
When the semantic mode-specific libraries are loaded they set imenu-create-index-function and imenu-default-goto-function, causing imenu to use semantic instead of python.el's imenu function.
To disable semantic support for your Python files you can customize semantic-new-buffer-setup-functions, only adding entries for the modes you want semantic support for, e.g. in your semantic hook (or alternatively with the customize UI):
(setq semantic-new-buffer-setup-functions
      '((c-mode . semantic-default-c-setup)
        (c++-mode . semantic-default-c-setup)
        (srecode-template-mode . srecode-template-setup-parser)
        (texinfo-mode . semantic-default-texi-setup)
        ;; etc.
        ;; (makefile-automake-mode . semantic-default-make-setup)
        ;; (makefile-mode . semantic-default-make-setup)
        ;; (makefile-gmake-mode . semantic-default-make-setup)
        ))

Give an example of: groovyc --sourcepath

I am unable to get the --sourcepath option of groovyc to work at all. Can someone furnish a trivial example of it actually doing anything?
Ultimately I want to use "groovyc" at the command line with a package-organized directory tree of mixed Groovy and Java source. I don't want to reference each source file explicitly. And I don't want to use an Ant or Maven task either, on grounds of principle (hey, is there a bug here?) and because the production scenario in which I might want to tweak the source will have neither, but will have Groovy. I know I could use Unix find, but must I resort to that?!
sourcepath isn't used anymore. It's only there for backwards compatibility and will be removed in the future.
The Groovy documentation is currently being rewritten; you can find a snapshot including the documentation for groovyc here: https://dl.dropboxusercontent.com/u/20288797/groovy-documentation/index.html#ThegroovycAntTask-groovyc

Importing csv files using groovy

I have developed a Groovy application. Now it is required that a CSV interface be provided for feeding the DB. That is, I have to load a CSV file, parse it, and insert the records into the DB in a transactional way. The question is whether there exists for Groovy something like the Ostermiller utils (a configurable CSV parser).
Thanks in advance,
Luis
Kelly Robinson just wrote a nice blog post about the different possibilities that are available to work with CSV files in Groovy.
Groovy and Java are interoperable. Take a look at the documentation for mixed Java and Groovy applications. Any Java class can easily be used in Groovy with no change (plus you get the Groovy syntax). If you are interested in the Ostermiller utils for your CSV parsing, you can use them directly from Groovy.
If the Ostermiller library does what you want, you can call it directly from Groovy. Just put the necessary jars in your groovy\lib directory and you should be ready to go.
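If the CSV is simple (no quoted fields with embedded commas), core Groovy can already get you quite far without any extra library. A minimal sketch (the file name is hypothetical):
// Read a simple CSV file using only core Groovy.
// For quoted fields or embedded separators, use a dedicated CSV library instead.
def rows = []
new File('data.csv').splitEachLine(',') { fields ->
    rows << fields
}
rows.each { println it }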
