Samples and tips on using GroovyResourceLoader - groovy

I am attempting to install my own GroovyResourceLoader and was wondering whether there is an authoritative guide somewhere describing all the moving bits.
I've noticed that when Groovy attempts to compile a script, it tries to find types by sending paths to the GRL. However, it does some strange things: sometimes it uses '$' as a separator and other times it uses plain old '.'.
Here's a snapshot of some logging from an attempt to load something. Ignoring the auto-import stuff, notice how it uses '$' as the package separator and then replaces each '$' with a '.', one at a time:
-->a$b$groovy$X$Something
-->a.b$groovy$X$Something
-->a.b.groovy$X$Something
I'm using Groovy 1.8.0.

The "$" you see come from Groovy trying to match inner classes. I strongly assume you have somewhere an "a.b.groovy.X.Something" which will lead groovy to try to discover all kinds of inner class combinations for this one. You could for example have a "a$b$groovy$X$Something.groovy" file.

Related

Passing filenames to python.subprocess across different platforms

I have a (cross-platform) script which invokes Excel/LibreOffice to open a file. Under Linux I don't have any problems, but on Windows there are problems when there are spaces in the path of the file. This is a known issue; I've read quite a few posts, but none seems to address the specifics. Here is what I do:
subprocess.run([self.settings.excel, path])
The problem I end up with is the following: Excel complains that the file Your%20Preferred%20Path%20With%20Spaces cannot be found (obviously), although the parameter I give to subprocess isn't URL-encoded like this.
So: who is garbling my string? Is it Excel, subprocess…?
I have found a workaround using os.startfile, but it is not cross-platform. While having to execute different code depending on the underlying platform may work as a patch, caring about such low-level details is hardly Pythonic.
Therefore: what should I do to get cross-platform, standard code that solves this issue (avoiding big libs like webbrowser if at all possible)? How can I build path so that it is passed to Excel in the expected format?
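For reference, the platform-dependent workaround I have boils down to something like this (a sketch; open_in_spreadsheet and its parameters are illustrative names, with self.settings.excel passed in as app):

import os
import subprocess
import sys

def open_in_spreadsheet(app, path):
    # Illustrative helper; in the real script this sits on the class that holds settings.
    if sys.platform == "win32":
        # Workaround: hand the file to the Windows shell via its file
        # association instead of passing the path to Excel ourselves.
        os.startfile(path)
    else:
        # Fine on Linux: list arguments bypass the shell, so spaces in
        # path need no extra quoting.
        subprocess.run([app, path])

It works, but it is exactly the kind of per-platform branching I would like to avoid.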

Reading dot files with graphviz package in haskell

For a program I have to reformat a dot file in Haskell to adjust the layout of its tree/content. I decided to use https://hackage.haskell.org/package/graphviz-2999.20.1.0/docs/Data-GraphViz.html, but I can't find simple tutorials and the documentation confuses me, namely:
1. It contains a function, dotToGraph, whose documentation reads: "A pseudo-inverse to graphToDot; "pseudo" in the sense that the original node and edge labels aren't able to be reconstructed."
I do need to retain the names of the nodes and edges, which I suppose are the labels mentioned here? But it seems weird that it cannot do this. It also does not take a file name or a string of dot text, so I assume this is the wrong function.
2. There also seems to be a parseDot function/class that comes with the package, but I can't find clear documentation for its use (see: https://hackage.haskell.org/package/graphviz-2999.20.1.0/docs/Data-GraphViz-Parsing.html#t:ParseDot).
Can you recommend a simple overview/summary/online example for doing this? Should I be using a different function? I have only recently started using Haskell, so the package documentation is sometimes difficult for me to decipher.
It would help a great deal, thanks!

How to use helm-semantic-or-imenu for code navigation with type annotated python code

I would like to use the helm-semantic-or-imenu command to navigate components of type-annotated Python code, but whatever code analyzer is used to identify the components doesn't seem to recognize type-annotated Python code. Functions with a return type annotation don't get recognized at all, and functions with annotated arguments show the types instead of the argument names in their signatures.
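For example, definitions like these (made up purely for illustration) are the kind that get mishandled:

from typing import Optional

# Made-up example, only to show the annotation styles involved.

# Annotated arguments, no return annotation: the signature shown in the
# index lists the types ("str", "int") instead of the argument names.
def greet(name: str, times: int = 1):
    return ("Hello, " + name + "! ") * times

# Return type annotation: this one does not show up in the index at all.
def pick_default() -> Optional[str]:
    return None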
The main problem I have is that I do not properly understand the components that are involved in making this work (when it does work). Obviously it might help to somehow update the code analyzer, but in which project do I find that? helm? semantic? imenu? Or, as someone mentioned elsewhere with regard to code analysis, python.el? I could really use some help getting started on solving this. If the code analyzer is found in python.el, can I then modify it and make Emacs use a local version preferentially over the installed one?
EDIT:
After making the initial post I finally made a breakthrough in figuring out where the components come from. I searched for python*.el across the whole file system and discovered these:
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python.elc
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python-wy.elc
I found the source for Emacs 26.2 and discovered that this python.el does indeed seem to be responsible for parsing Python files for semantic. It also internally uses python-wy to recognize a large portion of the language components. But unfortunately that is where I hit a brick wall. I was hoping to be able to monkey-patch the function that recognizes a function definition via a regexp or something, but semantic actually solves the problem the right way. python-wy seems to be auto-generated from a formal grammar definition file (in the Emacs git repository, admin/grammars/python.wy), and figuring out how to modify that is unfortunately well beyond my abilities.
The semantic python backend doesn't appear to parse type annotations correctly (and there hasn't been much recent development on those libraries as far as I can tell). Since helm-semantic-or-imenu favors semantic when it is active, you can disable semantic altogether for python buffers unless you use its other features (personally I only use it for C/C++).
When the semantic mode-specific libraries are loaded they set imenu-create-index-function and imenu-default-goto-function, causing imenu to use semantic instead of python.el's imenu function.
To disable semantic support for your Python files you can customize semantic-new-buffer-setup-functions, adding entries only for the modes you want semantic support for, e.g. in your semantic hook (or alternatively through the customize UI):
(setq semantic-new-buffer-setup-functions
      '((c-mode . semantic-default-c-setup)
        (c++-mode . semantic-default-c-setup)
        (srecode-template-mode . srecode-template-setup-parser)
        (texinfo-mode . semantic-default-texi-setup)
        ;; etc.
        ;; (makefile-automake-mode . semantic-default-make-setup)
        ;; (makefile-mode . semantic-default-make-setup)
        ;; (makefile-gmake-mode . semantic-default-make-setup)
        ))

How to prevent dead-code removal of utility libraries in Haxe?

I've been tasked with creating conformance tests of user input; the task is fairly tricky and we need very high levels of reliability. The server runs on PHP, the client runs on JS, and I thought Haxe might reduce the duplicated work.
However, I'm having trouble with dead-code removal. Since I am just creating helper functions (utilObject.isMeaningOfLife(42)) I don't have a main program that calls each one. I tried adding @:keep to a utility class, but it was cut out anyway.
I tried to specify that utility class through the -main switch, but I had to add a dummy main() method and this doesn't scale beyond that single class.
You can force all the files defined in a given package and its sub-packages to be included in the build using a compiler argument.
haxe --macro include('my.package') ..etc
This is a shortcut to the macro.Compiler.include function.
As you can see, the signature of this function allows you to include recursively and also to exclude packages.
static include (pack:String, rec:Bool = true, ?ignore:Array<String>, ?classPaths:Array<String>):Void
I think you don't have to use @:keep for each library class in that case.
I'm not sure if this is what you are looking for; I hope it helps.
Otherwise, these checks could be helpful:
Is it bad that the code is cut away if you don't use it?
Could it be that some code is inlined in the final output?
Compile your code using the compiler flag -dce std, as mentioned in the comments.
If you use the static analyzer, try without it.
Add @:keep and reference the class and function somewhere.
Otherwise, provide a minimal setup if you can reproduce the problem.

thrift js:node - Cannot use reserved language keyword: "not"

When converting a thrift object for nodejs:
thrift -r --gen js:node state_service.thrift
The following error is thrown:
[ERROR: /state_service.thrift:33] (last token was 'not') Cannot use
reserved language keyword: "not"
The lines in the code around 33 are:
typedef bool Not
struct Exp {
  1: string left
  2: Not not
  3: BinaryOp op
  4: AnyValue right
}
I am using the most recent Thrift version 0.9.2
Try changing not to something else; I think the problem is that 'not' may have another meaning in the language you chose to generate code for.
2: Not not
The solution is to not use the reserved language keyword, as advised by the Thrift compiler. These keywords are reserved for a reason: Thrift is a cross-language tool, and not is indeed a keyword in some of the languages it supports.
I don't want to change the processing code only because of a faulty js converter.
I beg to differ. Faulty is something that does not work although it should. Thrift clearly tells you that what you are about to try is illegal (as of today) and what the problem is.
To put it another way: with Linux, you can put uppercase and lowercase letters in a file name (actually you can put a whole bunch of strange things in, but I'll keep it simple). So creating a FILE and a file in the same folder will work perfectly. If you now take your program and run it on Windows, relying on that behaviour, you will sooner or later run into trouble and may start to complain that you "don't want to change your processing code only because of that faulty OS".
Note that complaining will not help you out of the pothole, although the endorphins emitted during that process will make sure you have a fun time. The solution, of course, is to wait until Microsoft fixes their faulty OS, because you make the rules. Correct?
Of course not. So if you feel the implementation is wrong - fine! This is Open Source, and nobody claimed perfection. You are free to provide a patch, and we will happily review it. But please make sure you tested it with all 20+ languages currently supported by Thrift.
