SOOT - CompilationDeathException (and a phantom army)

Here is the command I use in a Windows 10 command shell:
java
-cp .\soot-2.5.0.jar soot.Main
-cp ".;R:\...\OCLRuler\lib;C:\...\jdk1.8.0_144\bin"
-pp -process-dir R:\...\OCLRuler\src\
-src-prec java
-d R:\...\test\soot
-allow-phantom-refs
-main-class OCLRuler
When I execute it, I get the following output:
Soot started on Tue Sep 26 13:28:32 EDT 2017
Warning: java.dyn.InvokeDynamic is a phantom class!
Warning: Main is a phantom class!
Warning: MainMulti is a phantom class!
Warning: oclruler.a_test.MainRawTesting is a phantom class!
... all of them (100+ lines)...
Warning: oclruler.utils.ToolBox is a phantom class!
OCLRuler.java: Class "oclruler.genetics.EvaluatorOCL" not found.
OCLRuler.java: Class "oclruler.genetics.EvaluatorOCL" not found.
Exception in thread "main" soot.CompilationDeathException: Could not compile
at soot.javaToJimple.JavaToJimple.compile(JavaToJimple.java:104)
at soot.javaToJimple.InitialResolver.formAst(InitialResolver.java:117)
at soot.JavaClassSource.resolve(JavaClassSource.java:54)
at soot.SootResolver.bringToHierarchy(SootResolver.java:215)
at soot.SootResolver.bringToSignatures(SootResolver.java:239)
at soot.SootResolver.processResolveWorklist(SootResolver.java:154)
at soot.SootResolver.resolveClass(SootResolver.java:124)
at soot.Scene.loadClass(Scene.java:448)
at soot.Scene.loadClassAndSupport(Scene.java:433)
at soot.Scene.loadNecessaryClasses(Scene.java:1076)
at soot.Main.run(Main.java:167)
at soot.Main.main(Main.java:141)
All libs used in the OCLRuler project are included in OCLRuler/lib and all sources in OCLRuler/src. The output directory is not included in the Soot directory. Also, the project does compile (I'm working on and with it). The . directory contains all the soot/jasmin/heros jars.
Still, all classes are considered phantoms, and the Soot compilation aborts because (I guess) it lacks the bodies of these classes. I mean that "EvaluatorOCL" (i.e., still guessing, the source of the CompilationDeathException) is a "phantom class".
What's wrong?
Should I add each and every package to Soot's classpath?
I've tried all sorts of command-line variations until I got profoundly lost. Does anybody have a clue on the matter?
Thanks a lot.
Edouard

I changed the -pp -process-dir R:\...\OCLRuler\src\ argument to -pp -process-dir R:\...\OCLRuler\
and it seems to work fine... The phantoms are still strolling around, but there are result files in the output folder!
[edit:]
Oops, this is going to become a new question... The output files are... EMPTY!
As I said, there are still warnings about phantoms, and outputs are generated for all files (java and class alike).
Why are they empty?
[/edit]

This might well be a problem with Soot's source-code frontend, which is heavily outdated by now. I would recommend compiling the .java files to .class and then giving those to Soot.
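A minimal sketch of that workflow in the Windows shell, reusing the (truncated) paths from the question; the bin output folder is a hypothetical choice:

REM compile the sources to .class files first; the @argfile avoids listing every file by hand
dir /s /b R:\...\OCLRuler\src\*.java > sources.txt
javac -cp "R:\...\OCLRuler\lib\*" -d R:\...\OCLRuler\bin @sources.txt
REM then hand the compiled classes to Soot with -src-prec class
java -cp .\soot-2.5.0.jar soot.Main -cp "R:\...\OCLRuler\bin;R:\...\OCLRuler\lib" -pp -process-dir R:\...\OCLRuler\bin -src-prec class -d R:\...\test\soot -allow-phantom-refs -main-class OCLRuler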

Related

Win10: ASDF can't load system (ASDF_OUTPUT_TRANSLATION error)

Update 2
I think @faré is right, it's an output translation problem.
So I declared the environment variable ASDF_OUTPUT_TRANSLATIONS and set it to E:/. Now (asdf:require-system "my-system") yields a different error: Uneven number of components in source to destination mapping: "E:/", which led me to this SO topic.
Unfortunately, his solution doesn't work for me. So I tried the other answer and set ASDF_OUTPUT_TRANSLATIONS to (:output-translations (t "E:/")). Now I get yet another error:
Invalid source registry (:OUTPUT-TRANSLATIONS (T "E:/")).
One and only one of
:INHERIT-CONFIGURATION or
:IGNORE-INHERITED-CONFIGURATION
is required.
(will be skipped)
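Presumably, then, the value needs one of those two keywords appended. Something like this, a sketch with E:/asdf-cache/ as a hypothetical cache directory (note the trailing slash, which ASDF requires for directories):

(:output-translations (t "E:/asdf-cache/") :ignore-inherited-configuration)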
Original Posting
I have a simple system definition but can't get ASDF to load it.
(asdf-version 3.1.5, sbcl 1.3.12 (upgraded to 1.3.18 AMD64), slime 2.19, Windows 10)
What I have tried so far
Following the ASDF manual: "4.1 Configuring ASDF to find your systems"
There it says:
For Windows users, and starting with ASDF 3.1.5, start from your
%LOCALAPPDATA%, which is usually ~/AppData/Local/ (but you can ask in
a CMD.EXE terminal echo %LOCALAPPDATA% to make sure) and underneath
create a subpath config/common-lisp/source-registry.conf.d/
That's exactly what I did:
Echoing %LOCALAPPDATA%, which evaluates to C:\Users\my-username\AppData\Local
Underneath I created the subfolders config\common-lisp\source-registry.conf.d\ (in total: C:\Users\my-username\AppData\Local\config\common-lisp\source-registry.conf.d\)
The manual continues:
there create a file with any name of your choice but with the type conf, for instance 50-luser-lisp.conf; in this file, add the following line to tell ASDF to recursively scan all the subdirectories under /home/luser/lisp/ for .asd files: (:tree "/home/luser/lisp/")
That’s enough. You may replace /home/luser/lisp/ by wherever you want to install your source code.
In the source-registry.conf.d folder I created the file my.conf and put in it (:tree "C:/Users/my-username/my-systems/"). This folder contains a my-system.asd.
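For concreteness, think of a minimal system definition along these lines (the actual file contents are not shown in the question; the component name main is hypothetical):

(defsystem "my-system"
  :components ((:file "main")))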
And here comes the weird part:
If I now type (asdf:require-system "my-system") in the REPL I get the following error:
Can't create directory C:\Users\my-username\AppData\Local\common-lisp\sbcl-1.3.12-win-x86\C\Users\my-username\my-systems\C:\
So the problem is not that ASDF doesn't find the file; it does. But (whatever the reason) it tries to create a really weird subfolder hierarchy, which ultimately fails because at the end it tries to create the folder C:, and Windows doesn't allow folder names containing a colon.
Another approach: (push path asdf:*central-registry*)
If I try
> (push #P"C:/Users/my-username/my-systems/" asdf:*central-registry*)
(#P"C:/Users/my-username/my-systems/"
#P"C:/Users/my-username/AppData/Roaming/quicklisp/quicklisp/")
> (asdf:require-system "my-system")
I get the exact same error.
I don't know what to do.
Update
Because of the nature of the weird path ASDF was trying to create, I thought maybe I could bypass the problem by specifying a relative path instead of an absolute one.
So I tried
  (:tree "\\Users\\my-username\\my-systems")
in my conf file. Still the same error.
Ahem. It looks like an output-translations problem.
I don't have a Windows machine right now, but all of this used to work the last time I tried.
Can you set up some ad hoc output-translations for now that will make it work?
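For instance, something along these lines in the REPL before requiring the system; a sketch assuming asdf:initialize-output-translations accepts the configuration DSL directly, with E:/asdf-cache/ as a hypothetical writable directory:

(asdf:initialize-output-translations
 '(:output-translations (t "E:/asdf-cache/") :ignore-inherited-configuration))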

Lua: error loading module file not found

I'm trying to set up Lighttpd + Lua + FastCGI to run a web interface on an embedded MIPS board. But the important part here, I guess, is Lua.
When trying to run /usr/local/bin/wsapi.fcgi (which is a Lua script) I get this error:
/usr/bin/lua: error loading module 'lfcgi' from file '/usr/local/lib/lua/5.1/lfcgi.so':
File not found
stack traceback:
[C]: ?
[C]: in function 'require'
/usr/local/share/lua/5.1/wsapi/fastcgi.lua:9: in main chunk
[C]: in function 'require'
/usr/local/bin/wsapi.fcgi:9: in main chunk
[C]: ?
Which is really strange, because ls shows that the file is there and all permissions are OK:
# ls -l /usr/local/lib/lua/5.1/lfcgi.so
-rwxr-xr-x 1 0 0 21152 /usr/local/lib/lua/5.1/lfcgi.so
And what is more frustrating: if I actually remove the file, Lua shows a different error, which means the first error wasn't really caused by Lua being unable to locate the file.
So I'm a bit lost here. It looks like the error message is misleading, and the problem isn't really that the file is not found, but what is it then...
P.S. The error comes from the file wsapi/fastcgi.lua, from line 9, which looks like this:
local lfcgi = require"lfcgi"
Maybe there is something wrong with the require syntax? I'm no expert in Lua, so I can't tell.
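As for the P.S.: the require syntax is fine. In Lua, a function call whose single argument is a string literal needs no parentheses, so these two lines are equivalent:

local lfcgi = require("lfcgi") -- conventional form
local lfcgi = require"lfcgi" -- same call, parentheses omitted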
OK, I figured it out. It turned out to be a missing dependency, as @Ctx suggested.
readelf -d lfcgi.so | grep NEEDED
shows that it needs libfcgi.so.0, which is a symlink to libfcgi.so, and I only have the latter, not the symlink.
After creating the symlink, it works now (actually another error comes up, but that is a different story :P).
By the way, the error message is really confusing: it looks like the file lfcgi.so is missing, when in fact it is one of its dependencies that is causing the problem.
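A sketch of the check-and-fix sequence; the directory holding libfcgi.so is an assumption here, so adjust it to wherever the library actually lives on the board:

# list the shared objects the module was linked against
readelf -d /usr/local/lib/lua/5.1/lfcgi.so | grep NEEDED
# create the missing symlink next to the real library
ln -s /usr/lib/libfcgi.so /usr/lib/libfcgi.so.0
# refresh the dynamic linker cache so the new name resolves
ldconfig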

"Unrecognized option: --format=COBERTURAXML" in trying to convert JSCover report to cobertura xml

I'm trying to convert a JSCover report to Cobertura XML.
Based on what I've read, the command is as follows:
java -cp JSCover-all.jar jscover.report.Main --format=COBERTURAXML REPORT-DIR SRC-DIRECTORY
But I get an error
"Error: Could not find or load main class jscover.report.Main"
Even if I use the fully qualified path to where JSCover-all.jar is located.
So I tried putting JSCover-all.jar on the classpath and ran the following command instead:
java -cp jscover.report.Main --format=COBERTURAXML target/local-storage-proxy target/local-storage-proxy/original-src
I no longer get the first error, but I'm now getting the following one:
Unrecognized option: --format=COBERTURAXML
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
I hope someone could help me with it. Many thanks!
The first attempt is the correct approach. The error means that JSCover-all.jar is not in the directory you are executing the command from. An absolute path is not needed; a relative one will do.
In the second approach, you have passed 'jscover.report.Main' as the class-path to the JVM and '--format=COBERTURAXML' as a parameter to the 'java' command itself; since the JVM does not recognize that option, it refuses to start.
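Putting the two together: keep jscover.report.Main as the main class and put JSCover-all.jar on the class-path, running from the directory that contains the jar (or adjusting the relative path):

java -cp JSCover-all.jar jscover.report.Main --format=COBERTURAXML target/local-storage-proxy target/local-storage-proxy/original-src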

Error grabbing Grapes ... unresolved dependency ... not found

UPDATE 8/6:
The beefed up logging has shown me that there is an issue deleting the old jar from the cache, which leads to the fatal "not found" error. There are other threads similar to this, but only when someone is locking the file with their IDE. We are running a single groovy script from Jenkins, and no one is logged into this box.
We ran Process Explorer right after the failure and there were no locks. Then I logged in with the user that Jenkins uses to run the script, and I got no error deleting the files.
Also, it seems there was a fix in Ivy 2.1 to not fail when the jar cannot be deleted, and I'm on Ivy 2.2 (Groovy 1.8.4). What gives?
Couldn't delete outdated artifact from cache: C:\Users\myUser\.groovy\grapes\com.a.b.c\x-y-z\jars\x-y-z-1.496.jar
then the false(?) error:
Caught: java.lang.ExceptionInInitializerError
java.lang.ExceptionInInitializerError
Caused by: java.lang.RuntimeException: Error grabbing Grapes -- [unresolved dependency: com.a.b.c#x-y-z;1.+: not found]
at smokeTestSuccess.<clinit>(smokeTestSuccess.groovy)
Interestingly enough, this happens every day the first time the script is run after 5am. I guess the cache gets invalidated through some default config at 5am? Is this some kind of clue?
Original post:
I am intermittently getting an error when running a number of different Groovy scripts which all share an identical @Grab declaration (file names changed to protect the innocent). First, the full Grab declaration:
@GrabResolver(name = 'libs.release', root = 'http://myserver:8081/artifactory/libs-release', m2compatible = 'true')
@Grapes([
    @Grab(group = 'com.a.b.c', module = 'x-y-z', version = '1.+', changing = true),
    @Grab('commons-lang:commons-lang:2.3'),
    @Grab('log4j:log4j:1.2.16'),
    @Grab('gpars:gpars:0.12'),
    @Grab('jsr166y:jsr166y:1.7.0'),
    @Grab('org.codehaus.groovy.modules.http-builder:http-builder:0.6'),
    @Grab('org.apache.commons:commons-collections:3.2.1'),
    @Grab('org.apache.httpcomponents:httpclient:4.2.2'),
    @Grab('org.apache.httpcomponents:httpcore:4.2.3'),
    @Grab('org.cyberneko.html:nekohtml:1.9.17'),
    @Grab('xerces:xercesImpl:2.11.0'),
])
@GrabConfig(systemClassLoader = true)
Then the error:
Caught: java.lang.ExceptionInInitializerError
java.lang.ExceptionInInitializerError
Caused by: java.lang.RuntimeException: Error grabbing Grapes -- [unresolved dependency: com.a.b.c#x-y-z;1.+: not found]
Upon doing numerous internet searches, the cause always seems to be very simple, one of these two basic problems:
1. Repository unreachable
2. Jar file doesn’t exist
However, in the artifactory logs, I've proven that the file is actually being downloaded:
Artifactory did accept the request for download:
2014-07-17 07:58:19,938 [ACCEPTED DOWNLOAD] libs-release-local:com/a/b/c/x-y-z/1.477/x-y-z-1.477.jar for anonymous/165.226.40.155.
Artifactory did deliver the jar:
20140717075820|156|REQUEST|165.226.40.155|non_authenticated_user|GET|/libs-release/com/a/b/c/x-y-z/1.477/x-y-z-1.477.jar|HTTP/1.1|200|1276695
The scripts virtually always work if they are simply restarted. This all leads me to believe that the issue is the Grab timing out. Theoretically, the second time I run the script the file is in the cache, things happen faster, and thus it doesn't fail.
For the real request above, I can see about 20 seconds of elapsed time in the HTTP log from request to download.
Questions:
Does my theory seem correct?
Is there a way to increase the amount of time that the script will wait for the @Grab to resolve?
Does putting a try/catch block around the @Grab statements seem like a good idea? Or will that just hide the real problem?
Thanks in advance!
I think I finally figured out the answer to my own question.
I believe there is some sort of bug in Groovy 1.8.4 (or Ivy 2.2), especially since this behavior mirrors an ancient documented Ivy bug with this exact error message scheme and behavior.
Upgrading to Groovy 2.3.6 (which includes Ivy 2.3) appears to solve the issue.
I also still have no idea why the jars cannot be deleted; nothing is locking them. I experimented with moving the grape cache to a less locked-down folder to rule out a permission issue, but this didn't help:
-Dgrape.root=D:\Temp\grapeCache
UPDATE 8/19:
Once we upgraded to Groovy 2.3.6, the error went away, but I then figured out that the jar was no longer being downloaded at all when using the "1.+" dynamic version. Something in defaultGrapeConfig.xml was causing an issue. Everything is finally working properly after (in addition to the Groovy upgrade) we overrode defaultGrapeConfig.xml with our own stripped-down file using this JAVA_OPTS entry:
-Dgrape.config=D:\Temp\myGrapeConfig.xml
which had these contents:
<ivysettings>
  <settings defaultResolver="downloadGrapes"/>
  <resolvers>
    <chain name="downloadGrapes">
    </chain>
  </resolvers>
</ivysettings>
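If the chain is not meant to stay empty (here the @GrabResolver annotation supplies the resolver at run time), an entry can also be added directly in the file; a sketch reusing the Artifactory root from the @GrabResolver above:

<ivysettings>
  <settings defaultResolver="downloadGrapes"/>
  <resolvers>
    <chain name="downloadGrapes">
      <ibiblio name="libs.release" root="http://myserver:8081/artifactory/libs-release" m2compatible="true"/>
    </chain>
  </resolvers>
</ivysettings>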
ALSO:
For completeness (further steps):
In Jenkins GUI, update the job(s):
a. Update the drop down for each script: Execute Groovy Script > Groovy Version > Groovy-2.3.6
b. Update the JAVA_OPTS for each script (you have to click the ‘advanced’ button under the script to see JAVA_OPTS):
-Dgrape.config=D:\Software\SfGrapeConfig.xml
Optional logging switches: -Dgroovy.grape.report.downloads=true -Divy.message.logger.level=4
In the actual Groovy script itself, delete this option from the @GrabResolver annotation: , m2compatible = 'true'
If you get this or a similar error:
"could not find client or server jvm under [whatever JAVA_HOME is], please check that it is a valid jdk / jre containing the desired type of jvm"
delete groovy.exe and groovyw.exe from D:\Software\Groovy-2.3.6\bin (if the exes do not exist, the Jenkins Groovy plugin will use the bat file versions instead, and those handle the 32-bit / 64-bit problem better than the exes).

java will intermittently not resolve symlinks on Linux

I'm trying to resolve canonical paths for all the files in a folder tree, but for some reason Java will not resolve the symlinks (and yet, intermittently, the JVM security code will resolve the symlink properly within FilePermission and cause a security error).
Env:
$ java -version
java version "1.6.0_23"
OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.2)
OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)
A known symlink in the system points to /usr/share/java/gnome-java-bridge.jar:
$ ls -l /usr/lib/jvm/java-6-openjdk/jre/lib/ext/gnome-java-bridge.jar
lrwxrwxrwx 1 root root 50 2012-02-24 13:39 /usr/lib/jvm/java-6-openjdk/jre/lib/ext/gnome-java-bridge.jar -> ../../../../../../share/java/gnome-java-bridge.jar
The following code should resolve this known symlink:
String symlinkedFilePath =
    "/usr/lib/jvm/java-6-openjdk/jre/lib/ext/gnome-java-bridge.jar";
File symlinkedFile = new File(symlinkedFilePath);
System.out.println(symlinkedFile.getAbsolutePath());
System.out.println(symlinkedFile.getCanonicalPath());
but produces:
/usr/lib/jvm/java-6-openjdk/jre/lib/ext/gnome-java-bridge.jar
/usr/lib/jvm/java-6-openjdk/jre/lib/ext/gnome-java-bridge.jar
A further test, using the following code, will sometimes return true for the permission check, but sometimes will return false:
String symlinkedFilePath =
    "/usr/lib/jvm/java-6-openjdk/jre/lib/ext/gnome-java-bridge.jar";
File symlinkedFile = new File(symlinkedFilePath);
FilePermission recursivePermission = new FilePermission(
    symlinkedFile.getParentFile().getParent() + "/-", "read");
FilePermission filePermission = new FilePermission(
    symlinkedFile.getAbsolutePath(), "read");
System.out.println(recursivePermission);
System.out.println(filePermission);
System.out.println(
    "Can read symlink: " + recursivePermission.implies(filePermission));
The typical result is:
(java.io.FilePermission /usr/lib/jvm/java-6-openjdk/jre/lib/- read)
(java.io.FilePermission /usr/lib/jvm/java-6-openjdk/jre/lib/ext/gnome-java-bridge.jar read)
Can read symlink: true
but when debugging, if I step through the creation of the FilePermission on the target file, internally the path is resolved to the symlink, and the output results in:
(java.io.FilePermission /usr/lib/jvm/java-6-openjdk/jre/lib/- read)
(java.io.FilePermission /usr/lib/jvm/java-6-openjdk/jre/lib/ext/gnome-java-bridge.jar read)
Can read symlink: false
The problem is that within the context of the app in which the permission checking actually takes place, the symlink is always resolved by the FilePermission object, but never by my own calls to file.getCanonicalPath() as demonstrated above.
Does this make sense to anyone?
A colleague of mine confirmed the issue on OpenJDK 6u23, but not on any prior or following versions. That being said, since the issue has
A) a workaround in the form of the system property
-Dsun.io.useCanonCaches=false
OR
-Dsun.io.useCanonPrefixCache=false
and
B) appears to be resolved in the later build (u24),
there appears to be little motivation to dig any deeper.
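Applied from the command line, the workaround is just a matter of passing one of those properties when launching the affected app (the main class name here is hypothetical):

java -Dsun.io.useCanonCaches=false SymlinkTest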
In Unix a symlink is a "special" file with its own permissions.
The fact that you have read permission on the symbolic link doesn't imply you have it for the linked file.
My guess here is that you are running your program as a user that can read the symlink but not the actual file.
When entering debug mode you trigger a call to some method that changes the internal state of the FilePermission object, making it resolve to the actual file and thus return "false".
When you get "true", it's just telling you that you can read the symbolic link.
In your place, I'd check the permissions on this file:
- /usr/share/java/gnome-java-bridge.jar
and on the two directories:
- /usr/share
- /usr/share/java
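A quick way to inspect all three entries at once (ls -ld shows the directories themselves rather than their contents):

ls -ld /usr/share /usr/share/java /usr/share/java/gnome-java-bridge.jar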
