The class edu.mit.csail.sdg.alloy4whole.ExampleUsingTheCompiler provides an example of how to execute Alloy commands from the command line. The backend solver used in this example is Sat4J. I would love to change the solver to one of the faster ones, like Plingeling. Unfortunately I can't work out how to achieve this. Simply changing the line
options.solver = A4Options.SatSolver.SAT4J;
into
options.solver = A4Options.SatSolver.PLingelingJNI;
doesn't work; I get the following error message:
Exception in thread "main" Fatal error:
Unknown exception occurred: kodkod.engine.AbortedException: kodkod.engine.satlab.SATAbortedException: java.io.IOException: Cannot run program "plingeling": error=2, No such file or directory
at edu.mit.csail.sdg.alloy4compiler.translator.TranslateAlloyToKodkod.executeCommand(TranslateAlloyToKodkod.java:1079)
at edu.mit.csail.sdg.alloy4compiler.translator.TranslateAlloyToKodkod.executeCommand(TranslateAlloyToKodkod.java:1065)
at edu.mit.csail.sdg.alloy4compiler.translator.TranslateAlloyToKodkod.execute_command(TranslateAlloyToKodkod.java:381)
at edu.mit.csail.sdg.alloy4whole.ExampleUsingTheCompiler.main(ExampleUsingTheCompiler.java:72)
Caused by: kodkod.engine.AbortedException: kodkod.engine.satlab.SATAbortedException: java.io.IOException: Cannot run program "plingeling": error=2, No such file or directory
at kodkod.engine.Solver.solve(Solver.java:147)
at edu.mit.csail.sdg.alloy4compiler.translator.A4Solution.solve(A4Solution.java:1058)
at edu.mit.csail.sdg.alloy4compiler.translator.TranslateAlloyToKodkod.executeCommand(TranslateAlloyToKodkod.java:1070)
... 3 more
Caused by: kodkod.engine.satlab.SATAbortedException: java.io.IOException: Cannot run program "plingeling": error=2, No such file or directory
at kodkod.engine.satlab.ExternalSolver.solve(ExternalSolver.java:255)
at kodkod.engine.Solver.solve(Solver.java:140)
... 5 more
Caused by: java.io.IOException: Cannot run program "plingeling": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at java.lang.Runtime.exec(Runtime.java:620)
at java.lang.Runtime.exec(Runtime.java:485)
at kodkod.engine.satlab.ExternalSolver.solve(ExternalSolver.java:221)
... 6 more
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:248)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 9 more
The Alloy GUI seems to get around this problem by copying some files (including the plingeling executable) into the right place before it runs.
Thanks to the questions linked from Tarciana's answer, I successfully got this working on my machine (a Mac).
In order to use solvers like Plingeling that are distributed as executables, one should run
export PATH=<path_to_solver_binaries_and_libraries>:$PATH
before running java.
In order to use solvers like MiniSat that are distributed as dynamic libraries, one should add the argument
-Djava.library.path=<path_to_solver_binaries_and_libraries>
when running java.
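Putting the two pieces together, here is a minimal sketch modeled on the ExampleUsingTheCompiler class shipped with Alloy 4.2; the class name RunWithExternalSolver, the model file model.als, and the paths below are placeholders, not part of Alloy itself:
import edu.mit.csail.sdg.alloy4.A4Reporter;
import edu.mit.csail.sdg.alloy4.Err;
import edu.mit.csail.sdg.alloy4compiler.ast.Command;
import edu.mit.csail.sdg.alloy4compiler.ast.Module;
import edu.mit.csail.sdg.alloy4compiler.parser.CompUtil;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Options;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Solution;
import edu.mit.csail.sdg.alloy4compiler.translator.TranslateAlloyToKodkod;

public class RunWithExternalSolver {
    public static void main(String[] args) throws Err {
        A4Reporter rep = A4Reporter.NOP;
        // "model.als" is a placeholder for whatever Alloy model you want to run.
        Module world = CompUtil.parseEverything_fromFile(rep, null, "model.als");
        A4Options options = new A4Options();
        // External executable solver: the plingeling binary must be found on PATH.
        options.solver = A4Options.SatSolver.PLingelingJNI;
        // JNI solver alternative: needs -Djava.library.path=<dir> instead of PATH.
        // options.solver = A4Options.SatSolver.MiniSatJNI;
        for (Command command : world.getAllCommands()) {
            A4Solution ans = TranslateAlloyToKodkod.execute_command(rep, world.getAllReachableSigs(), command, options);
            System.out.println(command + " : " + (ans.satisfiable() ? "SAT" : "UNSAT"));
        }
    }
}
Launching it would then look roughly like this, using the same placeholders as above:
export PATH=<path_to_solver_binaries_and_libraries>:$PATH
java -Djava.library.path=<path_to_solver_binaries_and_libraries> -cp <path_to_alloy_jar>:. RunWithExternalSolver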
I had the same problem as you, as shown in my question:
Execution Error when change the SATSolver from SAT4J to MiniSAT
The solution pointed out by @Aleksandar in a previous question (see Alloy API resulting in java.lang.UnsatisfiedLinkError) works for me on an older Ubuntu version (10.0.0), but it does not work on more recent Ubuntu versions (such as 14.04 or 16.04).
When choosing other solvers such as zchaff or minisatprover, I observe that the error changes; for instance:
"The required JNI library cannot be found: java.lang.UnsatisfiedLinkError: no zchaffx5 in java.library.path"
For all the other solvers, it seems that the library being looked for (e.g. zchaffx5) is newer than the one shipped in the x86-linux folder inside alloy-4.2.jar (zchaffx1). I think the bundled libraries for these solvers are outdated. If you manage to find a solution to this problem, please let us know.
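If you want to see which solver libraries your copy of the jar actually bundles for Linux, and compare their names against the one named in the UnsatisfiedLinkError, listing the jar contents is a quick check; the jar name below is taken from above and may differ on your machine:
jar tf alloy-4.2.jar | grep x86-linux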
Related
I'm taking an online big data course and ran into a problem while installing Apache Spark.
I've done everything correctly, but when I try to run spark-submit there seems to be an issue with Java.
When I run this:
(base) C:\SparkCourse>spark-submit ratings-counter.py
I get this error:
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.spark.unsafe.array.ByteArrayMethods.<clinit>(ByteArrayMethods.java:54)
at org.apache.spark.internal.config.package$.<init>(package.scala:1095)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.deploy.SparkSubmitArguments.$anonfun$loadEnvironmentArguments$3(SparkSubmitArguments.scala:157)
at scala.Option.orElse(Option.scala:447)
at org.apache.spark.deploy.SparkSubmitArguments.loadEnvironmentArguments(SparkSubmitArguments.scala:157)
at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:115)
at org.apache.spark.deploy.SparkSubmit$$anon$2$$anon$3.<init>(SparkSubmit.scala:1022)
at org.apache.spark.deploy.SparkSubmit$$anon$2.parseArguments(SparkSubmit.scala:1022)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:85)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make private java.nio.DirectByteBuffer(long,int) accessible: module java.base does not "opens java.nio" to unnamed module @5b94b04d
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Constructor.checkCanSetAccessible(Constructor.java:188)
at java.base/java.lang.reflect.Constructor.setAccessible(Constructor.java:181)
at org.apache.spark.unsafe.Platform.<clinit>(Platform.java:56)
... 13 more
Any ideas?
Cheers!
I reinstalled Windows and started everything from scratch.
I installed JDK version 8 and this version of Spark: "spark-3.0.3-bin-hadoop2.7.tgz". I set all the paths correctly, and it worked: I can open the pyspark shell and run spark-submit, for example, but there is still a lot of text in the cmd window that I can't get rid of.
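For what it's worth, this InaccessibleObjectException is typically a sign that spark-submit is running on a much newer JDK than the Spark release supports, so after installing JDK 8 it is worth confirming which Java actually gets picked up. From the same cmd prompt (plain standard commands, nothing Spark-specific):
java -version
echo %JAVA_HOME%
Both should point at the JDK 8 install; if they don't, the PATH or JAVA_HOME entries still reference the old JDK.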
When I use cargo run I get this error:
warning: unused manifest key: package.author
error: failed to lock file: C:\Users\lucas\Desktop\Work\Coding\melb_os\Cargo.lock
Caused by:
Denied Access. (os error 5)
I'm using rustc 1.57.0-nightly on a Windows machine. My antivirus popped up a warning when I ran the code. I had compiled the program before without any problems, but this time it happened and the build doesn't seem to work anymore.
I disabled the antivirus, added rustc to the path, and tried running it from another IDE, but nothing changed.
I found the log entry where Norton had blocked rustc-nightly without my knowledge and whitelisted it. If you have this problem, just open your antivirus history and look for the log of the action; from there you can undo it.
I downloaded the latest Spark version because of the fix for
ERROR AsyncEventQueue:70 - Dropping event from queue appStatus.
After setting the environment variables and running the same code in PyCharm, I'm getting this error, for which I can't find a solution.
Exception in thread "main" java.util.NoSuchElementException: key not found: _PYSPARK_DRIVER_CONN_INFO_PATH
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:59)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:59)
at org.apache.spark.api.python.PythonGatewayServer$.main(PythonGatewayServer.scala:64)
at org.apache.spark.api.python.PythonGatewayServer.main(PythonGatewayServer.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Any help?
I met this problem too. Here is what I did, hoping it helps you:
1. Find your Spark version; my Spark version is 2.4.3.
2. Find your pyspark version; my pyspark version was 2.2.0.
3. Reinstall pyspark so that it matches your Spark version:
pip install pyspark==2.4.3
Then everything is OK. Hope this helps.
I am using PySpark 2.3.1 with PyCharm 2018.1.4 and am facing a similar issue on my Windows machine.
When I run this Python file using spark-submit, it executes successfully.
I have followed the steps below:
Created a new project in PyCharm; let's call it Demo.
Go to Settings -> Project: Demo -> Project Interpreter. Make sure the project interpreter is Python 2.7.
Go to Settings -> Project: Demo -> Project Structure and add content roots.
I added two content roots: one pointing to the directory where the Apache Spark distribution lives, and the other to py4j-0.10.7-src.zip.
In my case these locations are
C:\apache-spark
and
C:\apache-spark\python\lib\py4j-0.10.7-src.zip
Created a new Python file (Demo1.py) and pasted the content below into it:
from pyspark import SparkContext
sc = SparkContext(master="local", appName="Spark Demo")
rdd = sc.textFile("C:/apache-spark/README.md")
wordsRDD = rdd.flatMap(lambda words: words.split(" "))
wordsRDD = wordsRDD.map(lambda word: (word, 1))
wordsCount = wordsRDD.reduceByKey(lambda x, y: x+y)
print wordsCount.collect()
Running this Python file in PyCharm gives the error below:
Exception in thread "main" java.util.NoSuchElementException: key not found: _PYSPARK_DRIVER_CONN_INFO_PATH
Whereas the same program, when executed from the command prompt, yields the correct result:
C:\Users\manish>spark-submit C:\Demo\demo1.py
Any suggestions to solve this problem?
I have had a similar exception. My problem was running Jupyter and Spark as different users; when I run them as the same user, the problem is solved.
Details:
When I updated Spark from v2.2.0 to v2.3.1 and then ran the Jupyter notebook, the error log was as follows:
Exception in thread "main" java.util.NoSuchElementException: key not found: _PYSPARK_DRIVER_CONN_INFO_PATH
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:59)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:59)
at org.apache.spark.api.python.PythonGatewayServer$.main(PythonGatewayServer.scala:64)
at org.apache.spark.api.python.PythonGatewayServer.main(PythonGatewayServer.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
When googling, I came across the following link:
spark-commits mailing list archives
In the code
/core/src/main/scala/org/apache/spark/api/python/PythonGatewayServer.scala
there is this change:
+ // Communicate the connection information back to the python process by writing the
+ // information in the requested file. This needs to match the read side in java_gateway.py.
+ val connectionInfoPath = new File(sys.env("_PYSPARK_DRIVER_CONN_INFO_PATH"))
+ val tmpPath = Files.createTempFile(connectionInfoPath.getParentFile().toPath(),
+ "connection", ".info").toFile()
According to this change, a temporary file is created in the directory of the connection-info path. My problem was running Jupyter and Spark as different users, so I think the process could not create the temp file. When I run them as the same user, the problem is solved. I hope it helps.
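If you want to verify the user mismatch described above, a quick check (assuming a standard Jupyter setup) is to compare the user inside the notebook with the user that owns the Spark installation:
# in a Jupyter notebook cell:
!whoami
# in the terminal session used for the Spark install:
whoami
If the two names differ, that matches the situation described in this answer.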
I had this problem too, and it turned out that the pyspark code I was importing/running from PyCharm was still the Spark 2.2 install instead of the Spark 2.3 installation that I had updated SPARK_HOME to point to.
Specifically, I had added spark-2.2 to my PyCharm project structure and then marked its python folder as "Sources" so PyCharm would recognize all its symbols. So the PyCharm code was importing from there instead of from spark-2.3, and the older code didn't set the _PYSPARK_DRIVER_CONN_INFO_PATH environment variable.
If Vezir's answer didn't fix your case, try tracing into the creation of SparkContext and carefully compare the path being imported from with the path of your Spark install. Similarly, if you installed pyspark into your Python project via pip, make sure you installed 2.3.1 to match your installed Spark version.
This can happen when you are running Spark 2.3.1 jars with an older version of pyspark (e.g. 2.3.0).
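Across these answers the common theme is a version mismatch, so it can save time to print both versions explicitly before digging deeper; the commands below assume spark-submit and python resolve to the environment you actually use:
spark-submit --version
python -c "import pyspark; print(pyspark.__version__)"
The two reported versions should match (e.g. both 2.3.1); if pyspark was installed via pip, pip show pyspark reports the same information.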
I've downloaded a prebuilt version of Spark on my Mac (OS X Mavericks), but when I try to open an interactive shell by typing bin/pyspark, I get the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/launcher/Main
Caused by: java.lang.ClassNotFoundException: org.apache.spark.launcher.Main
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
I have googled every part of the error and checked out some other Stack Overflow threads, but I can't find anything that addresses this error. Any idea what's going on or how to fix it?
One idea I have is that Scala is a dependency that I need to download separately... but I really don't know.
I had the same issue before, and it turned out to be a permissions issue: I was logged in as a user who had no access to the Spark files (root had downloaded Spark).
Another possibility is that you downloaded the source code and did not build the project from source. :P
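If you are unsure which of these two cases applies, two quick checks (paths are placeholders) are to look at who owns the Spark directory and whether it contains any prebuilt jars at all; a source-only download has no lib/ (older releases) or jars/ (newer releases) directory with compiled Spark jars:
ls -ld /path/to/spark
ls /path/to/spark/lib /path/to/spark/jars 2>/dev/null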
Hope it helps.
After installing the CRF++ toolkit, I try to run the program "test.java" in the CRF++-0.54/java folder. To do this, I type:
java -cp /home/amira/CRF++-0.54/java/org/chasen/crfpp test
But I get the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: test
Caused by: java.lang.ClassNotFoundException: test
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
Could not find the main class: test. Program will exit.
In the README file there is the command java -classpath CRFPP.jar test -d ../dic, but the problem is that I can't find the classpath for CRFPP.jar. Moreover, I don't understand what ../dic in the command refers to.
Make changes in the Makefile of the java directory as per your machine settings.
Give the correct Java path and the compiler you are using.
Run make java in the swig directory.
Run make all in the java directory.
Before running make in the java directory, ensure that you have the model file in the proper location; otherwise it won't be able to open the model file.
Run make test in the java directory.
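Once those make steps succeed, a hedged sketch of the invocation is below. It assumes that test.class and the JNI wrapper library (e.g. libCRFPP.so) ended up in the java directory, and that any extra arguments from the README (such as -d ../dic) are simply appended:
cd /home/amira/CRF++-0.54/java
java -Djava.library.path=. -cp CRFPP.jar:. test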