For my project, I need to stay on a specific JHipster version.
But I also want to play around with all the new JHipster features (version 5+).
My idea is therefore to run the newest JHipster inside a Docker container, so nothing interferes with my project setup.
I followed the official JHipster install guide.
The installation was successful.
Now comes the part where I am stuck.
Using ./gradlew test, all tests fail.
I tried ./gradlew test in both directories: once inside the docker container (docker container exec -it jhipster bash, then, inside the app folder, yarn install followed by ./gradlew test),
and once from my local directory where the volume is mounted (~/jhipster).
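Concretely, the two attempts look roughly like this (container name and mount path as described above; the app folder inside the container is whatever the generator created):

# attempt 1: inside the container
docker container exec -it jhipster bash
cd app            # the generated application folder (name may differ)
yarn install
./gradlew test

# attempt 2: on the host, in the mounted volume
cd ~/jhipster
./gradlew test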
What am I missing?
For anyone interested in seeing the failing tests (this is what all of them look like):
> Task :test
org.jhipster.health.web.rest.LogsResourceIntTest > changeLogs FAILED
java.lang.IllegalStateException
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException
Caused by: org.springframework.beans.factory.BeanCreationException
Caused by: io.jsonwebtoken.security.WeakKeyException
org.jhipster.health.web.rest.LogsResourceIntTest > getAllLogs FAILED
java.lang.IllegalStateException
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException
Caused by: org.springframework.beans.factory.BeanCreationException
Caused by: io.jsonwebtoken.security.WeakKeyException
org.jhipster.health.web.rest.LogsResourceIntTest > testLogstashAppender FAILED
java.lang.IllegalStateException
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException
Caused by: org.springframework.beans.factory.BeanCreationException
Caused by: io.jsonwebtoken.security.WeakKeyException
org.jhipster.health.web.rest.AccountResourceIntTest > testSaveInvalidEmail FAILED
java.lang.IllegalStateException
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException
Caused by: org.springframework.beans.factory.BeanCreationException
Caused by: io.jsonwebtoken.security.WeakKeyException
137 tests completed, 113 failed
> Task :test FAILED
FAILURE: Build failed with an exception.
I solved this issue, even though I do not understand HOW this is a fix.
Since I have a working .yo-rc.json, I used docker cp to copy this file into the docker container and then ran jhipster.
Then I removed that file, ran a blank jhipster with the same configuration parameters, and after this all ./gradlew test runs passed.
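My best guess at the WHY (an assumption on my part, not verified against the generator): the WeakKeyException is thrown by the JJWT library, which since version 0.10 enforces the RFC 7518 minimum key sizes and rejects an HS512 signing key shorter than 512 bits. An old .yo-rc.json can carry over a short jwtSecretKey from a previous JHipster version, whereas a fresh generation produces a long enough base64-secret. If that is the cause, a valid secret can also be generated by hand (the property path below is what a JHipster 5 app uses; adjust if yours differs):

# generate a 512-bit (64-byte) base64-encoded secret
openssl rand -base64 64
# paste the output into src/main/resources/config/application.yml (and the
# test config under src/test/resources) at:
#   jhipster.security.authentication.jwt.base64-secret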
Related
I got the following error when starting the spark-shell. I'm going to use Spark to process data in SQL Server. Can I ignore the errors?
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState'
Caused by: java.lang.reflect.InvocationTargetException: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog':
Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog'
Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog'
Caused by: java.lang.reflect.InvocationTargetException: java.lang.reflect.InvocationTargetException: java.lang.RuntimeException: java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: (null) entry in command string: null ls -F C:\tmp\hive
Caused by: java.lang.reflect.InvocationTargetException: java.lang.RuntimeException: java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: (null) entry in command string: null ls -F C:\tmp\hive
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: (null) entry in command string: null ls -F C:\tmp\hive
tl;dr You'd rather not.
Well, it may be possible, but given that you've just started your journey into Spark land, the effort would not pay off.
Windows has never been a developer-friendly OS to me, and whenever I teach people Spark on Windows, I take it for granted that we'll have to go through the winutils.exe setup, and often through how to work on the command line as well.
Please install winutils.exe as follows (a consolidated session is sketched after the steps):
Run cmd as administrator
Download winutils.exe binary from https://github.com/steveloughran/winutils repository (use hadoop-2.7.1 for Spark 2)
Save winutils.exe binary to a directory of your choice, e.g. c:\hadoop\bin
Set HADOOP_HOME to reflect the directory with winutils.exe (without bin), e.g. set HADOOP_HOME=c:\hadoop
Set PATH environment variable to include %HADOOP_HOME%\bin
Create c:\tmp\hive directory
Execute winutils.exe chmod -R 777 \tmp\hive
Open spark-shell and run spark.range(1).show to see a one-row dataset.
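Put together, the whole setup is only a handful of commands in one elevated cmd session (directory names are just the examples from above, adjust to taste):

rem run in a cmd session started as administrator
mkdir c:\hadoop\bin
rem download winutils.exe (hadoop-2.7.1 for Spark 2) into c:\hadoop\bin
set HADOOP_HOME=c:\hadoop
set PATH=%PATH%;%HADOOP_HOME%\bin
mkdir c:\tmp\hive
winutils.exe chmod -R 777 \tmp\hive
spark-shell

Note that set only affects the current cmd session; use setx (or the System Properties dialog) if you want HADOOP_HOME and the PATH change to survive a reboot.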
I'm a complete newbie with Linux/Java/Scriptella, and I'm trying a JDBC connection with Scriptella to a local Firebird database, but I'm getting the following errors:
2-dic-2013 1.03.34 <INFO> Execution Progress.Initializing properties: 1%
2-dic-2013 1.03.34 <GRAVE> Script /home/maurizio/Scrivania/JATROPHA/applicazioni/prova_per_scriptella.etl execution failed.
javax/resource/ResourceException
2-dic-2013 1.03.34 <GRAVE> Scriptella bug report. Submit to issue tracker.
Scriptella version: 1.1
Exception:
scriptella.execution.EtlExecutorException: javax/resource/ResourceException
at scriptella.execution.EtlExecutor.execute(EtlExecutor.java:190)
at scriptella.tools.launcher.EtlLauncher.execute(EtlLauncher.java:276)
at scriptella.tools.launcher.EtlLauncher.launch(EtlLauncher.java:193)
at scriptella.tools.launcher.EtlLauncher.main(EtlLauncher.java:321)
Caused by: java.lang.NoClassDefFoundError: javax/resource/ResourceException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at scriptella.core.DriverFactory.getDriver(DriverFactory.java:53)
at scriptella.core.ConnectionManager.<init>(ConnectionManager.java:70)
at scriptella.core.Session.<init>(Session.java:51)
at scriptella.execution.EtlExecutor.prepare(EtlExecutor.java:248)
at scriptella.execution.EtlExecutor.execute(EtlExecutor.java:178)
... 3 more
Caused by: java.lang.ClassNotFoundException: javax.resource.ResourceException
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
... 10 more
I'm using Ubuntu 10.04 Lucid Lynx.
I start Scriptella from the console in the directory /home/maurizio/Scrivania/JATROPHA/applicazioni/ with the command scriptella/scriptella-1.1/bin/scriptella.sh -debug "prova_per_scriptella.etl"
My ETL file prova_per_scriptella.etl contains the following lines:
<!DOCTYPE etl SYSTEM "http://scriptella.javaforge.com/dtd/etl.dtd">
<etl>
<description>Prova connessione Firebird</description>
<connection
id="fb_destination"
driver="org.firebirdsql.jdbc.FBDriver"
url="jdbc:firebirdsql:localhost/3050:/home/maurizio/Scrivania/JATROPHA/db/jatrofa.fdb"
user="user"
password="password"
classpath="/home/maurizio/Scrivania/JATROPHA/applicazioni/jaybird/Jaybird-2.2.3JDK_1.6/jaybird-2.2.3.jar"
/>
</etl>
The env var $_SCRIPTELLA_CP set by the launch script scriptella/scriptella-1.1/bin/scriptella.sh results in:
:/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/commons-jexl.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/commons-logging.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/jaybird-2.2.3.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/jsqlparser-0.8.0.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/scriptella-core.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/scriptella-drivers.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/scriptella-tools.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/sqlsheet-6.5.jar
Any help would be much appreciated.
Thanks in advance.
You are missing the required dependency connector-api-1.5.jar, or you need to use jaybird-full-2.2.3.jar (which includes both the normal Jaybird and the connector-api classes). See the release notes of Jaybird 2.2.3.
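For example, with the layout from the question that would mean changing only the jar name in the connection's classpath attribute (path assumed to match the question):

classpath="/home/maurizio/Scrivania/JATROPHA/applicazioni/jaybird/Jaybird-2.2.3JDK_1.6/jaybird-full-2.2.3.jar"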
The error means that you are probably missing additional J2EE classes on the classpath. Try downloading mini-j2ee.jar from http://www.firebirdsql.org/en/jdbc-driver/ and adding it to the classpath attribute in your ETL file:
classpath="/path/to/mini-j2ee.jar:/home/maurizio/Scrivania/JATROPHA/applicazioni/jaybird/Jaybird-2.2.3JDK_1.6/jaybird-2.2.3.jar"
I am trying to run Cassandra-0.8.5, Hadoop 0.2.0 and Pig 0.8.1. I run a very simple Pig script:
rows = LOAD 'cassandra://pygmalion/$CF' USING CassandraStorage() AS (key, columns: bag {T: tuple(name, value)});
counted = foreach (group rows all) generate COUNT($1);
dump counted;
It works well if I run it in local mode, but if I run it in MapReduce mode I always get the error message below, and I have run out of ideas. Any help or hint will be greatly appreciated.
pig_813399.log:
Backend error message
java.io.IOException: java.lang.ClassNotFoundException: org.apache.cassandra.hadoop.ColumnFamilySplit
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit.readFields(PigSplit.java:225)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:349)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:611)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: java.lang.ClassNotFoundException: org.apache.c
Pig Stack Trace
ERROR 2997: Unable to recreate exception from backed error: java.io.IOException: java.lang.ClassNotFoundException: org.apache.cassandra.hadoop.ColumnFamilySplit
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias counted. Backend error : Unable to recreate exception from backed error: java.io.IOException: java.lang.ClassNotFoundException: org.apache.cassandra.hadoop.ColumnFamilySplit
at org.apache.pig.PigServer.openIterator(PigServer.java:753)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:615)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:303)
at org.apache.pig.tools.grunt.GruntParser.loadScript(GruntParser.java:477)
at org.apache.pig.tools.grunt.GruntParser.processScript(GruntParser.java:422)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.Script(PigScriptParser.java:692)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:425)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:168)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:144)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:76)
at org.apache.pig.Main.run(Main.java:455)
at org.apache.pig.Main.main(Main.java:107)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2997: Unable to recreate exception from backed error: java.io.IOException: java.lang.ClassNotFoundException: org.apache.cassandra.hadoop.ColumnFamilySplit
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher.getErrorMessages(Launcher.java:221)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher.getStats(Launcher.java:151)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:337)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.execute(HExecutionEngine.java:382)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1209)
at org.apache.pig.PigServer.storeEx(PigServer.java:885)
at org.apache.pig.PigServer.store(PigServer.java:827)
at org.apache.pig.PigServer.openIterator(PigServer.java:739)
It sounds like you need to update the HADOOP_CLASSPATH environment variable to include the Cassandra jars, and you will need to restart the Hadoop processes after you do:
export HADOOP_CLASSPATH=/path/to/cassandra/lib/*:$HADOOP_CLASSPATH
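If updating HADOOP_CLASSPATH on every node is inconvenient, an alternative (not from the wiki page, just standard Pig practice) is to REGISTER the Cassandra jars at the top of the script, so Pig ships them with the job. The paths and jar names below are illustrative:

-- before the LOAD statement; point these at your Cassandra 0.8.5 lib directory
REGISTER /path/to/cassandra/lib/apache-cassandra-0.8.5.jar;
REGISTER /path/to/cassandra/lib/libthrift-0.6.jar;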
See http://wiki.apache.org/cassandra/HadoopSupport for other possible problems with running hadoop/pig against cassandra.