I can't run Cassandra as a daemon. I have set the JAVA_HOME, CASSANDRA_HOME, and PATH variables for Cassandra. To run it I use Apache Commons Daemon, following a tutorial (link),
but when I try to start it, the console shows: Error: Could not find or load main class org.apache.cassandra.service.CassandraDaemon
I tested on JDK 8 and 7.
I do not know what's going on.
I found a related bug report: CASSANDRA-7477
I installed DataStax CE. At first it came up, but OpsCenter said the agent was not up, although it seemed to be running.
So I rebooted, hoping it might be happier. The opposite happened: now the Cassandra service and agent won't even start.
Looking into the logs I see:
2015-07-28 16:12:47 Commons Daemon procrun stdout initialized
Error occurred during initialization of VM
java/lang/NoClassDefFoundError: java/lang/Object
Any idea what else I have to do? I have JDK 1.8, Eclipse, etc. Nothing else complains.
Thanks
If Cassandra won't start, do the following:
1) Try renaming the following path:
C:\Program Files\DataStax-DDC\data\commitlog
2) If that doesn't work, start Cassandra from the command line:
C:\Program Files\DataStax-DDC\apache-cassandra\bin>cassandra.bat
The console output should tell you where the error is.
Good luck!
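A related check for the java/lang/NoClassDefFoundError: java/lang/Object above: that error means the JVM cannot load even its own core classes, which usually points at a broken or partial Java install behind JAVA_HOME. A minimal sketch of the check, for a Unix-style shell (e.g. Git Bash on Windows):

```shell
# If the JVM cannot load java/lang/Object, the install behind JAVA_HOME is
# usually broken or incomplete. Check that JAVA_HOME points at a real JDK:
if [ -n "$JAVA_HOME" ] && [ -x "$JAVA_HOME/bin/java" ]; then
    msg="JAVA_HOME looks usable: $JAVA_HOME"
else
    msg="JAVA_HOME is unset or does not contain bin/java"
fi
echo "$msg"
```

If the check fails, reinstalling the JDK and pointing JAVA_HOME at its root directory (not at bin) is a common fix.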
I have a Spark cluster running on 10 machines (1 to 10) with the master on machine 1. All of these run CentOS 6.4.
I am trying to connect a JupyterHub installation (which is running inside an Ubuntu Docker container because of issues with installing on CentOS) to the cluster using SparkR, and get the Spark context.
The code I am using is:
Sys.setenv(SPARK_HOME="/usr/local/spark-1.4.1-bin-hadoop2.4")
library(SparkR)
sc <- sparkR.init(master="spark://<master-ip>:7077")
The output I get is
Attaching package: ‘SparkR’
The following object is masked from ‘package:stats’:
filter
The following objects are masked from ‘package:base’:
intersect, sample, table
Launching java with spark-submit command spark-submit sparkr-shell /tmp/Rtmpzo6esw/backend_port29e74b83c7b3
Error in sparkR.init(master = "spark://10.10.5.51:7077"): JVM is not ready after 10 seconds
Error in sparkRSQL.init(sc): object 'sc' not found
I am using Spark 1.4.1. The Spark cluster is also running CDH 5.
The jupyterhub installation can connect to the cluster via pyspark and I have python notebooks which use pyspark.
Can someone tell me what I am doing wrong?
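One thing worth checking first (a sketch of the idea, not a definitive diagnosis): sparkR.init launches its JVM backend through $SPARK_HOME/bin/spark-submit, as the "Launching java with spark-submit command" line shows, so a SPARK_HOME that is wrong inside the Docker container is one common way to end up with "JVM is not ready after 10 seconds":

```shell
# sparkR.init starts its JVM backend via $SPARK_HOME/bin/spark-submit, so
# verify that the path set in the R code actually exists inside the container.
SPARK_HOME=/usr/local/spark-1.4.1-bin-hadoop2.4   # value from the question
if [ -x "$SPARK_HOME/bin/spark-submit" ]; then
    msg="spark-submit found under $SPARK_HOME"
else
    msg="spark-submit missing under $SPARK_HOME"
fi
echo "$msg"
```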
I have a similar problem and have been searching all around, but found no solutions. Can you please tell me what you mean by "jupyterhub installation (which is running inside a ubuntu docker because of issues with installing on CentOS)"?
We have 4 clusters on CentOS 6.4 as well. Another problem of mine is how to use an IDE like IPython or RStudio to interact with these 4 servers. Do I use my laptop to connect to them remotely (and if so, how)? If not, what would be the alternative?
Now, to answer your question, I can give it a try. I think you have to use the --yarn-cluster option, as stated here. I hope this helps you solve the problem.
Cheers,
Ashish
I want to try Cassandra. When I try to run it, I get an error:
Error: Could not find or load main class org.apache.cassandra.service.CassandraDaemon
What's the problem?
java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
python --version
Python 2.7.8
You'll get that error when you've downloaded a source distribution of Cassandra but haven't built it or when the CassandraDaemon.class file isn't in your classpath.
For the first problem:
You'll need the JDK 1.7 (which you already have) and ant to build C*.
Navigate to wherever you've extracted Cassandra (I'll use ~/cassandra for this explanation), run ant, and enjoy the awesome.
For the second: if your classpath is set up incorrectly, something has gone wrong in the build process or the classpath has been modified. I'd verify that the classpath is what's expected by displaying it in the startup script (the cassandra executable): add echo $CLASSPATH near the bottom of the script (in my case that was line 212 for C* 2.1.0).
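The echo $CLASSPATH idea, sketched with a made-up value (the real startup script assembles CLASSPATH from the jars it finds under lib/ and build/; the jar name below is only an example):

```shell
# Stand-in for the debugging step above: print the classpath just before the
# java invocation and check that a jar containing CassandraDaemon is on it.
CLASSPATH="lib/apache-cassandra-2.1.0.jar:build/classes/main"   # example value only
echo "CLASSPATH=$CLASSPATH"
```

If the echoed value is empty or missing the Cassandra jars, the build did not complete or the script's directory detection went wrong.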
P.S. On windows you'll need to set CASSANDRA_HOME before being able to run C*.
I just installed Solr on Drupal 7 and now I am trying to run Solr on my server.
When I entered java -jar start.jar to start Solr, it gave me this error:
If 'java' is not a typo you can run the following command to lookup the package that contains the binary:
command-not-found java
-bash: java: command not found
I checked whether Java is installed by typing which java, which says:
which: no java in (/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:/usr/lib/mit/bin:/usr/lib/mit/sbin)
My questions are:
How can I install and run Solr under these circumstances?
How can I automatically start Solr after the server is restarted in the future?
Thanks.
You need to install Java SE 7 before anything else. I recommend installing both the JRE and the JDK.
I am sure there are guides for your distro on the internet.
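To see whether Java is already reachable, the same check that which java performs can be scripted. The install commands in the comments are hedged examples, since the exact package name depends on the distro (the command-not-found hint in the question suggests openSUSE):

```shell
# Is any java on PATH? Install hints below are examples - package names
# vary per distro and version:
#   openSUSE:     zypper install java-1_7_0-openjdk
#   RHEL/CentOS:  yum install java-1.7.0-openjdk
if command -v java >/dev/null 2>&1; then
    msg="java on PATH at $(command -v java)"
else
    msg="java not on PATH - install a JDK first"
fi
echo "$msg"
```

For the second question (starting Solr at boot), one simple option is a cron @reboot entry such as @reboot cd /path/to/solr/example && java -jar start.jar (the path is a placeholder), or a proper init script for your distro.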
I have just installed grails 2.0.3 on my Ubuntu box using apt-get as described here:
http://grails.org/Download
Everything looked fine.
When I type grails in the terminal, it takes about 5 seconds and then returns to the prompt without having done anything. No errors, no text.
I have tried adding GRAILS_HOME even though the download instructions say it is not required but that didn't help either.
It's finding grails just fine, it's just not doing anything.
I have not explicitly installed Groovy before this. Is that a step I missed? (I don't think so, as I see it's included in the libs folder of the install.)
Or is there more I need to do to finish the install?
I had the same problem, but I managed to solve it.
I have Grails 2.1.2 installed on Ubuntu, running in a VirtualBox instance with 512 MB of memory.
I had to install Java 6 to run Grails and configure the JAVA_HOME environment variable, although the Grails documentation says it is not necessary.
According to several resources, the problem may be caused by a Grails memory usage issue: it seems that Grails uses a lot of memory. These resources are:
Stack Overflow thread: Grails application hogging too much memory
Grails deployment documentation
Stack Overflow thread: Increase Xmx and Xms for grails run-app
So, following their advice, I configured an environment variable, GRAILS_OPTS with this command:
export GRAILS_OPTS="-server -Xms128M -Xmx128M -XX:MaxPermSize=256M"
(Note the M suffix on -Xmx128M; without it the JVM is given a 128-byte heap and fails to start.)
Try it out and tell us if it works for you, thanks.
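A small follow-up sketch: to make GRAILS_OPTS survive new shells, append the export to the shell's startup file. The snippet writes to a scratch file purely to illustrate; in practice the target would be ~/.bashrc or ~/.profile (an assumption, adjust for your shell):

```shell
# Persist GRAILS_OPTS across shells by appending the export line to the
# shell startup file. A scratch file stands in for ~/.bashrc here.
rcfile=$(mktemp)                                   # stand-in for ~/.bashrc
echo 'export GRAILS_OPTS="-server -Xms128M -Xmx128M -XX:MaxPermSize=256M"' >> "$rcfile"
grep -c GRAILS_OPTS "$rcfile"                      # prints 1: the line was written
```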