I am using hadoop-2.7.1, hbase-1.0.1.1, and zookeeper-3.4.6 on my Linux server to compare HBase performance. My Hadoop, HBase, and ZooKeeper are working fine, as shown by the following jps output:
19639 DataNode
19893 SecondaryNameNode
20116 ResourceManager
20530 QuorumPeerMain
20287 NodeManager
23767 Client
20838 HMaster
21015 HRegionServer
24620 Jps
19446 NameNode
In addition, YCSB is also working fine; I have checked it against BasicDB with the command './bin/ycsb load basic -P workloads/workloada'. However, when I try to run it against HBase with the simplest command, './bin/ycsb load hbase -P workloads/workloada -p columnfamily=family', it does not respond at all. I don't know why I'm having this problem. Could you please help me out with it? Thanks in advance.
The problem has been solved. conf/hbase-site.xml had a problem: it wasn't pointing at the right ZooKeeper client port. Sticking with the default port 2181 works better.
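For reference, a minimal hbase-site.xml snippet that keeps the client port on the default 2181 would look roughly like this (the quorum host here is a placeholder for your own ZooKeeper node):
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>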
Related
I'm trying to start Spark Thrift Server using
D:\spark\spark-2.3.2-bin-hadoop2.7\bin>spark-class org.apache.spark.deploy.SparkSubmit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 spark-internal
in cmd.
However, after it reaches the line below, cmd hangs forever. Does anyone know the reason? Thanks for any advice.
INFO ThriftCLIService:98 - Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
The reason is simple: the server has started and is waiting for connections on port 10000. Try beeline or another JDBC client and connect to jdbc:hive2://localhost:10000 (in a different terminal window/tab).
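For example, assuming beeline is on your PATH and no authentication is configured, something like this should get you a prompt:
beeline -u jdbc:hive2://localhost:10000
or, from inside an already running beeline session:
!connect jdbc:hive2://localhost:10000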
I am trying to bring up DataStax Cassandra in analytics mode by using "dse cassandra -k -s". I am using the DSE 5.0 sandbox on a single-node setup.
I have configured spark-env.sh with SPARK_MASTER_IP as well as SPARK_LOCAL_IP pointing to my LAN IP:
export SPARK_LOCAL_IP="172.40.9.79"
export SPARK_MASTER_HOST="172.40.9.79"
export SPARK_WORKER_HOST="172.40.9.79"
export SPARK_MASTER_IP="172.40.9.79"
All of the above variables are set in spark-env.sh.
Despite these, the worker will not come up; it is always looking for a master at 127.0.0.1. This is the error I am seeing in /var/log/cassandra/system.log:
WARN [worker-register-master-threadpool-8] 2016-10-04 08:02:45,832 SPARK-WORKER Logging.scala:91 - Failed to connect to master 127.0.0.1:7077
java.io.IOException: Failed to connect to /127.0.0.1:7077
The result from dse client-tool also shows 127.0.0.1:
$ dse client-tool -u cassandra -p cassandra spark master-address
spark://127.0.0.1:7077
However, I am able to access the Spark web UI from the LAN IP 172.40.9.79.
Spark Web UI screenshot
Any help is greatly appreciated.
Try adding this parameter to the spark-defaults.conf file: spark.master local[*] or spark.master 172.40.9.79. Maybe this solves your problem.
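As a sketch, the relevant line in spark-defaults.conf would be one of the following (the spark:// form assumes the standalone master is listening on the default port 7077):
spark.master local[*]
spark.master spark://172.40.9.79:7077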
I have a DSE 4.5 installation with Spark running. I need some help passing the Cassandra cluster's username/password from the Spark shell.
I have added these properties to the conf/spark-default.conf file:
spark.cassandra.auth.username=user
spark.cassandra.auth.password=pass
and start up my Spark shell using
dse spark
But I am still seeing this error when I try sc.cassandraTable:
com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host /11.111.11.11:9042: Host /11.111.11.11:9042 requires authentication, but no authenticator found in Cluster configuration
at com.datastax.driver.core.AuthProvider$1.newAuthenticator(AuthProvider.java:38)
at com.datastax.driver.core.Connection.initializeTransport(Connection.java:138)
at com.datastax.driver.core.Connection.<init>(Connection.java:111)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:432)
at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:216)
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:171)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1104)
Looks like you can execute this command:
dse spark -Dcassandra.username=user -Dcassandra.password=pass
ref:
http://docs.datastax.com/en/datastax_enterprise/4.5/datastax_enterprise/sec/secIntrnlAuth.html?scroll=secItrnlAuth__authentication-for-hadoop-tools
This worked for me:
dse -u cassandra -p cassandra spark
When I start the Spark shell:
bin>./spark-shell
I get the following error:
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Welcome to Spark version 1.3.0
Using Scala version 2.10.4 (Java HotSpot(TM) Server VM, Java 1.7.0_75)
Type in expressions to have them evaluated.
Type :help for more information.
15/05/10 12:12:21 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/05/10 12:12:21 ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
I installed Spark by following this link: http://www.philchen.com/2015/02/16/how-to-install-apache-spark-and-cassandra-stack-on-ubuntu
You should supply your Spark cluster's master URL when starting spark-shell.
At least:
bin/spark-shell --master spark://master-ip:7077
All the options make up a long list, and you can find the suitable ones yourself:
bin/spark-shell --help
I am assuming that you are running this in standalone/local mode.
Run your Spark shell with the following line. local[*] indicates that you are using all the available cores of your master, which is the local machine.
bin/spark-shell --master local[*]
http://spark.apache.org/docs/1.2.1/submitting-applications.html#master-urls
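For quick reference, the common forms from that page are (host:7077 below is a placeholder for your own standalone master):
bin/spark-shell --master local               # single worker thread, no cluster
bin/spark-shell --master local[*]            # all cores of the local machine
bin/spark-shell --master spark://host:7077   # standalone cluster master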
You also need to start the Spark master and slave before running the spark-submit command:
start-master.sh
start-slave.sh spark://spark:7077
then use
spark-submit --master spark://spark:7077
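Putting it together, a full submission would look roughly like this (the class name and jar path are placeholders for your own application):
spark-submit --class com.example.MyApp --master spark://spark:7077 /path/to/my-app.jar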
Look at your log files for "permission denied" errors... It may happen that your client service doesn't have the proper authority to access your master folders.
After some research on the internet, I found I can use
sbin/start-master.sh
to start the Spark master service on my Ubuntu Linux machines,
and use
bin/spark-class org.apache.spark.deploy.worker.Worker spark://...
to bring the slave node service up and running.
The good news is that I can see the local web page with the workers shown as alive.
However, after that, I tried to launch the shell:
MASTER=spark://localhost:7077 bin/spark-shell
but it returned:
sparkMaster#localhost:7077 ...
and therefore I modified the command to
MASTER=spark://sparkuser#localhost:7077 bin/spark-shell
where sparkuser is the user connected to the two nodes.
However, with this modification, I got:
ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
WARN SparkDeploySchedulerBackend: Application ID is not initialized yet.
ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
and when I tried
MASTER=local-cluster[3,2,1024] bin/spark-shell
The Spark logo did come up in the shell, but I was afraid the slave nodes were not binding in.
Did I miss anything for the Spark cluster setting?
Just launch spark-shell on the cluster with the --master flag, as follows:
./bin/spark-shell --master spark://localhost:7077
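To confirm the shell really attached to the cluster rather than silently running in local mode, check the standalone master's web UI (port 8080 by default); the shell should show up under Running Applications. Assuming the master runs on localhost:
curl -s http://localhost:8080 | grep -i "spark shell"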