ssh error while starting namenode - linux

I have been trying to start the Hadoop services using the command
start-dfs.sh
but it gives me the following results:
Why is this happening, and what should be done?

Related

hadoop connection refused on port 8020

Hadoop, Hive, HDFS, and Spark were running fine on my namenode and were able to connect to the datanode as well. But for some reason the server was shut down, and now when I try to access the Hadoop filesystem via commands like hadoop fs -ls /, or if I try to access Hive, the connection is refused on port 8020.
I can see that the cloudera-scm-server and cloudera-scm-agent services are running fine. I tried to check the status of the hiveserver2, hivemetastore, hadoop-hdfs, etc. services, but the service status command gives an error message that these services do not exist.
Also, I tried to look for start-all.sh but could not find it. I ran the find / -name start-all.sh command, and only the path for start-all.sh in the Cloudera parcel directory for Spark came up.
I checked the logs in the /var/log directory; for hiveserver2 it is pretty clear that the service was shut down. The other logs are not that clear, but I am guessing all the services went down when the server powered off.
Please advise on how to bring up the whole system again. I am unable to access Cloudera Manager or Ambari or anything on the web pages either. Those pages are down too, and I am not sure I even have access to them, because I've never tried before; I've only been working on the Linux command line.
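Since cloudera-scm-server and cloudera-scm-agent are running, a reasonable first check is whether Cloudera Manager itself is healthy: on a Cloudera-managed cluster the Hadoop and Hive daemons are controlled through CM, not through plain init scripts, which would explain why the service status command says they do not exist. A minimal sketch of what to check, assuming a systemd-based host and CM's default web port of 7180:
# Confirm the Cloudera Manager server and agent are actually healthy
sudo systemctl status cloudera-scm-server
sudo systemctl status cloudera-scm-agent
# Restart them if needed (on older init-based hosts: sudo service cloudera-scm-server restart)
sudo systemctl restart cloudera-scm-server
sudo systemctl restart cloudera-scm-agent
# Check whether the CM web UI is listening on its default port
curl -I http://localhost:7180
Once the Cloudera Manager web UI is reachable again, the HDFS, Hive, and Spark services can be started from there rather than from the command line.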

Why does stopping Standalone Spark master fail with "no org.apache.spark.deploy.master.Master to stop"?

Stopping the standalone Spark master fails with the following message:
$ ./sbin/stop-master.sh
no org.apache.spark.deploy.master.Master to stop
Why? There is one Spark Standalone master up and running.
The Spark master was started under a different user, so the PID file
/tmp/spark-ec2-user-org.apache.spark.deploy.master.Master-1.pid
was not accessible. I had to log in as the user who actually started the standalone cluster manager master.
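Alternatively, if you have sudo rights, a sketch of running the stop script as that user without a full re-login (assuming the user is ec2-user, as the PID file name suggests):
sudo -u ec2-user ./sbin/stop-master.sh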
In my case, I was able to open the master web UI page in a browser, where it clearly said that the Spark master was running on port 7077.
However, when trying to stop it using stop-all.sh, I still got no org.apache.spark.deploy.master.Master to stop. So I tried a different method: finding which process was running on port 7077 using the command below:
lsof -i :7077
I got the result as java with a PID of 112099.
I used the command below to kill that process:
kill 112099
After this, when I checked the web UI, it had stopped working. The Spark master was successfully killed.
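The same two steps can be combined into one line, since lsof -t prints only the PID. Use with care: this kills whatever process is listening on 7077, so make sure it really is the Spark master first:
kill $(lsof -t -i :7077)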

Running Cassandra from IntelliJ and get error "connection refused"

I'm trying to experiment with Cassandra on my system with the below configuration:
OS: Ubuntu 16.04 LTS
Kernel: 4.10.1
Cassandra source GitHub link: https://github.com/apache/cassandra
Setup process on IntelliJ link: https://wiki.apache.org/cassandra/RunningCassandraInIDEA
I open a terminal and run the below commands from the root directory of Cassandra source code without any failure:
bin/cassandra -f (starts Cassandra successfully)
bin/nodetool status (gets information about the node, e.g. datacenter, status, etc.)
bin/cqlsh (opens the interface for running queries)
However, when I follow the setup process for IntelliJ, I'm able to start the server from IntelliJ by hitting "Run", but the "nodetool status" command always returns the error below:
Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
Interestingly, when I run the "bin/cqlsh" command, it connects successfully, which means the node is running just fine.
Any idea what might be causing "nodetool status" to return "connection refused"?
I also tried turning the firewall off, but that didn't help.
I fixed the issue by adding some more VM arguments, as below.
Add the following VM arguments if you want to use nodetool:
-Djava.rmi.server.hostname=$JMX_HOST (I entered 127.0.0.1)
-Dcom.sun.management.jmxremote.port=$JMX_PORT (I entered 7199)
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
Here is the link for reference: https://wiki.apache.org/cassandra/RunningCassandraInEclipse
These VM arguments are not provided in the wiki page for Cassandra setup with IntelliJ: https://wiki.apache.org/cassandra/RunningCassandraInIDEA (posted in my question as well).
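Putting the four options together, this is the line as it would be pasted into the IntelliJ run configuration's VM options field, with the values used above (127.0.0.1 and 7199) filled in for the placeholders:
-Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.port=7199 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false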

how to start all daemon processes in hadoop - like start-all.sh in linux

I have just started to work on Hadoop with Cygwin on Windows 7. I need to know whether there is any method to start all the services with a single command, like start-all.sh on Linux. I used this command in Cygwin, but it doesn't work. If possible, please also suggest a reference for working with Hadoop on Windows 7 with Cygwin.
I need to do the following steps every time to start the five daemons:
Start the namenode in the first window by executing
cd hadoop
bin/hadoop namenode
Start the secondary namenode in the second window by executing
cd hadoop
bin/hadoop secondarynamenode
Start the job tracker in the third window by executing
cd hadoop
bin/hadoop jobtracker
Start the data node in the fourth window by executing
cd hadoop
bin/hadoop datanode
Start the task tracker in the fifth window by executing
cd hadoop
bin/hadoop tasktracker
Can anybody please help?
Change the for loop's patterns to match the paths of your Hadoop daemons:
#!/bin/bash
# Start every installed HDFS and MapReduce daemon via its init script
for service in /etc/init.d/hadoop-hdfs-* /etc/init.d/hadoop-0.20-mapreduce-*
do
  sudo $service start
done
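That script assumes a packaged distribution with init scripts under /etc/init.d (CDH-style). For a plain Apache Hadoop 1.x tarball like the one described in the question, here is a sketch of an equivalent launcher using bin/hadoop-daemon.sh, which backgrounds each daemon so you don't need five windows (whether it runs cleanly under Cygwin will depend on your environment):
#!/bin/bash
# Start all five Hadoop 1.x daemons in the background.
# Run from the Hadoop install directory (the "cd hadoop" directory above).
for daemon in namenode secondarynamenode datanode jobtracker tasktracker
do
  bin/hadoop-daemon.sh start "$daemon"
done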

Shutdown Cassandra server and then restart it in windows 7

I installed a single-node cluster on my local dev box, which is running Windows 7, and it was working fine. For some reason I needed to restart my desktop, and now whenever I run the command below at the command prompt, it always gives me this exception:
S:\Apache Cassandra\apache-cassandra-1.2.3\bin>cassandra -f
Starting Cassandra Server
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 7199; nested exception is:
java.net.BindException: Address already in use: JVM_Bind
This means the port is being used somewhere. I have made some changes in the cassandra.yaml file, so I need to shut down the Cassandra server and then restart it again.
Can anybody help me with this?
Thanks for the help.
In Windows 7, with Apache Cassandra, a pid.txt file gets created in the root folder of Cassandra. Give the following instruction to stop the server:
d:/cassandra/bin> stop-server -p ../pid.txt -f
Running with -f starts the server as a service; you can stop it through the Task Manager.
It sounds like your Cassandra server starts on its own as a service in the background when your machine boots. You can configure Windows startup services. To run Cassandra in the foreground on Windows, simply use:
> cassandra.bat
If you are using Cassandra bundled with DataStax Community Edition and running it as a service on startup of your machine, then you can execute the following commands to start and stop the Cassandra server.
Start a command prompt with admin rights.
Run the following commands:
net start DataStax_Cassandra_Community_Server
net stop DataStax_Cassandra_Community_Server
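If the port is still reported as in use and none of the above applies, a generic Windows way to find and kill whatever process is bound to 7199 (standard netstat/taskkill usage, not specific to Cassandra; check what the PID belongs to in Task Manager before killing it):
netstat -ano | findstr :7199
taskkill /PID <pid-from-the-netstat-output> /F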
