Zeppelin change port already in use by Spark Master - apache-spark

When I start Zeppelin on AWS, it starts on port 8080, but Spark Master is already running there, so it reports that the port is already in use... I tried changing the port in Zeppelin's config files, in "zeppelin-site.xml.template":
<property>
<name>zeppelin.server.port</name>
<value>8050</value>
<description>Server port.</description>
</property>
I did the same in "zeppelin-env.sh.template", adding the env line for the same port.
When I start Zeppelin, I get OK, but if I check the open ports, 8050 doesn't appear anywhere, so it looks like it is still trying to deploy on port 8080, where Spark Master is...
Has anyone managed to get Zeppelin to stop ignoring the changed port?
Thanks

You likely need to copy the .template files, e.g. copy your modified zeppelin-env.sh.template to zeppelin-env.sh and zeppelin-site.xml.template to zeppelin-site.xml.

From your Zeppelin installation dir (on my computer, for example, it's zeppelin-0.7.3-bin-all):
cp conf/zeppelin-env.sh.template conf/zeppelin-env.sh
vi conf/zeppelin-env.sh
Add the following parameter:
export ZEPPELIN_PORT=8180 # Add this line to zeppelin-env.sh
Restart Zeppelin and you should now be able to access it at:
http://localhost:8180

Indeed, both documented ways should work:
create conf/zeppelin-site.xml with the zeppelin.server.port property
create conf/zeppelin-env.sh and export the ZEPPELIN_PORT env variable
and then restart Zeppelin. A sketch of the zeppelin-site.xml route is shown below.
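For that first route, a minimal sketch, assuming /path/to/zeppelin stands in for your installation directory, that the template's only 8080 value belongs to zeppelin.server.port (otherwise edit the file by hand), and that you use the stock zeppelin-daemon.sh launcher:
cd /path/to/zeppelin
cp conf/zeppelin-site.xml.template conf/zeppelin-site.xml
# change the default 8080 to the port from the question; editing by hand is safer if 8080 appears elsewhere in the file
sed -i 's|<value>8080</value>|<value>8050</value>|' conf/zeppelin-site.xml
bin/zeppelin-daemon.sh restart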

The accepted answer seems to be a little old, so I'm putting the solution I found here, in case it's useful for others:
It is possible to define variables (including the port) either in conf/zeppelin-env.sh or in conf/zeppelin-site.xml. zeppelin-env.sh takes priority if both are defined.
Source:
http://zeppelin.apache.org/docs/snapshot/install/configuration.html

Related

Using cqlsh with ssl in Cassandra : changing cqlshrc location

I would like to use cqlsh with SSL. I followed the procedure recommended by the DataStax documentation, and it worked well.
However, I would like to change the location of the cqlshrc file, and not place it in /myHomeDirectory/.cassandra as described in the Cassandra documentation.
How can this be done?
Thanks for the help.
You can do this by specifying the --cqlshrc option and the new cqlshrc file location when running cqlsh from the command line.
bin/cqlsh 192.168.0.1 -u aaron -p flynnL1ves --cqlshrc=../stackoverflow/cqlshrc
Here's a link to the docs on the Apache Cassandra site for more info: https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html#cqlshrc
Edit -
The only other way to do this, is to modify this line in bin/cqlsh.py:
# BEGIN history/config definition
HISTORY_DIR = os.path.expanduser(os.path.join('~', '.cassandra'))
Cqlsh stores the cqlsh_history file in ~/.cassandra and it also uses that HISTORY_DIR definition to set the default location of the cqlshrc file. Without specifying the cqlshrc file on the command line, you'll need to override that default location by specifying the directory name(s) in the os.path.join above.
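For illustration only, that edit could be scripted like this, assuming the HISTORY_DIR line looks exactly as above; myconfigdir is a hypothetical target directory and the backup copy is just a precaution (on macOS use sed -i '' instead of sed -i):
cp bin/cqlsh.py bin/cqlsh.py.bak
sed -i "s|os.path.join('~', '.cassandra')|os.path.join('~', 'myconfigdir')|" bin/cqlsh.py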
Note that this is definitely one of those "proceed at your own risk" moments.

pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuild in windows 10

I have installed Spark 2.2 with winutils on Windows 10. When I run pyspark I am facing the below exception:
pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder'
I have already tried the permission 777 command on the tmp/hive folder as well, but it is not working for now:
winutils.exe chmod -R 777 C:\tmp\hive
After applying this the problem remains the same. I am using pyspark 2.2 on my Windows 10.
Here is the spark-shell env
Here is the pyspark shell
Kindly help me to figure this out.
Thank you
I had the same problem using the command 'pyspark' as well as 'spark-shell' (for Scala) on my macOS with apache-spark 2.2. Based on some research I figured it's because of my JDK version 9.0.1, which does not work well with Apache Spark. Both errors got resolved by switching back from Java JDK 9 to JDK 8.
Maybe that might help with your Windows Spark installation too.
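A minimal sketch of checking and switching the JDK on macOS (it assumes a JDK 8 is already installed; /usr/libexec/java_home is the standard macOS helper, and Spark's launch scripts honour JAVA_HOME):
/usr/libexec/java_home -V                             # list the installed JDKs
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)     # point the current shell at JDK 8
./bin/pyspark                                         # should now start against Java 8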
Port 9000?! It must be something Hadoop-related as I don't remember the port for Spark. I'd recommend using spark-shell first, as that would eliminate any additional "hops", i.e. spark-shell does not require two runtimes for Spark itself and Python.
Given the exception I'm pretty sure that the issue is that you've got some Hive- or Hadoop-related configuration lying around somewhere and Spark apparently uses it.
The "Caused by" seems to show that 9000 is used when Spark SQL is created, which is when the Hive-aware subsystem is loaded.
Caused by: org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.net.ConnectException: Call From DESKTOP-SDNSD47/192.168.10.143 to 0.0.0.0:9000 failed on connection exception: java.net.ConnectException: Connection refused
Please review the environment variables in Windows 10 (possibly using the set command on the command line) and remove anything Hadoop-related.
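One way to do that review from CMD, as a sketch (findstr matches lines whose name or value mentions HADOOP or HIVE; HADOOP_CONF_DIR below is only a hypothetical example of something you might find):
set | findstr /I "HADOOP HIVE"
REM clear a suspicious variable for the current session only, e.g.:
set HADOOP_CONF_DIR=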
Posting this answer for posterity. I faced the same error.
The way I solved it was by first trying out spark-shell instead of pyspark. The error message was more direct.
This gave a better idea; there was an S3 access error.
Next, I checked the EC2 role/instance profile for that instance; it had S3 administrator access.
Then I did a grep for s3:// in all the conf files under the /etc/ directory.
Then I found that core-site.xml contains this property:
<property>
<!-- URI of NN. Fully qualified. No IP.-->
<name>fs.defaultFS</name>
<value>s3://arvind-glue-temp/</value>
</property>
Then I remembered. I had removed HDFS as the default file system and set it to S3. I had created the EC2 instance from an earlier AMI and had forgotten to update the S3 bucket corresponding to the newer account.
Once I updated the S3 bucket to one accessible by the current EC2 instance profile, it worked.
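For anyone retracing the grep step, a minimal sketch (the conf locations are assumptions; point it at wherever your distribution actually keeps the Hadoop and Spark configs):
grep -R "s3://" /etc/hadoop /etc/spark 2>/dev/null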
To use Spark on Windows OS, you may follow this guide.
NOTE: Ensure that you have correctly resolved your IP address against your hostname as well as localhost; lack of localhost resolution has caused problems for us in the past.
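A quick way to sanity-check that resolution from CMD, as a sketch (the hosts-file path is the standard Windows default):
ping -n 1 localhost
ping -n 1 %COMPUTERNAME%
type C:\Windows\System32\drivers\etc\hosts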
Also, you should provide the full stack trace as it helps to debug the issue quickly and saves the guesswork.
Let me know if this helps. Cheers.
Try this, it worked for me! Open up a command prompt in administrator mode and then run the command 'pyspark'. This should help open a Spark session without errors.
I also came across this error on Ubuntu 16.04:
raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder'
This was because I had already run ./bin/spark-shell.
So, just kill that spark-shell and re-run ./bin/pyspark.
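A hedged sketch of that cleanup, assuming a single leftover shell on the box (jps ships with the JDK and shows a running spark-shell as SparkSubmit; the kill line stops every SparkSubmit it finds):
jps
kill $(jps | grep SparkSubmit | awk '{print $1}')
./bin/pyspark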
I also came across this error on macOS 10, and I solved it by using Java 8 instead of Java 9.
When Java 9 is the default version resolved in the environment, pyspark will throw the error below, and you will see a "name 'xx' is not defined" error when trying to access sc, spark, etc. from the shell / Jupyter.
For more details you can see this link.
You must have a hive-site.xml file in the Spark configuration directory. Changing the port from 9000 to 9083 resolved the problem for me.
Please ensure that the property is updated in both hive-site.xml files, which are placed under the Hive config and Spark config directories:
<property>
<name>hive.metastore.uris</name>
<value>thrift://localhost:9083</value>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
For me on Ubuntu, the locations of hive-site.xml are:
/home/hadoop/hive/conf/
and
/home/hadoop/spark/conf/

Change port of Jetbrain Upsource

First, I googled about the issue, but couldn't find anything.
Issue:
On installation, I changed the port to 8081. But now it's used by another app. So, is there any way to change the port of Upsource? (Linux)
This is the correct way.
1. ./bin/upsource.sh stop
2. ./bin/upsource.sh configure --base-url=http://(server-name).com:(port) --listen-port=(port)
3. ./bin/upsource.sh start
4. Head to http://(server-name).com:(port)
You can run the following from your terminal:
./bin/upsource.sh configure --listen-port 1111 --base-url http://servername:1111/
Substituting "1111" for the port you want to use and 'servername' with the baseurl you have chosen for upsource (in my case this was the name of my machine).
Source: https://www.jetbrains.com/help/upsource/2.5/moving-your-upsource-installation-to-another-server.html
I know this question is Linux-specific, but if you're running Upsource on Windows, the commands in Wassim Seifeddine's answer can be modified as follows to accomplish port and/or base-url updates for Upsource:
[upsource-base-path]/bin/upsource.bat stop
[upsource-base-path]/bin/upsource.bat configure --base-url=http://(server-name).com:(port) --listen-port=(port)
[upsource-base-path]/bin/upsource.bat start

Could not connect to cassandra with cqlsh

I want to connect to Cassandra but got this error:
$ bin/cqlsh
Connection error: ('Unable to connect to any servers', {'192.168.1.200': error(10061, "Tried connecting to [('192.168.1.200', 9042)]. Last error: No connection could be made because the target machine actively refused it")})
Pretty simple.
The machine is actively refusing it because your system does not have Cassandra running on it. Follow these steps to get rid of this trouble:
Install Cassandra from DataStax (DataStax-DDC; Cassandra version 3).
Go to ~\installation\path\DataStax-DDC\apache-cassandra\bin.
Open up cmd there. (Use Alt+F+P to open it if you are on Windows 8 or later).
Type cassandra -f. This will generate a lot of output in the window, and the last line should read INFO 11:32:31 Created default superuser role 'cassandra'
Now open another cmd window in the same folder.
Type cqlsh
This should give you a prompt, without any error.
I also discovered that this error doesn't pop up if I use Cassandra v2.x, found here: Archived version of Cassandra. I don't know why :( (If you find out please comment).
So, if the above steps do not work, you can always go back to Cassandra v2.x.
Cheers.
Check if you have started Cassandra server, then provide the host and port as the arguments.
$ bin/cqlsh 127.0.0.1 9042
I ran into the same problem. This worked for me.
Go to any directory, for example E:\ (it doesn't have to be on the same disc as the Cassandra installation).
Create the following directories:
E:\cassandra\storage\commitlogs
E:\cassandra\storage\data
E:\cassandra\storage\savedcaches
Then go to your Cassandra installation's conf path. In my case:
D:\DataStax-DDC\apache-cassandra\conf
Open cassandra.yaml. Edit the lines containing data_file_directories, commitlog_directory, and saved_caches_directory to look like the code below (change the paths according to where you created the folders):
data_file_directories:
- E:\cassandra\storage\data
commitlog_directory: E:\cassandra\storage\commitlogs
saved_caches_directory: E:\cassandra\storage\savedcaches
Then open cmd (I did it as administrator, but didn't check if that is necessary) in your Cassandra installation's bin path. In my case:
D:\DataStax-DDC\apache-cassandra\bin
run cassandra -f
Lots of stuff will be logged to your screen.
You should now be able to run cqlsh and all other stuff without problems.
Edit: The operating system was Windows 10, 64-bit.
Edit 2: If it stops working after a while, check whether the service is still running using nodetool status. If it isn't, follow this instruction.
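For reference, the check itself is just the following, run from the same apache-cassandra\bin folder (a healthy single node prints one line per node, starting with UN for Up/Normal):
nodetool status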
I also faced the same problem, on a 32-bit Windows 7 machine.
Check if you have Java installed correctly and the JAVA_HOME variable set.
Once you have checked the Java installation and set JAVA_HOME, uninstall Cassandra and install it again.
Hopefully this will solve the problem. Mine was solved after applying the above two steps.
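A quick sanity check from CMD, as a sketch (it only confirms the variable is set and which Java is on the PATH):
echo %JAVA_HOME%
where java
java -version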
You need to mention the host, user, and password for the cassandra cqlsh connection. The default cassandra cqlsh user is cassandra and the password is cassandra.
$ bin/cqlsh <host> -u cassandra -p cassandra
I also had the same problem. I tried many methods found on Google and YouTube but none of them worked in my case. Finally, I applied the following 3 steps and it worked for me:
Create a folder without any space in C or D, whichever is your system drive, e.g. C:\cassandra
Install Cassandra in this folder instead of installing it in "Program Files".
After installation, it will look like this: C:\cassandra\apache-cassandra-3.11.6
Copy Python 2.7 into the bin folder, i.e. C:\cassandra\apache-cassandra-3.11.6\bin
Now your program is ready to work.
There is no special method to connect with cqlsh; it is as simple as:
$ bin/cqlsh 127.0.0.1(host IP) 9042 or $ bin/cqlsh 127.0.0.1(host IP) 9160 (if an older version of Cassandra)
Don't forget to check port connectivity if you are connecting cqlsh to a remote host. You can also use a username/password if you have enabled authentication; it is disabled by default.
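One way to check that connectivity, as a sketch (nc may not be installed everywhere, and telnet works as a fallback; 192.168.1.200 and 9042 are just the host and native-protocol port from the question above):
nc -zv 192.168.1.200 9042
telnet 192.168.1.200 9042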

Starting Hadoop without ssh'ing to localhost

I have a very tricky situation on my hands. I'm installing Hadoop on a few nodes which run Ubuntu 12.04, and our IT guys have created a user "hadoop" for me to use on all the nodes. The issue with this user is that it does not allow ssh to localhost because of some security constraints, so I'm not able to start the Hadoop daemons at all.
I can connect to the node itself using "ssh hadoop@hadoops_address" but not using the loopback address. I also cannot make any changes to /etc/hosts. Is there a way I can tell Hadoop to ssh to itself using "ssh hadoop@hadoops_address" instead of "ssh hadoop@localhost"?
Hadoop reads the hostnames from the "masters" and "slaves" files present inside the conf dir;
edit those files and change the value from localhost to hadoops_address.
This should fix your problem.
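As a sketch of that edit (this assumes the classic Hadoop 1.x layout with conf/masters and conf/slaves under the install dir, and hadoops_address is just the placeholder hostname from the question):
echo "hadoops_address" > conf/masters
echo "hadoops_address" > conf/slaves
bin/start-all.sh   # or start-dfs.sh and start-mapred.sh separately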
