I'm trying to set up a local Spark cluster. When I add the IP addresses of the workers to spark/conf/workers it tries to ssh into them on the default port 22 when I run sbin/start-all.sh. I have my ssh ports set differently for security reasons. Is there an option I can use to configure spark to use alternate ports for ssh from master to workers, etc?
You should add the following option to /path/to/spark/conf/spark-env.sh:
# Change 2222 to whatever port you're using
SPARK_SSH_OPTS="-p 2222"
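The sbin/*.sh launcher scripts pass this variable straight to ssh for every host listed in conf/workers, so any other ssh flags can ride along too. A small sketch, where port 2222 and the extra ConnectTimeout option are just examples:

# /path/to/spark/conf/spark-env.sh
export SPARK_SSH_OPTS="-p 2222 -o ConnectTimeout=5"

# The cluster scripts now reach each worker on port 2222
sbin/start-all.sh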
We are trying to connect to the cluster from a Jupyter notebook using a bash command:
!gcloud compute --project "project_name" ssh --zone "us-central1-a" "cassandra-abc-m"
After that, we try to connect using:
import cql
con= cql.connect(host="127.0.0.1",port=9160,keyspace="testKS")
cur=con.cursor()
result=cur.execute("select * from TestCF")
How do we connect the two? Kindly help us with it.
As I understand the question, you are SSHing out to a Google Compute Engine (GCE) instance (running Cassandra) and are then trying to run a Python script to connect to the local node. I see two problems in your cql.connect line.
First, Cassandra does not use port 9160 for CQL; that is the legacy Thrift port. CQL uses port 9042. I find this point confuses people so much that I recommend not setting port= at all. The driver will then use the default, which should just work.
Secondly, if you deployed Cassandra to a GCE instance, then you probably changed listen_address and rpc_address. This means Cassandra is no longer bound to 127.0.0.1. You need to use the value defined in the yaml's rpc_address (or broadcast_rpc_address) property.
$ grep rpc_address cassandra.yaml
rpc_address: 10.19.17.5
In my case, I need to specify 10.19.17.5 whether I want to connect locally or remotely.
tl;dr:
Don't specify the port.
Connect to your external-facing IP address, as 127.0.0.1 will never work.
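Putting both fixes together in Python, here is a minimal sketch using the DataStax cassandra-driver instead of the old cql module; 10.19.17.5 stands in for whatever your rpc_address says, and the keyspace and table names are taken from the question:

from cassandra.cluster import Cluster

# Connect to the rpc_address value, not 127.0.0.1; omitting port=
# means the driver uses the CQL default (9042).
cluster = Cluster(['10.19.17.5'])
session = cluster.connect('testKS')

for row in session.execute('SELECT * FROM "TestCF"'):
    print(row)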
Is it possible to configure a Jenkins ssh node (slave) to authenticate on a port which is different than 22?
After choosing “Launch method” to be “Launch slave agents via SSH”, it is possible to enter the host on which we want to configure the node. However, there is no “port” field anywhere, and the “ip:port” syntax doesn't work either.
After setting the host field to ip:port, Jenkins tries to connect to ip:port:22:
Opening SSH connection to IP:PORT:22.
IP:PORT: invalid IPv6 address
Any tips? Or is it necessary to just stick to using the standard ssh port?
The reason for wanting to use a different SSH port is that I am running a Docker container on a remote machine.
Jenkins - 2.89 with SSH slaves plugin
If I understood your question correctly, you overlooked the Advanced button on the new node configuration form of the SSH slaves plugin.
When adding the node, click on Advanced and you should be able to define a port via the GUI, in the Port field of the expanded launcher settings.
I have a Spark driver which is connected to my Mesos master. The driver listens on a particular port for resource offers from the Mesos master:
Received SUBSCRIBE call for framework 'Simple kafka application' at scheduler-901ab680-7098-4cb0-ab27-4b293285a2b6@xxx.xx.xx.xxx:57033
I would like to configure this port as I will need to whitelist this port on my machines.
I am not able to figure out which configuration option this would be. I have configured spark.driver.port and the broadcast port, but I am pretty sure these are not used in this scenario.
To use a custom port for communication with Mesos, you need to set the LIBPROCESS_PORT environment variable to the port number that should be used. By default it is unset or set to 0, which causes a random port to be chosen.
export LIBPROCESS_PORT=<PORT>
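The variable has to be visible to the driver process when it starts. A sketch of a driver launch, where the port and the master address are placeholders:

# Pin the libprocess (scheduler) port before the driver starts
export LIBPROCESS_PORT=57033
spark-submit --master mesos://xxx.xx.xx.xxx:5050 my_app.py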
I just set up my three CouchDB instances as a cluster; this is what I did to set it up:
Add "-kernel inet-dist-listen-minimum/maxinum" from 9100 to 9200 to the vm.args file. and shut down the firewall
Set three couchdb instanes' using the same admin and passwords.
Change binding address to 0.0.0.0 for both chttpd and httpd section in Fauxton
Choos one of the couchdb instance to set up as cluster then add two nodes (by entering their ip address)
All done
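For reference, a sketch of the relevant etc/vm.args lines after step 1; the -setcookie value (CouchDB ships with "monster") must be identical on all three nodes, and the -name line is where the couchdb@localhost in the output below comes from:

# etc/vm.args (one per node; the IP is a placeholder)
-name couchdb@130.56.252.xxx
-setcookie monster
-kernel inet_dist_listen_min 9100
-kernel inet_dist_listen_max 9200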
After these steps, I believe the cluster should be set up properly. However, when I ran the command
curl http://username:login@localhost/_membership
on all three VMs, only the main node showed that it had three members (nodes) in the cluster.
This is what it looks like at http://localhost:9000/_membership (an SSH tunnel from my computer to port 5984 on the instance):
{"all_nodes":["couchdb#localhost"],"cluster_nodes":["couchdb#130.56.252.xxx","couchdb#130.56.252.xxx","couchdb#localhost"]}
And this is what the other instances show:
{"all_nodes":["couchdb#localhost"],"cluster_nodes":["couchdb#localhost"]}
So now I have got two questions:
Did I set up the cluster correctly?
How can I tell whether it is set up properly?
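One quick check is to run the same _membership query against each node directly rather than through the tunnel; in a correctly joined cluster, every node returns the same three-entry all_nodes list. A sketch, where the credentials and IPs are placeholders:

# Expect identical all_nodes arrays from all three nodes
for ip in 130.56.252.aaa 130.56.252.bbb 130.56.252.ccc; do
  curl -s "http://username:password@$ip:5984/_membership"
done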
Question: How can I allow inbound SSH traffic on a non-standard port when using Amazon Security Groups and also provisioning with Chef?
Amazon EC2: Allow inbound ssh traffic on port 999 instead of 22 by adding this rule to a Security Group.
Custom TCP Rule, Port Range: 999
Chef: Create a new server with access to ssh on a non-standard port via:
knife ec2 server create .... -p 999 ....
Ubuntu: Allow ssh access on a non-standard port by editing /etc/ssh/sshd_config
Port 999
Easy enough? Why doesn't it work?
When using knife ec2 server create .... -p 999 ...., the instance is created. However, it hangs on "Waiting for sshd...", which eventually errors out. The instance is not reachable via ssh -p 999 username@ip-address nor via ssh -p 22 username@ip-address.
As some of the commenters have said, you are being locked out because the instance is listening on port 22, but your security groups are only allowing TCP connections on port 999.
There is no way to connect to the instance in this state.
There are three solutions to this problem that I can see.
Solution 1: Make a new AMI
Create a new AMI which has sshd configured to listen on 999.
Once you have done so, create your EC2 servers using your custom AMI and everything should work.
There are fancy-pants ways of doing this using cloud-init to let you customize the port later, but it hardly seems worth the effort.
Just hardcode "999" instead of "22" in /etc/ssh/sshd_config.
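A sketch of how that might be baked in while building the AMI, assuming Ubuntu's stock sshd_config (the service name and sed pattern vary by distro):

# Swap the (possibly commented-out) Port directive, then restart sshd
sudo sed -i 's/^#\?Port 22/Port 999/' /etc/ssh/sshd_config
sudo service ssh restart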
The obvious downside is that for any new AMI that you want to use, you'll have to bake a new base AMI that uses the desired port instead of 22.
Furthermore, this diverges away from the Cheffy philosophy of layering configuration on a base image.
Solution 2: Icky Icky Security Groups
You can hack your way around this by modifying your security groups every time you bring up a new server.
Simply add an exception that allows your machine to SSH into the box for the duration of your bootstrapping process, then remove this from the security group(s) that you are using once you are done.
The downside here is that security groups in EC2 are not dynamic, so you either have to create a new security group for each machine (ick!), or let this open port 22 for your workstation on all of your servers during the bootstrapping window (also ick!).
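If you do go this route, the dance looks roughly like the following with the AWS CLI; the security group ID and your workstation's CIDR are placeholders:

# Open 22 to your workstation, bootstrap over it, then close it again
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.7/32
knife ec2 server create .... -p 22 ....
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.7/32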
Solution 3: Tunneling
The last option that I can think of is to allow connections on port 22 amongst your live servers.
You can then tunnel your connection through another server by connecting to it, opening an SSH tunnel (i.e. ssh -L ...), and doing the knife ec2 actions through the tunnel.
Alternatively, you can pass through one of the other servers manually on your way to the target, and not use knife to bootstrap the new node.
The downside here is that you need to trust your servers' security, since you'll have to do some agent forwarding or other shenanigans to successfully connect to the new node.
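Concretely, the tunnel variant might look like this; the bastion host, the new node's private IP, and local port 2222 are all hypothetical:

# Forward a local port to the new node's port 22 via a server you already trust
ssh -A -L 2222:10.0.1.50:22 user@bastion.example.com
# From another shell, bootstrap through the tunnel
knife bootstrap localhost --ssh-port 2222 --ssh-user ubuntu --node-name new-node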
This is probably the solution that I would choose because it does not require new AMIs, doesn't open port 22 to the outside world, and requires a minimal amount of effort for a new team member to learn how to do.