How do I start 2 nodes in an Elasticsearch cluster?

I can't find good information on getting 2 server nodes running in an Elasticsearch cluster.
I just want a clear explanation of what commands I have to run, or what I have to change in the elasticsearch.yml config file.

Assuming you're not using automatic discovery:
# identical for all nodes
cluster.name: My-Cluster-Name

# specific per node
node.name: [node name]
network.host: [hostname]
http.port: 92xx
transport.tcp.port: 93xx
# If the nodes run on the same host they need different ports;
# on two different hosts you can use the same port numbers.

discovery.zen.ping.multicast.enabled: false
# Here you list all the nodes in your cluster as host:port
discovery.zen.ping.unicast.hosts: ["server1:9300","server2:9300","server2:9301"]

# This must be set to a minimum of (number of nodes / 2) + 1 to avoid
# "split brain", i.e. for two or three nodes you would set it to 2.
discovery.zen.minimum_master_nodes: 2

# Not mandatory, but if the network has issues you don't want the
# cluster to start panicking too quickly.
discovery.zen.fd.ping_timeout: 600s
The important points are that if you don't use multicast discovery, all the nodes need to know the host and port of all other nodes, and this has to be set in the elasticsearch.yml files. If you only use one node per host you can use the default 9200 and 9300 ports.
Once the nodes are set up you just start them all and check the output. They should log that they have found the other nodes, and you can view the active nodes using the _cat/nodes API:
http://server1:9200/_cat/nodes?v&h=id,ip,port,v,m,d,fdp,r,get.current,n,u
Each node should have its own copy of the Elasticsearch software, and each will store its own data in its data folder.
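For example, here is a minimal sketch of two nodes on the same host using the legacy Zen discovery settings above (node names and ports are illustrative, not from the original answer):

# elasticsearch.yml on node-1
cluster.name: My-Cluster-Name
node.name: node-1
network.host: 127.0.0.1
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300","127.0.0.1:9301"]
discovery.zen.minimum_master_nodes: 2

# elasticsearch.yml on node-2: identical, except
node.name: node-2
http.port: 9201
transport.tcp.port: 9301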

Related

Spark specify ssh port for worker nodes

I'm trying to set up a local Spark cluster. When I add the IP addresses of the workers to spark/conf/workers it tries to ssh into them on the default port 22 when I run sbin/start-all.sh. I have my ssh ports set differently for security reasons. Is there an option I can use to configure spark to use alternate ports for ssh from master to workers, etc?
You should add the following option to /path/to/spark/conf/spark-env.sh:
# Change 2222 to whatever port you're using
SPARK_SSH_OPTS="-p 2222"
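For example, assuming sshd on every worker listens on port 2222:

$ echo 'SPARK_SSH_OPTS="-p 2222"' >> /path/to/spark/conf/spark-env.sh
$ /path/to/spark/sbin/start-all.sh

The start scripts should then pass -p 2222 to every ssh invocation they make towards the workers.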

How to verify a CouchDB 2.0 cluster setup

I just set up my three CouchDB instances as a cluster. This is what I did to set it up:
1. Added the "-kernel inet_dist_listen_min 9100" and "-kernel inet_dist_listen_max 9200" settings to the vm.args file (see the sketch after these steps), and shut down the firewall.
2. Set all three CouchDB instances to use the same admin and password.
3. Changed the binding address to 0.0.0.0 for both the chttpd and httpd sections in Fauxton.
4. Chose one of the CouchDB instances to set up as the cluster, then added the two other nodes (by entering their IP addresses).
All done.
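For reference, the vm.args lines from step 1 would look like this (standard Erlang kernel flags that restrict the port range used for node-to-node distribution):

-kernel inet_dist_listen_min 9100
-kernel inet_dist_listen_max 9200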
After these steps I believe the cluster should be set up properly; however, when I ran the command
curl http://username:login@localhost/_membership
on all three VMs, only the main one of the three nodes showed three members in the cluster (nodes).
This is what it looks like at http://localhost:9000/_membership (an SSH tunnel from my computer to port 5984):
{"all_nodes":["couchdb#localhost"],"cluster_nodes":["couchdb#130.56.252.xxx","couchdb#130.56.252.xxx","couchdb#localhost"]}
And this is what the other instances show:
{"all_nodes":["couchdb#localhost"],"cluster_nodes":["couchdb#localhost"]}
So now I have got two questions:
Did I set up the cluster correctly?
How can I tell if I set it properly?
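For comparison, here is a sketch of what every node should return once the cluster is joined correctly (credentials and IPs are placeholders): all_nodes and cluster_nodes should list the same entries on every instance.

curl http://admin:password@localhost:5984/_membership
{"all_nodes":["couchdb@130.56.252.xxx","couchdb@130.56.252.yyy","couchdb@130.56.252.zzz"],"cluster_nodes":["couchdb@130.56.252.xxx","couchdb@130.56.252.yyy","couchdb@130.56.252.zzz"]}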

Joining a node to a cluster

I have tried to do the necessary configuration to deploy multiple instances of Cassandra on 2 different nodes of a multi-node cluster, but the nodes are having trouble seeing each other. Can someone give me advice on how to join a node to my cluster?
To join a node to a cluster, the following need to match up in the nodes' cassandra.yaml files:
cluster_name
endpoint_snitch
num_tokens
Get your first node running, and make sure the following ports are open on your firewall or internal network:
7000 (gossip)
7001 (if using node-to-node SSL)
7199 (JMX)
9042 (client connections)
On your second node, make sure it has the first node's IP address in its seed list. All of your nodes should share the same seed list, as well. Depending on the size of your cluster, you should have two or three seed nodes per data center.
Example:
# seeds is actually a comma-delimited list of addresses.
- seeds: "192.168.0.100,192.168.0.101"
Once your seed nodes are set, fire up your second node and it should join. If it doesn't, check the system.log for errors.
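To confirm the join, a quick check you can run on either node (assuming nodetool is on the PATH):

$ nodetool status
# Both nodes should appear with state UN (Up/Normal); a node stuck in
# UJ is still joining, and DN means it is down or unreachable.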

How to create a Cassandra node cluster on Windows 7 PCs?

Problem while creating a cluster using Cassandra.
I followed the steps below to create a Cassandra cluster:
1. Installed Cassandra on 3 Windows 7 PCs.
PC IP addresses: 127.0.0.1, 127.0.0.2, 127.0.0.3
2. Modified the cassandra.yaml file as below:
cluster_name: 'MyCluster'
num_tokens: 256
seed_provider:
    - seeds: "127.0.0.1"
listen_address: 127.0.0.1,127.0.0.2,127.0.0.3
rpc_address: 0.0.0.0
3. Started Cassandra on all 3 PCs above.
But I am not getting more than 1 node in nodetool; I can see only one node.
I have installed datastax-community-64bit_2.0.3
So, please help me to solve this problem. I need to create a Cassandra cluster.
Thanks in advance,
Satya
You have followed all the steps for creating the cluster, but you also have to define firewall inbound rules to allow the required ports (by default these ports are blocked by the firewall). So after modifying the cassandra.yaml file, you have to open the ports used by Cassandra and DataStax in your firewall, then start the Cassandra service.
To add a port in the firewall:
Go to Control Panel -> Windows Firewall -> Advanced Settings -> Inbound Rules -> New Rule -> select Port, and add all the ports required for Cassandra/DataStax. Google the required ports for Cassandra/DataStax.
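The same rule can also be added from an elevated command prompt; this sketch assumes the default ports listed in the previous answer (7000 gossip, 7001 SSL gossip, 7199 JMX, 9042 CQL) plus 9160 for the Thrift clients that DataStax Community 2.0.x still uses:

netsh advfirewall firewall add rule name="Cassandra" dir=in action=allow protocol=TCP localport=7000,7001,7199,9042,9160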
The 127.x.x.x IP addresses you are using are all loopback addresses. Traffic sent to those addresses never leaves the local host. If you want the three hosts to discover each other, you need to use IPs from a private IP address range (see the Wikipedia article on private networks for an overview). As your Windows workstations are networked, the IP to use should be obvious from running ipconfig on the command line of each of the three workstations; look for the output entry "IPv4 Address".
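Putting the two answers together, here is a sketch of what each PC's cassandra.yaml might look like, assuming hypothetical LAN addresses 192.168.1.101 through 192.168.1.103:

# cassandra.yaml on PC 1 (addresses are illustrative)
cluster_name: 'MyCluster'
num_tokens: 256
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.1.101"
listen_address: 192.168.1.101
rpc_address: 0.0.0.0
# PCs 2 and 3 are identical except for listen_address: 192.168.1.102 / .103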

Configuring the client connection for a big Cassandra cluster

I've been looking to find out how to configure a client to connect to a Cassandra cluster.
Independent of clients like Pelops, Hector, etc., what is the best way to connect to a multi-node Cassandra cluster?
Passing the IPs as string values works fine, but what about the number of cluster nodes growing in the future? Does the client have to maintain a synchronized list of ALL the cluster nodes' IPs?
I don't know if this answers all your questions, but the growth of the cluster and the IPs your client knows about are not related.
I have a 5-node cluster but the client(s) only know 2 IP addresses: the seeds. Since each machine in the cluster knows about the seeds (each cassandra.yaml contains the seeds' IP addresses), if a new machine is added, information about the new one will come "for free" on the client side.
Imagine a 5 nodes cluster with following ips
192.168.1.1
192.168.1.2 (seed)
192.168.1.3
192.168.1.4 (seed)
192.168.1.5
E.g.: when node .5 boots, it will contact the seeds (nodes .2 and .4) and receive back information about the whole cluster. If you add a new node 192.168.1.6, it will behave exactly like .5 and will ask the seeds for the cluster situation. On the client side you don't have to change anything: you will just see that you now have 6 endpoints instead of 5.
PS: you don't necessarily have to connect to the seeds; you can connect to any node, since after having contacted the seeds each node knows the whole cluster topology.
PPS: it's your choice how many nodes to put in your "client known hosts"; you can also put all 5, but this won't change the fact that if a node is added you don't need to do anything on the client side.
Regards,
Carlo
You will have an easier time letting the client track the state of each node. Smart clients track endpoint state via gossip information, which passes on new nodes as they appear in the cluster.
