How to verify a CouchDB 2.0 cluster setup

I just set up my three CouchDB instances as a cluster. This is what I did to set it up:
Added "-kernel inet_dist_listen_min 9100" and "-kernel inet_dist_listen_max 9200" to the vm.args file (sketched below), and shut down the firewall.
Set all three CouchDB instances to use the same admin username and password.
Changed the bind address to 0.0.0.0 in both the chttpd and httpd sections in Fauxton.
Chose one of the CouchDB instances to set up the cluster on, then added the other two nodes (by entering their IP addresses).
All done.
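Concretely, the lines added to each node's vm.args in the first step look something like this (using the 9100-9200 range mentioned above):
# restrict the Erlang distribution port range
-kernel inet_dist_listen_min 9100
-kernel inet_dist_listen_max 9200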
After these steps, I believe the cluster should be set up properly. However, when I ran the command
curl http://username:password@localhost:5984/_membership
on the three VMs, only the main one of the three nodes showed three members in the cluster (in cluster_nodes).
This is what it looks like at http://localhost:9000/_membership (an SSH tunnel from my computer to port 5984 on that node):
{"all_nodes":["couchdb#localhost"],"cluster_nodes":["couchdb#130.56.252.xxx","couchdb#130.56.252.xxx","couchdb#localhost"]}
And this is what the other instances show:
{"all_nodes":["couchdb#localhost"],"cluster_nodes":["couchdb#localhost"]}
So now I have two questions:
Did I set up the cluster correctly?
How can I tell whether it is set up properly?

Related

Spark specify ssh port for worker nodes

I'm trying to set up a local Spark cluster. When I add the IP addresses of the workers to spark/conf/workers, it tries to SSH into them on the default port 22 when I run sbin/start-all.sh. I have my SSH ports set differently for security reasons. Is there an option I can use to configure Spark to use alternate ports for SSH from master to workers, etc.?
You should add the following option to /path/to/spark/conf/spark-env.sh:
# Change 2222 to whatever port you're using
SPARK_SSH_OPTS="-p 2222"
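For example, assuming the workers all listen for SSH on port 2222 (the port itself is just an example), you could append the setting on the master and restart the cluster so the sbin scripts pick it up:
# add the SSH options to spark-env.sh on the master
echo 'SPARK_SSH_OPTS="-p 2222"' >> /path/to/spark/conf/spark-env.sh
# restart the cluster
sbin/stop-all.sh
sbin/start-all.sh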

How do I start 2 nodes in an Elasticsearch cluster?

I can't find good information on getting two server nodes running in an Elasticsearch cluster.
I just want a clear explanation of which commands I have to run or what I have to change in the elasticsearch.yml config file.
Assuming you're not using automatic discovery:
# identical for all nodes
cluster.name: My-Cluster-Name

# specific per node
node.name: [node name]
network.host: [hostname]
http.port: 92xx
transport.tcp.port: 93xx
# (If they run on the same host, they need different ports.
# If they run on two different hosts, you can use the same port numbers.)

discovery.zen.ping.multicast.enabled: false
# Here you list all the nodes in your cluster as host:port
discovery.zen.ping.unicast.hosts: ["server1:9300","server2:9300","server2:9301"]

# This must be set to (number of nodes / 2) + 1 to avoid "split brain",
# i.e. for two or three nodes you would set it to 2
discovery.zen.minimum_master_nodes: 2

# Not mandatory, but if the network has issues you don't want the cluster
# to start panicking too quickly
discovery.zen.fd.ping_timeout: 600s
The important points are that if you don't use multicast discovery, all the nodes need to know the host and port of all other nodes, and this has to be set in the elasticsearch.yml files. If you only use one node per host you can use the default 9200 and 9300 ports.
Once the nodes are set up, just start them all and check the output. They should log that they have found the other nodes, and you can view the active nodes using the _cat/nodes API:
http://server1:9200/_cat/nodes?v&h=id,ip,port,v,m,d,fdp,r,get.current,n,u
Each node should have its own copy of the elasticsearch software and will store its own data in the /data folder.
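If you just want a quick yes/no check, the standard _cluster/health API also reports how many nodes have joined; for the two-node setup above you would expect "number_of_nodes": 2 in the response:
curl 'http://server1:9200/_cluster/health?pretty'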

How to create a Cassandra node cluster on Windows 7 PCs?

Problem while creating a cluster using Cassandra.
I followed the steps below to create a Cassandra cluster:
1. Installed Cassandra on 3 Windows 7 PCs.
PC IP addresses: 127.0.0.1, 127.0.0.2, 127.0.0.3
2. Modified the cassandra.yaml file as below:
cluster_name: 'MyCluster'
num_tokens: 256
seed_provider:
  - seeds: "127.0.0.1"
listen_address: 127.0.0.1,127.0.0.2,127.0.0.3
rpc_address: 0.0.0.0
3. Started Cassandra on all 3 of the above PCs.
But I am not getting more than one node in nodetool; I can see only one node.
I have installed datastax-community-64bit_2.0.3.
So please help me solve this problem; I need to create a Cassandra cluster.
Thanks in advance,
Satya
You have followed all the steps for creating a cluster, but you also have to define firewall inbound rules to allow the required ports (by default these ports are not allowed through the firewall). So after modifying the cassandra.yaml file, open the ports used by Cassandra and DataStax in your firewall, then start the Cassandra service.
To add a port in the firewall:
Go to Control Panel -> Windows Firewall -> Advanced Settings -> Inbound Rules -> New Rule -> select Port and add all the ports required for Cassandra/DataStax. Google the required ports for Cassandra/DataStax.
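As a sketch, a rule for Cassandra's inter-node storage port (7000 by default) can also be added from an elevated command prompt; repeat for the other ports your installation uses (e.g. 7199 for JMX, 9042 for the native protocol, 9160 for Thrift):
netsh advfirewall firewall add rule name="Cassandra storage" dir=in action=allow protocol=TCP localport=7000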
The 127.x.x.x IP addresses you are using are all loopback device addresses. Traffic sent to those addresses never leaves your localhost. If you want the three hosts to discover each other, you need to use IPs from a private IP address range. See this Wikipedia article for an overview. As your Windows workstations are networked, the IP address to use should be obvious from running ipconfig on the command line on each of the three workstations; look for the output entry "IPv4 Address".
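To make that concrete, assuming the three PCs get private addresses like 192.168.1.101, .102 and .103 (illustrative values), each machine's cassandra.yaml would point at its own address and share one seed:
# on the PC with address 192.168.1.101; adjust listen_address per machine
cluster_name: 'MyCluster'
listen_address: 192.168.1.101
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.1.101"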

2 WebLogic clusters in the same network

We have 2 servers (dev/int); each of them has 3 WebLogic clusters with one managed server and different multicast addresses.
Server 1 uses the multicast addresses 239.192.3.7, 239.192.3.8, and 239.192.3.9 for its clusters.
Server 2 uses the multicast addresses 239.192.4.7, 239.192.4.8, and 239.192.4.9 for its clusters.
The admin and managed servers start without errors. The managed servers connect to their clusters and everything looks fine.
Both servers are in the same network (a.b.c.d/24) and connected to the same switch.
If I deploy a service to one of these clusters, e.g. 239.192.3.7, I receive a timeout. With netstat I see a connection to the other cluster from server 2 (239.192.4.7). In the log of that cluster (server 2), I saw the attempted service deployment from server 1. After I stopped the clusters of server 2, I could deploy the service on server 1 without any trouble.
Where is the problem? Too many multicast addresses in one network?
Maybe somebody can help me. Thanks!
EDIT (10.05.2013):
A few days ago I did a fresh install of this server with its 3 cluster configurations, in case I had made a mistake in my configuration.
In this new installation I had the same error. I looked again on server 2 with netstat -la --numeric-ports and saw two connections to server 1. They look like this:
tcp 0 0 server2:8088 server1:57963 ESTABLISHED
tcp 2 0 server2:7890 server1:34010 ESTABLISHED
Each connection is created when a managed server starts, but there are only these two connections, always with the same source ports.
I solved the problem by defining a dedicated coherence.clusteraddress in the default start environment.
I added the following lines to the EXTRA_JAVA_PROPERTIES variable in the setDomainEnv.sh script inside the bin directory of the SOA and OSB domains. For server 1 and server 2 I used different cluster addresses:
-Dtangosol.coherence.clusteraddress=239.192.4.7 -Dtangosol.coherence.clusterport=31323 -Dtangosol.coherence.ttl=0 -Dtangosol.coherence.log=jdk
-Dtangosol.coherence.clusteraddress=239.192.4.8 -Dtangosol.coherence.clusterport=31324 -Dtangosol.coherence.ttl=0 -Dtangosol.coherence.log=jdk
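In setDomainEnv.sh this amounts to extending EXTRA_JAVA_PROPERTIES, roughly like this (a sketch for one domain; use a different clusteraddress/clusterport per domain and per server, as in the two lines above):
# extend the existing JVM properties with the Coherence cluster settings
EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dtangosol.coherence.clusteraddress=239.192.4.7 -Dtangosol.coherence.clusterport=31323 -Dtangosol.coherence.ttl=0 -Dtangosol.coherence.log=jdk"
export EXTRA_JAVA_PROPERTIES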
More information is in these links:
http://redstack.wordpress.com/2012/08/09/making-coherence-play-nice-in-your-test-environment/
http://wiki.tangosol.com/display/COH33UG/Command+Line+Setting+Override+Feature
https://blogs.oracle.com/ateamsoab2b/entry/coherence_in_soa_suite_11g and the links at the bottom of that page.

RabbitMQ Cluster on EC2: Hostname Issues

I want to set up a 3 node Rabbit cluster on EC2 (Amazon Linux). We'd like to have recovery implemented so that if we lose a server it can be replaced by a new server automagically. We can easily set the cluster up manually using the default hostname (ip-xx-xx-xx-xx), so that the broker id is rabbit@ip-xx-xx-xx-xx. This works because the hostname is resolvable over the network.
The problem is that this hostname will change if we lose/reboot a server, invalidating the cluster. We haven't had luck setting a custom static hostname, because such hostnames are not resolvable by the other machines in the cluster; that's the only part of that article that doesn't make sense.
Has anyone accomplished a RabbitMQ Cluster on EC2 with a recovery implementation? Any advice is appreciated.
You could create three A records in an external DNS service for the three boxes and use them in the config. E.g., rabbit1.alph486.com, rabbit2.alph486.com and rabbit3.alph486.com. These could even be the ec2 private IP addresses. If all of the boxes are in the same region it'll be faster and cheaper. If you lose a box, just update the DNS record.
Additionally, you could assign elastic IPs to the three boxes. Then, when you lose a box, all you'd need to do is assign its elastic IP to the replacement.
Of course, if you have a small number of clients, you could just add entries into the /etc/hosts file on each box and update as needed.
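For the /etc/hosts approach, the entries might look like this on every box (names and addresses here are illustrative; use your instances' private IPs):
10.0.1.11 rabbit1
10.0.1.12 rabbit2
10.0.1.13 rabbit3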
From:
http://www.rabbitmq.com/ec2.html
Issues with hostname
RabbitMQ names the database directory using the current hostname of the system. If the hostname changes, a new empty database is created. To avoid data loss it's crucial to set up a fixed and resolvable hostname. For example:
sudo -s # become root
echo "rabbit" > /etc/hostname
echo "127.0.0.1 rabbit" >> /etc/hosts
hostname -F /etc/hostname
@Chrskly gave good answers that reflect the general consensus of the Rabbit community:
Init scripts that handle DNS or identification of other servers are mainly what I hear.
Elastic IPs we could not get to work without the aid of DNS or hostname aliases, because the internal IPs/DNS names on Amazon still rotate, and the public IPs/DNS names that stay static cannot be used as the hostname for Rabbit unless aliased properly.
Hosts file manipulation via a script is also an option. It needs to be accompanied by a script that can identify the DNS names of the other servers at launch, so it doesn't save much work in terms of making the configuration more "solid state".
What I'm doing:
Due to some limitations on the DNS front, I am opting to use bootstrap scripts to initialize the machine and cluster it with any other available machines, using the default internal DNS names assigned at launch. If we lose a machine, a new one will come up, prepare Rabbit, and look up the DNS names of machines to cluster with. It will then remove the dead node from the cluster for housekeeping.
I'm using some homebrew init scripts in Python. However, this could easily be done with something like Chef/Puppet.
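The clustering step of such a bootstrap script boils down to the standard rabbitmqctl commands; a sketch with illustrative node names (forget_cluster_node needs RabbitMQ 3.0 or later):
# join a live node discovered at boot
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@ip-10-0-1-11
rabbitmqctl start_app
# housekeeping: remove the dead node this instance replaced
rabbitmqctl forget_cluster_node rabbit@ip-10-0-1-99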
