I am using version 3.11.2 with a 4-node cluster setup. One of my applications is written in Java, the other in .NET. When I check the Java client's cluster, all nodes show almost the same network traffic. On the .NET client's cluster, however, one node uses, for example, 200 MB of traffic while the other three use only 3-5 MB. The Java cluster is configured for a cache; the .NET cluster uses a map.
How can I fix this?
PS: I know 3.11.2 is an old version, but I'd rather not upgrade unless I hit a bug that forces me to.
Mehmet
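One thing worth checking (an assumption based on the symptom, not something stated above) is whether the .NET client is running in unisocket/dummy mode rather than as a smart client: a unisocket client routes all of its operations through a single member, which produces exactly this kind of lopsided traffic. For comparison, this is roughly how smart routing looks on the Hazelcast 3.x Java client (the member address is a placeholder); the .NET client exposes a corresponding smart-routing setting.

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

ClientConfig config = new ClientConfig();
config.getNetworkConfig()
      .addAddress("10.0.0.1:5701")   // placeholder member address
      .setSmartRouting(true);        // smart client: talks to every member directly
HazelcastInstance client = HazelcastClient.newHazelcastClient(config);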
I have some confusion about AKS node pool upgrades and patching. Could you please clarify the following?
I have one AKS node pool with 4 nodes, and I want to upgrade the Kubernetes version on only two of those nodes. Is that possible?
If it is possible to upgrade only two nodes, how can we later upgrade the remaining two? And how can we find out which two nodes are still on the old Kubernetes version rather than the latest one?
During the upgrade process, will it create two new nodes with the latest Kubernetes version and then delete the old nodes in the node pool?
Azure automatically applies patches to nodes, but will it create new nodes with the new patches and delete the old ones?
1. According to the docs, you can upgrade a specific node pool. Hence the additional-node-pool approach mentioned by 4c74356b41.
Additional info:
Node upgrades
There is an additional process in AKS that lets you upgrade a cluster. An upgrade is typically to move to a newer version of Kubernetes, not just apply node security updates.
An AKS upgrade performs the following actions:
A new node is deployed with the latest security updates and Kubernetes version applied.
An old node is cordoned and drained.
Pods are scheduled on the new node.
The old node is deleted.
2. By default, AKS uses one additional node to configure upgrades.
You can control this process by increasing the --max-surge parameter (an az CLI example follows these notes).
To speed up the node image upgrade process, you can upgrade your node images using a customizable node surge value.
3. Security and kernel updates to Linux nodes:
In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu image, with the OS configured to automatically check for updates every night. If security or kernel updates are available, they are automatically downloaded and installed.
Some security updates, such as kernel updates, require a node reboot to finalize the process. A Linux node that requires a reboot creates a file named /var/run/reboot-required. This reboot process doesn't happen automatically.
This tutorial summarizes the process of Cluster Maintenance and Other Tasks.
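As an illustration of point 2, a surge value can be set and an upgrade triggered roughly like this; the resource group, cluster, pool name, and Kubernetes version are placeholders:

az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --max-surge 2

az aks nodepool upgrade \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --kubernetes-version 1.21.2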
No; create another pool with 2 nodes and test your application there, or create another cluster. You can find each node's version with kubectl get nodes (commands are sketched after these comments).
It gradually updates nodes one by one (the default); you can change this behavior. Spot instances cannot be upgraded.
Yes, the latest patch-version image will be used.
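A rough sketch of the separate-test-pool suggestion above; the names and version are placeholders:

# add a second pool on the newer Kubernetes version for testing
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name testpool \
  --node-count 2 \
  --kubernetes-version 1.21.2

# the VERSION column shows which nodes are still on the old kubelet version
kubectl get nodes -o wide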
In version 1 of our Java application, which uses Cassandra 2.1:
At startup we execute the query SELECT * FROM system.schema_keyspaces; to get keyspace info (if this fails, the application won't start).
In the new code, however, we get the keyspace information from the driver's cluster.metadata instance (sketched after this question); this version uses Cassandra 3.11.
We are using the DC-aware round-robin load balancing policy of the DataStax Java driver.
Now consider an upgrade of 3 nodes, A, B, and C, where A is upgraded (new application + Cassandra 3.11), the upgrade on B is in progress (Cassandra is down there), and C is not yet upgraded (old application + Cassandra 2.1), and the client application on node C restarts.
I get an InvalidQueryException if the old query from the Java client on node C gets executed on A (since the client sends queries in a round-robin fashion). If it fails, there is no error handling in the old application. How can we resolve this issue?
com.datastax.driver.core.exceptions.InvalidQueryException: un-configured table schema_keyspaces
One way I figured out: remove A's IP from the client application's contact points and from the peers table on node C's Cassandra, restart the client application, and afterwards restore the peers table entry in Cassandra.
The other way is to keep restarting the client application on C until its query actually hits Cassandra 2.1 and it starts successfully, but that seems ugly to me.
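For reference, this is roughly what the new code path mentioned above looks like with the DataStax Java driver 3.x: keyspace names come from the cluster metadata rather than a query against system.schema_keyspaces (the contact point is a placeholder).

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.KeyspaceMetadata;

Cluster cluster = Cluster.builder()
        .addContactPoint("10.0.0.1")   // placeholder contact point
        .build();
// the driver resolves the correct schema system tables internally
for (KeyspaceMetadata ks : cluster.getMetadata().getKeyspaces()) {
    System.out.println(ks.getName());
}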
In your application it's better to explicitly set the protocol version to match Cassandra 2.1 instead of relying on auto-negotiation. The driver's documentation explicitly mentions this.
According to the compatibility matrix you need to explicitly set the protocol version to V3, but this also depends on the driver version, so you may need to stick to version 2.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolVersion;

Cluster cluster = Cluster.builder()
    .addContactPoint("xxxx")                  // your contact point
    .withProtocolVersion(ProtocolVersion.V3)  // highest version supported by Cassandra 2.1
    .build();
After the upgrade to 3.11 is done, you can switch to protocol version 4.
I am trying to deploy Cassandra on a Linux-based HPC cluster and need some guidance. Specifically, what is the difference between running Cassandra locally and in a cluster?
When running locally (in which case it runs smoothly), we duplicate the original files for every node inside our Cassandra directory and apply the appropriate changes for IP address, RPC, JMX, etc. However, when setting up a networked cluster, which files do we need to install on each node: the whole package with all the files, or just some of the required ones, like bin/cassandra.in.sh, conf/cassandra.yaml, and bin/cassandra?
I am a little confused about what to store on each node separately in order to start working on the cluster.
You need to install Cassandra on each node (VM), i.e. the whole package, and then update the config files as necessary. As described here, to configure a cluster in a single data center you need to (a minimal cassandra.yaml sketch follows this list):
Install Cassandra on each node
Configure cluster name
Configure seeds
Configure snitch, if needed
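A minimal sketch of the per-node settings in conf/cassandra.yaml, assuming a simple single-data-center setup; all addresses and the cluster name are placeholders:

# conf/cassandra.yaml -- same file on every node, with per-node addresses adjusted
cluster_name: 'MyCluster'              # must be identical on every node
listen_address: 10.0.0.11              # this node's own IP
rpc_address: 10.0.0.11
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.11,10.0.0.12"   # same seed list on every node
endpoint_snitch: GossipingPropertyFileSnitch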
We have a staging environment which runs a one-node cluster completely separate from our production environment. What I'd like to do is copy this one-node cluster over to a test machine that I have for the sole purpose of testing.
What is the correct way to do this? Both servers run CentOS 6.x, with DSE 4.5.1 and Cassandra 2.0.8.39.
All you need to do is follow the steps described in this document:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_backup_restore_c.html.
If your test cluster's topology is different from the original cluster's, then you will need to use a tool like sstableloader (a rough command sketch follows these links):
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_backup_snapshot_restore_t.html
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_move_cluster.html
http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsBulkloader_t.html
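At a high level, the flow in those documents looks something like this; the keyspace, table, and host names are placeholders:

# on the staging node: take a snapshot of the keyspace
nodetool snapshot mykeyspace

# copy the resulting snapshot directories to the test machine; if the topology
# matches you can restore the SSTables in place, otherwise stream them in:
sstableloader -d test-node-ip /path/to/mykeyspace/mytable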
I have 4 Windows machines, and I have installed Hadoop on 3 of the 4.
One machine has the Hortonworks Sandbox (say, the 4th machine). Now I need to make the 4th machine my server (NameNode) and the rest of the machines slaves.
Will it work if I update the configuration files on the other 3 machines?
Or is there another way to do this?
Any other suggestions?
Thanks
I finally found this, but could not find any help there:
Hadoop cluster configuration with Ubuntu Master and Windows slave
A non-secure cluster will work (non-secure in the sense that you do not enable Hadoop Kerberos-based auth and security, i.e. hadoop.security.authentication is left as simple). You need to update every node's config to point to the new 4th node as the master for the various services you plan to host on it. You mention the namenode, but I assume you really mean to make the 4th node the 'head' node, meaning it will probably also run the resourcemanager and historyserver (or the jobtracker for old-style Hadoop). And that is only the core, without considering higher-level components like Hive, Pig, and Oozie, and not even mentioning Ambari or Hue.
Doing a post-install reconfiguration of existing Windows (or Linux, it makes no difference) nodes is possible by editing the various xx-site.xml files. You'll have to know what to change, and it is not trivial. It would probably be much easier to just redeploy the Windows machines with an appropriate clusterproperties.txt config file. See Option III - Manual Install One Node At A Time.
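For illustration, the edits would be along these lines; the property names are standard Hadoop 2.x ones, and the hostname is a placeholder for the new head node:

<!-- core-site.xml on every node -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://headnode:8020</value>
</property>

<!-- yarn-site.xml on every node -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>headnode</value>
</property>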