Unable to get response from GridGain on Jenkins while unlocking the nodes - gridgain

While running jobs through the Jenkins server, which basically sends a request to one of the GridGain nodes for test execution, the tests get executed successfully, but the job hangs when it tries to unlock the nodes. It gets no response from GridGain, and there are no detailed logs on either the GridGain or the Jenkins side. This started happening when I upgraded Jenkins and Java to 1.7; Tomcat and GridGain stayed on their existing (old) versions.
GridGain version: 2.1.1
Apache Tomcat: apache-tomcat-6.0.24
Jenkins: 1.549

GridGain v.2.1.1 was not tested against Java 1.7, and it has also been a while since that version was released. You should upgrade to the latest version of GridGain, which is released under the Apache 2.0 license (the current GridGain version is 6.0.1).

Related

Dotnet client's traffic not similar while using Hazelcast

I am using version 3.11.2 with a 4-node cluster setup. One of my applications is written in Java, the other in .NET. When I check the Java client's cluster, all nodes have almost the same network traffic. In the .NET client's cluster, on the other hand, one node uses, for example, 200 MB of traffic while the other three use only 3-5 MB. The Java cluster is configured for cache; the .NET cluster uses map.
How can I fix this?
PS: I know 3.11.2 is an old version, but I don't want to upgrade unless I hit a bug that forces me to.
Mehmet

Production-ready version of Apache Cassandra

I currently have a production Cassandra cluster running on Apache Cassandra version 3.9. However, I have hit a bug that prevents me from bootstrapping new nodes: https://issues.apache.org/jira/browse/CASSANDRA-12813. This issue is fixed as of version 3.11. Given that I am going to have to upgrade, I'd like to know the best production-ready version to adopt at present.
If you're on 3.9, you should go to the latest 3.11 build (3.11.2 to date). It's all bug fixes between 3.9 and that release, some of which are serious, so you should upgrade.
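The upgrade itself is typically done as a rolling restart, one node at a time. A minimal per-node sketch, assuming a package-based install managed by systemd (package name, version string, and service manager here are assumptions for your environment):

```shell
# Per-node rolling upgrade sketch, 3.9 -> 3.11.x. Upgrade one node at a
# time and let it rejoin before moving to the next.

nodetool drain                         # flush memtables, stop accepting writes
sudo systemctl stop cassandra

sudo yum install -y cassandra-3.11.2   # package name/version: assumption

sudo systemctl start cassandra
nodetool version                       # confirm the node came back on 3.11.2

# Rewrite any SSTables still in an older format once the node is healthy
nodetool upgradesstables
```

Check `nodetool status` between nodes and only proceed once the upgraded node shows Up/Normal again.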

Upgrading Cassandra with DataStax Lifecycle Manager

The DataStax OpsCenter Lifecycle Manager only seems to have an option to run an 'install' job. From the wording, it seems intended only for provisioning new nodes.
Can LifeCycle Manager be used to upgrade existing (managed) clusters to newer version of Datastax enterprise?
Edit 2018-05: OpsCenter 6.5.0 has been released and provides assistance with upgrading DSE between patch releases, e.g. going from DSE 5.0.3 to 5.0.6. Docs: https://docs.datastax.com/en/opscenter/6.5/opsc/LCM/opscLCMjobsOverview.html and https://docs.datastax.com/en/opscenter/6.5/opsc/LCM/upgradeDSEjob.html.
DataStax engineer here, I work on Lifecycle Manager. Currently LCM cannot help you upgrade nodes, and while I'm not able to share information about future roadmap and unreleased features, I can say we know that customers want to use LCM for upgrades and we agree that it would be a valuable feature.
As of OpsCenter 6.1.x, you must manually upgrade your nodes, and then update your LCM configs to match the new versions. From that point onward you can use LCM for install/config jobs in the upgraded cluster. This isn't a detailed howto, but broadly:
Review the upgrade guide so you know what needs to be done: https://docs.datastax.com/en/upgrade/doc/upgrade/datastax_enterprise/upgrdDSE.html
Perform the upgrade manually, outside of LCM. Note that if you use apt to manage packages and are not upgrading to the most recent version available, you'll have to use a pretty giant apt command to work around a dependency-resolution issue in apt when upgrading to an "old" version. The resulting command will look something like: apt-get install -y -qq -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold dse-pig=5.0.11-1 dse-libhadoop2-client=5.0.11-1 dse-libspark=5.0.11-1 dse-libhadoop-native=5.0.11-1 dse-libmahout=5.0.11-1 dse-hive=5.0.11-1 dse-libpig=5.0.11-1 dse-libsolr=5.0.11-1 dse-libgraph=5.0.11-1 dse-libtomcat=5.0.11-1 dse-libhadoop=5.0.11-1 dse-libhive=5.0.11-1 dse-full=5.0.11-1 dse-libcassandra=5.0.11-1 dse=5.0.11-1 dse-libsqoop=5.0.11-1 dse-libhadoop2-client-native=5.0.11-1 dse-liblog4j=5.0.11-1
Once the manual upgrade is complete, you'll temporarily be in a position where LCM jobs cannot run successfully, since the version of DSE installed does not match the version of DSE that LCM is configured to deploy. At this point LCM jobs will fail with a DSE version-mismatch error. To fix this, proceed as follows:
Clone your configuration profile (which is associated with your old DSE version) to a new CP using the new DSE version. If you're doing a patch upgrade, this will be pretty simple. If you're doing a major upgrade via the API, you need to be very careful to remove config parameters that DSE no longer supports.
Edit your cluster model so that the cluster, plus any datacenters or nodes with CPs defined, uses your newly cloned CP for your current DataStax version instead of the old CP for your old DataStax version. At this point, you have brought LCM back into sync with your cluster and can proceed to run install/configure jobs again.
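For illustration only, the clone-and-repoint steps can also be driven through OpsCenter's LCM REST API rather than the UI. The endpoint paths, JSON field names, and IDs below are assumptions/placeholders; check the LCM API documentation for your OpsCenter version before relying on anything like this.

```shell
# Hypothetical sketch: clone a config profile and repoint the cluster model
# via the LCM HTTP API. Host, endpoints, and fields are all assumptions.
OPSC="http://opscenter.example.com:8888"

# 1. Look up the existing config profile tied to the old DSE version
curl -s "$OPSC/api/v2/lcm/config_profiles/"

# 2. Create a new profile for the new DSE version, copying over the old
#    profile's config body minus any parameters the new DSE dropped
curl -s -X POST "$OPSC/api/v2/lcm/config_profiles/" \
     -H "Content-Type: application/json" \
     -d '{"name": "dse-5.0.11", "datastax-version": "5.0.11", "json": {}}'

# 3. Point the cluster model at the new profile so LCM matches reality again
curl -s -X PUT "$OPSC/api/v2/lcm/clusters/<cluster-id>/" \
     -H "Content-Type: application/json" \
     -d '{"config-profile-id": "<new-profile-id>"}'
```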
It's not a simple procedure, but it is possible to upgrade your cluster outside of LCM and then sync LCM up with the new config so you can continue managing it from there. As previously noted, we understand this is not a simple process, and that there would be significant value in LCM supporting upgrades natively.

Upgrading from GridGain to Apache Ignite

We're currently running GridGain 6.2.1. Is there an existing upgrade guide for transitioning to Apache Ignite?
There is no such guide, and the effort depends heavily on which parts of GridGain you're using. All functionality that existed in 6.x was migrated to Ignite with a slightly different API, so I suggest updating the dependency and fixing compilation errors step by step.
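There is no mechanical converter, but a first compilation pass can be bootstrapped by bulk-rewriting the old package prefix. This is only a sketch under the assumption that most 6.x classes moved from the org.gridgain.grid package to org.apache.ignite; renamed classes (e.g. Grid vs. Ignite, GridGain vs. Ignition) still need fixing by hand afterwards.

```shell
# Sketch: bulk-rewrite the old GridGain 6.x package prefix to the Apache
# Ignite one as a first pass over a source tree. A throwaway file is used
# here for illustration; point it at your real src/ directory instead.
mkdir -p /tmp/ignite-migration/src
cat > /tmp/ignite-migration/src/Example.java <<'EOF'
import org.gridgain.grid.Grid;
import org.gridgain.grid.GridGain;
EOF

# List files that still reference the old package
grep -rl 'org\.gridgain\.grid' /tmp/ignite-migration/src

# Rewrite the package prefix in place (GNU sed)
find /tmp/ignite-migration/src -name '*.java' \
  -exec sed -i 's/org\.gridgain\.grid/org.apache.ignite/g' {} +

# The imports now read org.apache.ignite.*; compile and fix the remaining
# class renames manually.
cat /tmp/ignite-migration/src/Example.java
```

After a pass like this, the compiler errors point you directly at the classes whose names changed between 6.x and Ignite.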

MemSQL aggregator doesn't start on CentOS 6.7

We're currently evaluating MemSQL and have two setups: one running on CentOS 6.7, one on CentOS 7.1.
On CentOS 7.1, the master has all services started after a system reboot, but the CentOS 6.7 variant does not and shows the aggregator as offline. We had to run memsql-ops cluster-start, as described in 'MemSQL leaf down on single-server cluster'. We're wondering whether this is related to the init.d/systemctl differences between the machines. Any reply appreciated!
Cheers,
µatthias
Currently, Ops only sets up a Sys-V-style init script in /etc/init.d when it is installed by root. However, once Ops starts up correctly, it should immediately check whether or not MemSQL is running. If it is not running but should be, Ops will start the cluster automatically. Can you confirm that you didn't run memsql-ops cluster-stop before shutting down the cluster? If you did, Ops will not start the MemSQL cluster when it comes back up.
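To narrow down the init.d side on the CentOS 6.7 host, it's worth checking whether the Sys-V script got registered at all. A quick check, assuming the script is named memsql-ops (the exact name may differ by Ops version):

```shell
# Verify the Sys-V init script exists and is enabled on CentOS 6
ls -l /etc/init.d/memsql-ops
chkconfig --list memsql-ops

# If it is missing, Ops was probably installed as a non-root user;
# registering it manually restores start-on-boot:
#   chkconfig --add memsql-ops
#   chkconfig memsql-ops on
```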