What should be the upgrade path of Apache Cassandra 2.0.11 to 3.11 (latest)?
My ring consists of 4 machines with around 400 GB of data.
You can upgrade directly to Cassandra 3.11 (or higher) only from Cassandra 2.1.9 (or higher), so from 2.0.11 you will need to go through an intermediate upgrade:
2.0.11 -> 2.2.* -> 3.11
Steps to upgrade the Cassandra version (a rough per-node command sketch follows the list):
1. Take a snapshot on each node.
2. Run nodetool drain.
3. Stop the Cassandra service.
4. Back up your Cassandra configuration files from the old installation to a safe place.
5. Update Java to version 8 if required.
6. Install the binaries (via tarball, apt-get, yum, etc.) for the new Apache Cassandra version, then configure it: compare, merge, and carry over any modifications you previously made into the new configuration files (cassandra.yaml, cassandra-env.sh, etc.).
7. Start the Cassandra service.
8. Check the logs for warnings, errors, and exceptions: tail -f /var/log/cassandra/system.log (or wherever you configured your logs).
9. Run nodetool upgradesstables. (This step can be run on each node once all nodes have finished migrating to the new version.) Then check the logs again for warnings, errors, and exceptions.
10. Check the Cassandra version with nodetool version.
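As a rough per-node sketch, one upgrade hop might look like the following; the service name, config paths, and log location are assumptions for a package-style install, so adjust them to your environment and repeat the whole cycle for each hop (2.0.11 -> 2.2.x, then 2.2.x -> 3.11):
nodetool snapshot
nodetool drain
sudo service cassandra stop
cp /etc/cassandra/cassandra.yaml /etc/cassandra/cassandra-env.sh ~/cassandra-conf-backup/   # keep the old configs
# install the new version (tarball, apt-get, yum, ...) and merge your config changes into its new files
sudo service cassandra start
tail -f /var/log/cassandra/system.log   # watch for warnings, errors, exceptions
nodetool upgradesstables                # once every node in the ring runs the new version
nodetool version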
It is easy to find the Cassandra version of a running node. How can I find the version of a Cassandra node that is down or not started? Is there a file in which the version is recorded that we can look at?
You should have a lib/apache-cassandra-<version>.jar file in the Cassandra installation directory; the version is part of the file name. You can also look at the first line of the NEWS.txt file - it should mention the current version.
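For example, from the installation directory (the commands below just assume you are already in whatever that node's install location is):
ls lib/apache-cassandra-*.jar    # the version is embedded in the jar file name
head NEWS.txt                    # the most recent version's notes appear near the top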
I have a Cloudera CDH 5.11 cluster installed from RPM packages (we don't want to use Cloudera Manager or parcels). Has anyone found/built Spark 2 RPM packages for CDH? It seems Cloudera only ships Spark 2 as parcels.
You won't. For now, the doc "Spark 2 Known Issues" clearly states:
Package Install is not Supported
The Cloudera Distribution of Apache Spark 2 is only installable as a parcel.
https://www.cloudera.com/documentation/spark2/latest/topics/spark2_known_issues.html#ki_package_install
The best way is to run Spark on YARN instead of a standalone Spark Master/Worker deployment. You are then free to use any Spark version you like, independent of what the vendor ships.
What you do need is to package the Spark History Server yourself, so you can look at jobs after they finish. And if you want to use Dynamic Allocation, you need the Spark Shuffle Service configured in YARN.
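For reference, wiring up the Shuffle Service for Dynamic Allocation usually amounts to registering it as a YARN auxiliary service and enabling it on the Spark side. A sketch of the standard properties (verify the exact names against the docs for your Spark version):
# yarn-site.xml on every NodeManager (and put the spark-<version>-yarn-shuffle.jar on the NodeManager classpath):
#   yarn.nodemanager.aux-services                     = mapreduce_shuffle,spark_shuffle
#   yarn.nodemanager.aux-services.spark_shuffle.class = org.apache.spark.network.yarn.YarnShuffleService
# spark-defaults.conf on the client:
spark.shuffle.service.enabled    true
spark.dynamicAllocation.enabled  true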
Looks like I can't comment on an issue so excuse this post as an answer.
Is it possible to install the Spark 2 parcel on an RPM-installed cluster using CM?
As of CDH 6.0, Spark 2 is included as RPMs. Problem solved.
I'm trying to upgrade a single-node Cassandra cluster from 1.1.5 to 2.0.x.
My production server is running on Linux. I pulled the data folder to my Windows box, keeping the system keyspace, along with a particular one I'm interested in, and dropping the rest after getting Cassandra up.
I upgraded and tested:
1.1.5 -> 1.2.0
1.2.0 -> 1.2.8
1.2.8 -> 1.2.9
Ran:
nodetool upgradesstables
describe schema
select * from table limit 100
Everything looks good with 1.x versions.
When trying to upgrade to 2.0.7, I run into an issue (I saw the recommended upgrade path is 1.2.9 -> 2.0.7):
INFO 16:43:01,758 Opening C:\path\mykeyspace-mytable-ic-655 (97902117 bytes)
ERROR 16:43:12,443 Exception encountered during startup
java.lang.RuntimeException: Incompatible SSTable found. Current version jb is unable to read file: C:\path\mykeyspace\mytable\mykeyspace-mytable.mytable_location_idx-he-647. Please run upgradesstables.
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:409)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:391)
at org.apache.cassandra.db.index.AbstractSimplePerColumnSecondaryIndex.init(AbstractSimplePerColumnSecondaryIndex.java:52)
at org.apache.cassandra.db.index.SecondaryIndexManager.addIndexedColumn(SecondaryIndexManager.java:292)
at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:277)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:415)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:386)
at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:309)
at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:266)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:110)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:88)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:290)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:480)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:569)
I did run upgradesstables from 1.2.9/bin, after starting 1.2.9/bin/cassandra.
Any idea what's wrong?
mytable_location_idx-he-647 is a 1.1.5 SSTable ("he" is the format version: "h" is 1.1 and "e" is the fifth revision of "h"). Run upgradesstables again and verify that all of the SSTables get migrated. The SSTable version should start with an "i" for 1.2, and you want it to be at "ic" before upgrading to 2.0.
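A quick way to see what still needs rewriting (the data path below is an assumption; point it at your actual data directory for that keyspace/table):
ls /var/lib/cassandra/data/mykeyspace/mytable/*-Data.db | grep -v -e '-ic-'   # anything listed is not yet on the ic format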
The problem lay in the fact that I had only partially migrated my production cluster to my local environment: I had copied the entire system keyspace but only some of the data files for just one of my keyspaces.
I fixed the problem by redoing everything:
set up 1.1.5 locally
used cqlsh to connect to it, dropped all other keyspaces and tables I didn't have available locally
migrated (not sure which ones did the trick, assume I did them all): 1.1.5 -> 1.2.0 -> 1.2.8 -> 1.2.9 -> 2.0.0 -> 2.0.1 -> 2.0.7.
possibly deleted any mismatching index files Cassandra complained about on startup (I'm not sure whether I hit any during the final, successful upgrades, but doing this let me advance through versions during earlier attempts). My guess is that Cassandra looks at all of them, but upgradesstables occasionally leaves some behind.
My Cassandra version is 1.2.4 and I'm trying to upgrade it to 2.0.5. I know that I first have to upgrade to 1.0.14 and after that upgrade to 2.0.5.
When I try to run nodetool -h localhost removenode <Host ID>, it gives me:
Exception in thread "main" java.lang.UnsupportedOperationException: Cannot remove self
at org.apache.cassandra.service.StorageService.removeNode(StorageService.java:3199)
.....
Before running that command I tried nodetool upgradesstables.
What is the problem, and how can I resolve it?
OS: Ubuntu 12.04 LTS
UPDATE
Download Cassandra versions 1.2.13 and 2.0.5 from the official site and unpack them. Configure cassandra.yaml in both downloaded versions, taking the existing (old) cassandra.yaml as a basis.
Take a snapshot of the old Cassandra: nodetool snapshot.
Stop writes to the node (reads will continue to work): nodetool drain.
Stop the old Cassandra.
Copy the data from the current (old) Cassandra into the new 1.2.13 installation and start it (1.2.13).
On Cassandra 1.2.13, rewrite the tables to the new format: nodetool upgradesstables -a.
Copy the data from Cassandra 1.2.13 to Cassandra 2.0.5 (a condensed shell sketch of these steps follows).
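A condensed shell sketch of the steps above; all paths assume tarball installs under /opt and the default data directory, so adapt them to your layout:
nodetool snapshot
nodetool drain
# stop the old Cassandra, then copy (or point data_file_directories at) the old data
cp -r /var/lib/cassandra/data/* /opt/apache-cassandra-1.2.13/data/
/opt/apache-cassandra-1.2.13/bin/cassandra
/opt/apache-cassandra-1.2.13/bin/nodetool upgradesstables -a
# then repeat the stop/copy/start/upgradesstables cycle with the 2.0.5 install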
FINE POINTS
In 2.0.5, virtual nodes (vnodes) are enabled by default ("num_tokens: 256" in cassandra.yaml).
In 2.0.5, the "index_interval: 128" setting has been factored out of cassandra.yaml into a per-table property (see the example after this list).
In 2.0.5, some settings from previous versions of cassandra.yaml no longer exist.
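For example, in 2.0 the interval can be set per table from cqlsh (keyspace and table names here are hypothetical):
ALTER TABLE mykeyspace.mytable WITH index_interval = 128;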
I am using Cassandra 2.0.5 on CentOS 6.5, and OpsCenter 4 worked fine until I updated OpsCenter to version 4.1. I open the OpsCenter page, click "Manage Existing Cluster", and give the IP address of my node (127.0.0.1), and it gives me the following: "Error creating cluster: max() arg is an empty sequence".
Any clues?
The bug is in 4.1.0 and affects users running Python 2.6. The complete fix is 4.1.1 (http://www.datastax.com/dev/blog/opscenter-4-1-1-now-available). To work around the issue on 4.1.0, disable the auto-update feature and manually re-populate the latest definitions; this only needs to be done once. It is not needed on 4.1.1, which is the best fix. See the known issues in the release notes (http://www.datastax.com/documentation/opscenter/4.1/opsc/release_notes/opscReleaseNotes410.html).
Add the following to opscenterd.conf to disable auto-update:
[definitions]
auto_update = False
Manually download the definition files:
For tarball installs:
cd ./conf/definitions
For package installs:
cd /etc/opscenter/definitions
Apply the latest definitions:
curl https://opscenter.datastax.com/definitions/4.1.0/definition_files.tgz | tar xz
Restart opscenterd
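On a package install that is typically (assuming the standard init script shipped with the OpsCenter packages):
sudo service opscenterd restart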
I just had the same problem today. I downloaded an older version of OpsCenter (specifically version 4.0.2) from http://rpm.datastax.com/community/noarch/ and the error was gone.
I am also using the same Cassandra version, also on CentOS.