JHipster migration from MySQL database to Cassandra

I have created a microservice-based application with JHipster: a gateway, a registry, and an application running on a MySQL database. Now I want to migrate from MySQL to Cassandra. Can you please let me know the changes I need to make in the Java class files, and the list of files affected? So far I have made some changes, such as in application-dev.yml; the application uses Hibernate, Ehcache, etc., and I changed the class name in CacheConfiguration.java. Please let me know any other changes required to run on the new database.
Regards
Satyanvesh

You can reuse the same Java POJO classes and annotate them with Spring Data Cassandra or DataStax driver annotations. If you want to migrate your data as well, export your tables as CSV from MySQL and use the COPY command in cqlsh to load the CSV into Cassandra.
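For illustration, here is a minimal sketch of what a formerly JPA-mapped entity might look like after switching to Spring Data Cassandra annotations; the Customer entity and its fields are hypothetical, not part of the original question:

import java.util.UUID;
import org.springframework.data.cassandra.core.mapping.PrimaryKey;
import org.springframework.data.cassandra.core.mapping.Table;

// JPA's @Entity/@Id/@GeneratedValue are replaced by Cassandra mapping annotations.
@Table("customer")
public class Customer {

    @PrimaryKey
    private UUID id; // Cassandra has no auto-increment; a UUID is the usual substitute

    private String name;

    public UUID getId() { return id; }
    public void setId(UUID id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

On the data side, a cqlsh command along the lines of COPY mykeyspace.customer (id, name) FROM 'customer.csv' WITH HEADER = true; would load the exported CSV (keyspace, table, and column names are placeholders).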

Related

PySpark: How to set up multiple JDBC connections?

Use case: I have two databases, one for prod and one for dev. Prod uses an SAP JDBC driver and dev uses an Oracle JDBC driver, as they are based on different DBs. I have to fetch data from the prod DB, perform a few operations, and save the result in the dev DB for a few project needs.
Issue: Currently I am using these third-party drivers by setting "spark.driver.extraClassPath" in the Spark context, but this takes only one argument, so I am able to connect to only one of the DBs at a time.
Is there any way I can make two different JDBC classpath configurations? If not, how can I approach this issue? Any guidance is much appreciated!
Solution:
Instead of pointing at a single driver file, providing the folder path loads all drivers in that folder. So, in my case, I placed both the SAP and Oracle JDBC drivers in the same folder and referenced it in the Spark context configuration, as shown in the snippet below.
.set("spark.driver.extraClassPath", r"<folder_path_jdbc_drivers>\*")

Databricks Lakehouse JDBC and Docker

Pretty new to Databricks.
I've got a requirement to access data in the Lakehouse using a JDBC driver. This works fine.
I now want to stub the Lakehouse using a Docker image for some tests I want to write. Is it possible to get a Databricks/Spark Docker image with a database in it? I would also want to bootstrap the database on startup to create a bunch of tables.
No - Databricks is not a database but a hosted service (PaaS). Theoretically you can use OSS Spark with a Thrift server started on it, but the connection strings and other functionality would be very different, so it makes no sense to spend time on it (imho). The real solution would depend on the type of tests that you want to do.
Regarding bootstrapping the database and creating a bunch of tables: just issue commands such as CREATE DATABASE IF NOT EXISTS or CREATE TABLE IF NOT EXISTS when your application starts up (see the documentation for the exact syntax).
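As a rough sketch of that bootstrap step over JDBC (the endpoint, token, and table definition below are placeholders, and the Databricks JDBC driver must be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LakehouseBootstrap {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: a real Databricks SQL warehouse, or a local Thrift server for tests
        String url = "jdbc:databricks://<workspace-host>:443;httpPath=<http-path>";
        try (Connection conn = DriverManager.getConnection(url, "token", "<personal-access-token>");
             Statement stmt = conn.createStatement()) {
            // Idempotent statements are safe to run on every startup
            stmt.execute("CREATE DATABASE IF NOT EXISTS testdb");
            stmt.execute("CREATE TABLE IF NOT EXISTS testdb.customers (id INT, name STRING)");
        }
    }
}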

How to handle errors during data migration to Cassandra

Trying to migrate data from Oracle to Cassandra, I have the issues below:
How do I handle errors during the data migration to Cassandra using Spark SQL?
How do I design the retry mechanism if anything fails?
Is there any document/sample/GitHub repo regarding the same?
~Sha
You can look into the GitHub repos below for reference:
https://github.com/snazy/Oracle_to_Cassandra
https://github.com/AlexGruPerm/oratocass
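Beyond those repos, here is a minimal sketch of the write-with-retry pattern, using Spark's Java API and the spark-cassandra-connector (which must be on the classpath); the Oracle URL, credentials, keyspace, table names, and retry counts are all assumptions for illustration:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class OracleToCassandraMigration {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder().appName("oracle-to-cassandra").getOrCreate();

        // Hypothetical Oracle source; URL, credentials, and table are placeholders
        Dataset<Row> source = spark.read().format("jdbc")
                .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCL")
                .option("driver", "oracle.jdbc.OracleDriver")
                .option("dbtable", "SCHEMA.SOURCE_TABLE")
                .option("user", "migrator")
                .option("password", "secret")
                .load();

        int maxAttempts = 3;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                // Cassandra writes are upserts, so retrying the same batch is
                // idempotent as long as the primary keys are stable.
                source.write().format("org.apache.spark.sql.cassandra")
                        .option("keyspace", "migration")   // placeholder keyspace
                        .option("table", "target_table")   // placeholder table
                        .mode("append")
                        .save();
                break; // success, stop retrying
            } catch (Exception e) {
                if (attempt == maxAttempts) {
                    throw e; // retries exhausted: surface the failure
                }
                Thread.sleep(attempt * 5000L); // simple linear backoff before retrying
            }
        }
        spark.stop();
    }
}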

Cassandra: can we take a node's backup using C# code on a Windows environment?

I have installed the Cassandra CQL client on Windows 10, but I want to take a Cassandra node's backup file from CData ADO.NET code and store it in a specific directory.
I need help with the code.
There is no way to take a node's backup through the Cassandra ADO.NET drivers. You can achieve this scenario with Cassandra snapshots for node backups instead.
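Snapshots are taken with the nodetool utility that ships with Cassandra; if you need to trigger one from code, you can launch it as an external process. A minimal sketch in Java follows (the same process-launching pattern applies to C#'s Process.Start); the nodetool path, snapshot tag, and keyspace are placeholders:

public class SnapshotRunner {
    public static void main(String[] args) throws Exception {
        // nodetool ships with Cassandra; the path, tag, and keyspace below are placeholders
        ProcessBuilder pb = new ProcessBuilder(
                "C:\\cassandra\\bin\\nodetool.bat", "snapshot", "-t", "nightly_backup", "my_keyspace");
        pb.inheritIO(); // forward nodetool's output to this process's console
        int exit = pb.start().waitFor();
        if (exit != 0) {
            throw new RuntimeException("nodetool snapshot failed with exit code " + exit);
        }
        // The snapshot files land under <data_dir>/<keyspace>/<table>/snapshots/nightly_backup;
        // copying those directories to your backup location is a separate step.
    }
}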

Quartz Scheduler with Cassandra NoSQL database for cluster failover mechanism

I am using WebLogic 12c in a cluster environment, and I am using Cassandra for database operations.
My requirement is to execute a batch job which picks up records from the DB, processes them, and uploads them to a web service.
For this I am looking at Quartz's JDBCJobStore implementation.
For a normal SQL database we can achieve this with JDBCJobStore; however, I am struggling to see how to implement it on Cassandra.
I want to create a JDBCJobStore for a NoSQL database like Cassandra.
Any help would be great on this.
It would be helpful if an example quartz.properties and table script were given.
Update: This delivers an answer for a SQL DB. I want the same for a NoSQL DB.
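For reference, a clustered JDBCJobStore against a SQL database is normally configured along the lines of the quartz.properties sketch below (the data-source name and connection details are placeholders). Quartz itself ships no JobStore for Cassandra, so a NoSQL equivalent would mean implementing the JobStore interface yourself or adopting a third-party store:

org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.dataSource = myDS

# Placeholder connection details
org.quartz.dataSource.myDS.driver = oracle.jdbc.OracleDriver
org.quartz.dataSource.myDS.URL = jdbc:oracle:thin:@//db-host:1521/ORCL
org.quartz.dataSource.myDS.user = quartz
org.quartz.dataSource.myDS.password = secret

The matching table script (tables_<database>.sql) ships in the Quartz distribution's docs/dbTables directory.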
