Validate failed: invalid checksum for migration (Evolve)

I use Evolve to automate my database changes and keep those changes in sync across all my environments and development teams. Evolve ran fine before, but I am now getting an error: Validate failed: invalid checksum for migration. Below is the command I use.
C:\Users\HP\Desktop\MywamProject\evolve_2.4.0_Windows-64bit>evolve migrate mysql -c "User Id=root;password=root;Host=localhost;Port=3306;Database=saas_catalogdb;" -l "C:\\Users\\HP\\Desktop\\MywamProject\\mywam.saas.backend.api\\docker-database\\evolve\\catalogdb"
Executing Migrate...
Evolve initialized.
Validate failed: invalid checksum for migration: V120__Insert_into_sa_report_proforma_detail.sql.
May I know which part I am getting wrong? Hope someone can guide me on how to solve this problem. Thanks.

You can fix this issue by repairing the checksums of the already applied migrations: run the repair command instead of migrate.
Example:
evolve repair mysql -c ...the rest of the command you need
In your case it should look like this:
evolve repair mysql -c "User Id=root;password=root;Host=localhost;Port=3306;Database=saas_catalogdb;" -l "C:\\Users\\HP\\Desktop\\MywamProject\\mywam.saas.backend.api\\docker-database\\evolve\\catalogdb"
See this link for more details on the available commands and options:
https://evolve-db.netlify.app/configuration/options/
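Once repair has updated the stored checksums to match the scripts on disk (assuming the change to V120__Insert_into_sa_report_proforma_detail.sql was intentional), you can re-run your original migrate command, e.g.:
evolve migrate mysql -c "User Id=root;password=root;Host=localhost;Port=3306;Database=saas_catalogdb;" -l "C:\\Users\\HP\\Desktop\\MywamProject\\mywam.saas.backend.api\\docker-database\\evolve\\catalogdb"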

Related

DSBulk CSV Load Failure to DataStax Astra Cassandra Database, missing file config.json

I am trying to load a csv into a database in DataStax Astra using the DSBulk tool.
Here is the command I ran minus the sensitive details:
dsbulk load -url D:\\App\\data.csv -k data -t data -b D:\\App\\secure-connect-myapp -u username -p password
Here is the error I get back:
Operation LOAD_20221206-004421-512000 failed: Invalid bundle: missing file config.json.
Here is the full log:
2022-12-06 00:44:21 INFO Username and password provided but auth provider not specified, inferring PlainTextAuthProvider
2022-12-06 00:44:21 INFO A cloud secure connect bundle was provided: ignoring all explicit contact points.
2022-12-06 00:44:21 INFO A cloud secure connect bundle was provided and selected operation performs writes: changing default consistency level to LOCAL_QUORUM.
2022-12-06 00:44:21 INFO Operation directory: C:\Program Files\dsbulk-1.10.0\bin\logs\LOAD_20221206-004421-512000
2022-12-06 00:44:21 ERROR Operation LOAD_20221206-004421-512000 failed: Invalid bundle: missing file config.json.
java.lang.IllegalStateException: Invalid bundle: missing file config.json
at com.datastax.oss.driver.internal.core.config.cloud.CloudConfigFactory.createCloudConfig(CloudConfigFactory.java:114)
at com.datastax.oss.driver.api.core.session.SessionBuilder.buildDefaultSessionAsync(SessionBuilder.java:876)
at com.datastax.oss.driver.api.core.session.SessionBuilder.buildAsync(SessionBuilder.java:817)
at com.datastax.oss.driver.api.core.session.SessionBuilder.build(SessionBuilder.java:835)
at com.datastax.oss.dsbulk.workflow.commons.settings.DriverSettings.newSession(DriverSettings.java:560)
at com.datastax.oss.dsbulk.workflow.load.LoadWorkflow.init(LoadWorkflow.java:145)
at com.datastax.oss.dsbulk.runner.WorkflowThread.run(WorkflowThread.java:52)
The error says that config.json is missing, but it isn't. So I'm stuck. Unless it's looking somewhere other than in the bundle I specified, but the bundle definitely has the config.json file.
This error:
...
java.lang.IllegalStateException: Invalid bundle: missing file config.json
at com.datastax.oss.driver.internal.core.config.cloud.CloudConfigFactory.createCloudConfig(CloudConfigFactory.java:114)
...
indicates that the Java driver bundled with DSBulk is unable to connect to your Astra DB because it couldn't get the configuration details from the secure connect bundle.
Please make sure that a valid secure connect bundle ZIP is accessible to DSBulk. You need to provide the path to the ZIP file itself, not just the directory that contains it. For example:
$ dsbulk ... -b /path/to/secure-connect-db.zip ...
Please check the path in your command then try again. Cheers!
In order for you to leverage the DataStax Bulk Loader (aka DSBulk, in short), you need to pass in the secure connect bundle (SCB) correctly, meaning you need either the fully qualified path or the relative path to the SCB file.
The correct command in your case would look like:
./dsbulk load -url 'D:\\App\\data.csv' -k data -t data -b 'D:\\App\\secure-connect-myapp.zip' -u username -p password
Note that the -b option takes the full SCB filename, including the .zip file extension.
Other Resources:
Load data using DSBulk into DataStax Astra DB
-b command-line option reference
BONUS TIP: One could easily configure everything within a configuration file and leverage that. See documentation for additional details.
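For example, a minimal configuration-file sketch might look like the one below. This is only illustrative: the file name and the exact option keys should be checked against the DSBulk configuration reference for your version, and the values simply mirror the command-line example above.
# application.conf (illustrative sketch)
dsbulk {
  connector.name = "csv"
  connector.csv.url = "D:\\App\\data.csv"
  schema.keyspace = "data"
  schema.table = "data"
}
datastax-java-driver {
  basic.cloud.secure-connect-bundle = "D:\\App\\secure-connect-myapp.zip"
}
You could then run something like dsbulk load -f application.conf -u username -p password, keeping the credentials on the command line.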

Upload from GitLab to Artifactory during pipeline fails occasionally

Occasionally, the first upload of artifacts during a GitLab pipeline fails.
I'm getting the following error message in the logs:
2019-08-01 13:43:14,149 [http-nio-8082-exec-187] [ERROR] (o.j.s.b.p.t.FilePersistenceHelper:87) - Failed moving 'path_to_artifactory\filestore_pre\dbRecord123.bin' to 'path_to_artifactory\filestore\5e\5ecc5f719b4442b9b04f9010646d34917aca8ca2'. Access to file denied null
2019-08-01 13:43:14,149 [http-nio-8082-exec-187] [ERROR] (o.a.w.s.RepoFilter :251) - Upload request of products-stage-qa:file_to_upload failed due to {}
java.nio.file.AccessDeniedException: Failed to persist file with sha1: 5ecc5f719b4442b9b04f9010646d34917aca8ca2
This seems to happen only during builds, but not during other uploads directly by a user.
It doesn't happen all the time, and only on first tries. But I haven't found any logic when the first try fails or succeeds. It doesn't seem to have anything to do with file types or the like. I can't really determine if it has anything to do with network speeds though since I only have access to part of the infrastructure.
I found an open ticket with the same error message, but only for Conan; for us it only happens with Ivy repositories.
We are using Artifactory 6.9.1 and GitLab 12.0.3 Starter.
This looks to be a permission issue. You are getting an error message that states that the move failed due to "Access to file denied".
You can try to log in to the server using the "artifactory" user and manually move the file called "path_to_artifactory\filestore_pre\dbRecord123.bin" to "path_to_artifactory\filestore\5e\5ecc5f719b4442b9b04f9010646d34917aca8ca2" and see if you have any issues with this. To log in to the server with the "artifactory" user you can use the command "sudo -s -u artifactory".
You will also need to make sure that all filestore and its subdirectories are owned by the "artifactory" user and have the correct permissions.
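As a sketch (assuming a Linux host; "path_to_artifactory" is the placeholder from the log above, and the service user may be named differently in your installation), the manual check could look like:
sudo -s -u artifactory
mv path_to_artifactory/filestore_pre/dbRecord123.bin path_to_artifactory/filestore/5e/5ecc5f719b4442b9b04f9010646d34917aca8ca2
exit
chown -R artifactory:artifactory path_to_artifactory/filestore path_to_artifactory/filestore_pre
If the mv fails with a permission error, the ownership or permissions on those directories are the likely culprit.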
Hope this helps.

How to store downloads folder on our private repo in yocto

After a successful "bitbake core-image-sato" build, I moved the downloads folder to my private repository, then deleted the local downloads folder and fetched it back from my private repository.
I added BB_NO_NETWORK = "1" to local.conf, and when I try to run "bitbake core-image-sato" again, it fails.
NOTE: Executing RunQueue Tasks
ERROR: gnu-config-native-20150728+gitAUTOINC+b576fa87c1-r0 do_fetch: Network access disabled through BB_NO_NETWORK (or set indirectly due to use of BB_FETCH_PREMIRRORONLY) but access requested with command LANG=C git -c core.fsyncobjectfiles=0 fetch -f --prune --progress git://git.savannah.gnu.org/config.git refs/*:refs/* (for url git://git.savannah.gnu.org/config.git)
ERROR: gnu-config-native-20150728+gitAUTOINC+b576fa87c1-r0 do_fetch: Function failed: base_do_fetch
ERROR: Logfile of failure stored in: /home/jamal/test/new_repot/build/tmp/work/x86_64-linux/gnu-config-native/20150728+gitAUTOINC+b576fa87c1-r0/temp/log.do_fetch.29816
ERROR: Task (virtual:native:/home/jamal/test/new_repot/sources/poky/meta/recipes-devtools/gnu-config/gnu-config_git.bb:do_fetch) failed with exit code '1'
It is trying to fetch the source code from the network again, and since network access is disabled, it fails.
Can you guys please help me in resolving this problem. Thanks for your time and patience.
The problem is the missing BB_GENERATE_MIRROR_TARBALLS = "1" in local.conf. By default, tarballs of git repositories are not created for performance reasons (see the manual). Setting this variable enables creation of the tarballs, so they can be reused later and the git server does not need to be contacted.
(Please see the comments on the question for more information; we discussed the solution there. Thanks to md.jamal for testing it.)
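A minimal local.conf sketch for this workflow, assuming the standard own-mirrors class and a placeholder mirror URL for your private repository:
# On the machine that populates the mirror: also create tarballs of git fetches
BB_GENERATE_MIRROR_TARBALLS = "1"
# On machines that consume the mirror: point the fetcher at it and forbid network access
INHERIT += "own-mirrors"
SOURCE_MIRROR_URL = "http://example.com/my-source-mirror"
BB_NO_NETWORK = "1"
The downloads directory (including the newly generated mirror tarballs) is what you copy into the private repository.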

Spark Job fails connecting to oracle in first attempt

We are running a Spark job that connects to Oracle and fetches some data. Attempt 0 or 1 of the JDBCRDD task always fails with the error below; on a subsequent attempt the task completes. As suggested on a few portals, we even tried the -Djava.security.egd=file:///dev/urandom Java option, but it didn't solve the problem. Can someone please help us fix this issue?
java.sql.SQLRecoverableException: IO Error: Connection reset by peer, Authentication lapse 59937 ms.
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:794)
at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:688)
The issue was with java.security.egd only. Setting it through the command line, i.e. -Djava.security.egd=file:///dev/urandom, was not working, so I set it through System.setProperty within the job. After that, the job no longer throws SQLRecoverableException.
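A minimal sketch of that approach in a Scala Spark job (the object name, app name, and JDBC settings below are illustrative placeholders, not taken from the original job):
import org.apache.spark.sql.SparkSession

object OracleFetchJob {
  def main(args: Array[String]): Unit = {
    // Set the entropy source before the Oracle JDBC driver opens its first connection
    System.setProperty("java.security.egd", "file:///dev/urandom")

    val spark = SparkSession.builder().appName("oracle-fetch").getOrCreate()

    // Illustrative JDBC read; URL, table and credentials are placeholders
    val df = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/service")
      .option("dbtable", "SOME_TABLE")
      .option("user", "user")
      .option("password", "password")
      .load()

    df.show()
    spark.stop()
  }
}
Note that System.setProperty only affects the JVM it runs in (here, the driver); if the JDBC connections are opened on the executors, you may also need to pass the option via spark.executor.extraJavaOptions.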
This exception has nothing to do with Apache Spark. "SQLRecoverableException: IO Error:" is simply the Oracle JDBC driver reporting that its connection to the DBMS was closed out from under it while in use. The real problem is at the DBMS, for example if the session died abruptly. Please check the DBMS error log and share it in the question.
You can find a similar problem here:
https://access.redhat.com/solutions/28436
The fastest way is to export the SPARK_SUBMIT_OPTS environment variable before running your job, like this:
export SPARK_SUBMIT_OPTS=-Djava.security.egd=file:dev/urandom
I'm using Docker, so for me the full command is:
docker exec -it spark-master bash -c "export SPARK_SUBMIT_OPTS=-Djava.security.egd=file:dev/urandom && /spark/bin/spark-submit --verbose --master spark://172.16.9.213:7077 /scala/sparkjob/target/scala-2.11/sparkjob-assembly-0.1.jar"
This exports the variable and then submits the job.

SSRS Migration Sharepoint Integrated to Standalone

I'm using RS.exe to migrate from a SharePoint-integrated SSRS server to a standalone SSRS server. When I run the command that I think should work, I get an error about a missing SiteUrl parameter. I want to copy all contents from the source SSRS box to the destination, so my understanding is that the defaults should be acceptable. Documentation for this migration path seems thin. I'd appreciate help figuring out how to get this done.
Below are the command and error text:
c:\IT>rs.exe -i ssrs_migration.rss -e Mgmt2010 -s http://SPssrs/ReportServer -v ts="http://reporting/ReportServer"
Retrieve and report the list of items that will be migrated. You can cancel the script after step 1 if you do not want to start the actual migration.
Retrieving schedules:
Unhandled exception:
The value for parameter 'SiteUrl' is not specified. It is either missing from the function call, or it is set to null.
Try pointing -s at the SharePoint SOAP endpoint (for a SharePoint-integrated report server the web service lives under /_vti_bin/ReportServer):
rs.exe -i ssrs_migration.rss -e Mgmt2010 -s http://SPssrs/_vti_bin/ReportServer -v ts="http://reporting/ReportServer"
