ArangoDB Corruption: Bad table magic number

We're using ArangoDB 3.3.5 with RocksDB as a single instance.
We shut down the machine where arangod is running, and after the reboot the service didn't come up again, logging the following warning:
Corruption: Bad table magic number
Is there a way to repair the database, or any other way to get rid of the problem?

This is an issue with the RocksDB files. Please try to start arangod with --log.level TRACE, save the log file, open a GitHub issue, and attach the corresponding log file.
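A minimal sketch of capturing that trace log, assuming a default single-instance install (the database directory path is only an example, adjust it to your setup):
# run arangod in the foreground with trace logging and keep a copy of the output
arangod --log.level TRACE --database.directory /var/lib/arangodb3 2>&1 | tee arangod-trace.log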
Cheers.

Related

Cannot restart neo4j with error "No space left on device"

We have Neo4j 3.3.5 community version running on Ubuntu 16.04.4 LTS, and it crashed while we were creating relationships between certain nodes.
The relationships are made with a Python program using the py2neo module. When the process crashed, the following error was produced by py2neo:
[2018-07-06 15:47:33] The database has encountered a critical error, and needs to be restarted. Please see database logs for more details.
[2018-07-06 15:47:33] [Errno 24] Too many open files
The latter was printed 1830 times.
The error shown in the neo4j log file is "device out of space". We then cleared some space on the disk; there are now more than 3.5 GB available.
The problem we are now having is that we cannot restart neo4j any more.
The first error was:
"Store and its lock file has been locked by another process: /var/lib/neo4j/data/databases/graph.db/store_lock. Please ensure no other process is using this database, and that the directory is writable (required even for read-only access)".
For this, we found a solution online that suggested killing the Java processes connected to "org.neo4j.server". However, killing these processes did not help and we still cannot start the server with the command:
sudo service neo4j start
It now produces the following error:
*..."java.io.IOException: No space left on device". *
But when we check the disk there is more than enough space, and the inodes seem alright.
Another suggestion is to reinstall the database. However, we would like to understand what happened before we take such a drastic step, in order to prevent it from happening again.
Any ideas and/or suggestions will be greatly appreciated.
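For reference, this is roughly how the disk, inode, and open-file checks mentioned above can be run; the store path and the process pattern are assumptions based on a default Debian/Ubuntu package install:
# free space and free inodes on the filesystem holding the Neo4j store
df -h /var/lib/neo4j
df -i /var/lib/neo4j
# number of files the Neo4j Java process has open, compared with its limit
NEO_PID=$(pgrep -f org.neo4j)
ls /proc/$NEO_PID/fd | wc -l
grep "open files" /proc/$NEO_PID/limits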

ArangoDB - Help diagnosing database corruption after system restart

I've been working with Arango for a few months now within a local, single-node development environment that regularly gets restarted for maintenance reasons. About 5 or 6 times now my development database has become corrupted after a controlled restart of my system. When it occurs, the corruption is subtle in that the Arango daemon seems to start ok and the database structurally appears as expected through the web interface (collections, documents are there). The problems have included the Foxx microservice system failing to upload my validated service code (generic 500 service error) as well as queries using filters not returning expected results (damaged indexes?). When this happens, the only way I've been able to recover is by deleting the database and rebuilding it.
I'm looking for advice on how to debug this issue - such as what to look for in log files, server configuration options that may apply, etc. I've read most of the development documentation, but only skimmed over the deployment docs, so perhaps there's an obvious setting I'm missing somewhere to adjust reliability/resilience? (this is a single-node local instance).
Thanks for any help/advice!
Please note that issues like this should rather be discussed on GitHub.
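Not an official recipe, but a reasonable place to start the next time this happens: raise the log detail and scan the server log for RocksDB-related errors around the restart. The config and log paths below assume a default Linux package install of ArangoDB 3.x:
# /etc/arangodb3/arangod.conf - increase logging detail (equivalent to --log.level trace on the command line)
# [log]
# level = trace

# after the next restart, look for corruption hints in the server log
grep -iE "corrupt|rocksdb|error" /var/log/arangodb3/arangod.log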

MongoDB runs FTDC

So I just installed MongoDB and when I run it, it runs fine, but it gives me this warning or error:
2017-07-24T12:48:44.119-0700 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
[screenshot: mongoDB warning]
Now I have my PATH going to the right place:
C:\Program Files\MongoDB\Server\3.4\bin
I also have my data directory in the right place: C:\data\db
The folder is full of different files from MongoDB.
I looked into my DBs and everything is still saved; no files are corrupted or missing.
If anyone can help, that would be greatly appreciated. Thanks!
This error means that your MongoDB deployment was not shut down cleanly. This is also shown in the screenshot of the log you posted, where it says Detected unclean shutdown. Typically this is the result of using kill -9 in a UNIX environment to force-kill the mongod process. It could also be the result of a hard crash of the OS.
CTRL-C should result in a clean shutdown, but it is possible that something interferes with mongod during its shutdown process, or that the OS hard-restarts while the shutdown is in progress. To shut down your mongod cleanly, it's usually best to issue a db.shutdownServer() command in the mongo shell connected to the server in question.
FTDC is diagnostic data recorded by MongoDB for troubleshooting purposes, and it can be safely removed to avoid the FTDC startup warning you are seeing. In your deployment, the diagnostic data should be located in the C:\data\db\diagnostic.data directory.
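A minimal sketch of both steps on Windows, using the data path from the question (the first command assumes mongod is still running and reachable on the default port):
:: ask the running server for a clean shutdown via the mongo shell
mongo admin --eval "db.shutdownServer()"
:: with mongod stopped, remove the old FTDC data to clear the startup warning
rmdir /s /q "C:\data\db\diagnostic.data"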
The WiredTiger storage engine is quite resilient and was designed to cope with hard crashes like this. However, if it was a hardware/OS crash, it's best to check your disk integrity to ensure that there is no hardware-level storage corruption. Please refer to your OS documentation for instructions on how to check storage integrity, as the method differs from OS to OS.

Script for incremental backup of MySQL (Workbench) on Linux

I have an issue related to how to do incremental backups of MySQL (Workbench).
Can anyone tell me the script to back this up?
I want to back up every day and keep incremental difference files.
Can anyone give me a sample script for that?
Thanks,
Veasna.
The binary log (mysql-bin.log) is essentially an incremental backup. It allows you to revert to a previously stable database state.
See http://dev.mysql.com/doc/mysql-backup-excerpt/5.0/en/backup-policy.html
Making Incremental Backups by Enabling the Binary Log,
http://dev.mysql.com/doc/refman/5.6/en/backup-methods.html
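A minimal daily sketch built on that approach, assuming the binary log is enabled and that the paths below (which are only examples) match your server:
# /etc/mysql/my.cnf - enable the binary log so changes between dumps are recorded
# [mysqld]
# log-bin = /var/log/mysql/mysql-bin
# expire_logs_days = 7

# nightly full dump that also rotates the binary log; binlogs written after this point are the incremental part
mysqldump --single-transaction --flush-logs --all-databases > /backup/full-$(date +%F).sql
# copy the closed binary log files somewhere safe as the incremental backups
cp /var/log/mysql/mysql-bin.[0-9]* /backup/binlogs/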
May I know from where you are restarting the service: through the command prompt or from your control panel?
Share your error message here; you will get further details if anyone knows.

Short read or OOM loading DB. Unrecoverable error, aborting now

After restarting my server I am not able to start Redis. In the log I found this message: "Short read or OOM loading DB. Unrecoverable error, aborting now." I am new to Redis and don't know what to do to resolve the issue. Also, I am not able to find any solid solution for this. Please help.
Warning: This will permanently delete your database.
Use only if you don't care about the data stored or if you have a backup.
I solved the problem like this:
rm -rf /var/lib/redis/dump.rdb
rm -rf /var/run/redis.pid
service redis-server start
Then it is OK.
The cause of this error might be similar to a known one.
Your disk is full, so when Redis tries to create a DB file it fails because there is no space left on the disk, and it creates a zero-sized DB file. Starting Redis then fails because of the zero-sized DB file. On CentOS the DB file path is:
/var/lib/redis/dump.rdb
In newer versions of Redis this bug is fixed; if you use an older version of Redis, simply removing dump.rdb will work for you. But only do this if the dump.rdb file size is zero, otherwise don't, because you might lose data.
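A quick way to make those checks before deleting anything, assuming the default CentOS/Debian paths used above:
# how much space is left on the filesystem holding the Redis data
df -h /var/lib/redis
# size of the RDB file; only consider removing it if this shows 0 bytes and you can afford to lose the data
ls -lh /var/lib/redis/dump.rdb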
