So I just installed MongoDB, and when I run it, it runs fine, but it gives me this warning or error:
2017-07-24T12:48:44.119-0700 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
[screenshot: mongoDB warning]
Now I have my PATH going to the right place:
C:\Program Files\MongoDB\Server\3.4\bin
I also have my data directory in the right place: C:\data\db
The folder is full of different files from MongoDB.
I looked into my DBs and everything is still saved; no files are corrupted or missing.
If anyone can help, that would be greatly appreciated. Thanks!
This error means that your MongoDB deployment was not shut down cleanly. This is also shown in the screenshot of the log you posted, where it says Detected unclean shutdown. Typically this is the result of using kill -9 in a UNIX environment, force-killing the mongod process. It could also be the result of a hard crash of the OS.
CTRL-C should result in a clean shutdown, but it is possible that something interferes with mongod during its shutdown process, or that the OS hard-restarts while the shutdown is in progress. To shut down your mongod cleanly, it's usually best to run db.shutdownServer() from a mongo shell connected to the server in question.
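The same clean shutdown can also be issued from a driver. Below is a minimal Python sketch using pymongo; the host, port, and lack of authentication are assumptions, so adjust them to your deployment:

```python
# Minimal sketch of a clean shutdown from Python using pymongo.
# Assumptions: pymongo is installed, mongod listens on localhost:27017,
# and no authentication is required.
from pymongo import MongoClient
from pymongo.errors import AutoReconnect, ConnectionFailure

client = MongoClient("localhost", 27017)
try:
    # Same effect as db.shutdownServer() in the mongo shell: the server
    # flushes its data and exits, so the connection is expected to drop.
    client.admin.command("shutdown")
except (AutoReconnect, ConnectionFailure):
    pass  # expected: the server closed the connection while shutting down
```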
FTDC ("full-time diagnostic data capture") is diagnostic data recorded by MongoDB for troubleshooting purposes, and it can be safely removed to clear the FTDC startup warning you are seeing. In your deployment, the diagnostic data should be located in the C:\data\db\diagnostics.data directory.
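Only do this while mongod is stopped. As a sketch, the cleanup could look like this in Python, using the data path from the question (mongod recreates the directory on the next startup):

```python
# Sketch: remove the FTDC diagnostics directory so the startup warning
# goes away. Run this only while mongod is stopped; the path is the
# data path from the question.
import os
import shutil

ftdc_dir = r"C:\data\db\diagnostics.data"
if os.path.isdir(ftdc_dir):
    shutil.rmtree(ftdc_dir)
```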
The WiredTiger storage engine is quite resilient and was designed to cope with hard crashes like this. However, if it was a hardware/OS crash, it's best to check your disk integrity to ensure that there is no hardware-level storage corruption. Please refer to your OS documentation for instructions, as the method differs from OS to OS.
Related
We have Neo4j 3.3.5 community version running on Ubuntu 16.04.4 LTS, and it crashed while we were creating relationships between certain nodes.
The relationships are created by a Python program using the py2neo module. When the process crashed, py2neo produced the following error:
[2018-07-06 15:47:33] The database has encountered a critical error, and needs to be restarted. Please see database logs for more details.
[2018-07-06 15:47:33] [Errno 24] Too many open files
The latter was printed 1830 times.
The error shown in the neo4j log file is "device out of space". We then cleared some space on the disk; now more than 3.5 GB are available.
The problem we are now having is that we cannot restart neo4j any more.
The first error was:
"Store and its lock file has been locked by another process: /var/lib/neo4j/data/databases/graph.db/store_lock. Please ensure no other process is using this database, and that the directory is writable (required even for read-only access)".
For this, we found a solution online that suggested killing the java processes connected to "org.neo4j.server". However, killing these processes did not help, and we still cannot start the server with the command:
sudo service neo4j start
It now produces the following error:
..."java.io.IOException: No space left on device".
But when we check the disk there is more than enough space, and the inodes seem alright.
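For reference, free space and free inodes can be verified together with df -h and df -i; the same check as a Python sketch (the Neo4j data path below is an assumption):

```python
# Sketch: report free disk space and free inodes for the Neo4j data path,
# mirroring what `df -h` and `df -i` show on the command line.
import os

st = os.statvfs("/var/lib/neo4j/data")
free_bytes = st.f_bavail * st.f_frsize   # space available to unprivileged users
free_inodes = st.f_favail                # inodes available to unprivileged users
print(f"free space : {free_bytes / 2**30:.2f} GiB")
print(f"free inodes: {free_inodes}")
```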
Another suggestion is to reinstall the database. However, we would like to understand what happened before taking such a drastic step, in order to prevent it from happening again.
Any ideas and/or suggestions will be greatly appreciated.
We're using ArangoDB 3.3.5 with RocksDB as a single instance.
We shut down the machine where arangod is running, and after the reboot the service didn't come up again, logging the following warning:
Corruption: Bad table magic number
Is there a way to repair the database? Or any other ways to get rid of the problem?
This is an issue with the RocksDB files. Please try starting arangod with --log.level TRACE, save the log file, then open a GitHub issue and attach the corresponding logfile.
Cheers.
I am performing forensic analysis on host-based evidence, examining partitions of a hard drive from a server.
I am interested in finding the processes all the "users" ran before the system died/rebooted.
As this isn't live analysis I can't use ps or top to see the running processes.
So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran.
I have gone through a lot of logs in /var/log/* - they give me information about logins, package updates, authorization - but nothing about the processes.
If "command accounting" (process accounting) was not enabled, there is no such log.
The chances of finding something are not great; anyway, a few things to consider:
it depends on how graceful the death/reboot was (if processes were killed gracefully, .bash_history and similar files may have been updated with recent session info)
utmp and wtmp files may give the list of users active at the time of the reboot.
the OS may have saved a crash dump (depends on the Linux distribution). If so, you may be able to examine the OS state at the moment of the crash. See Red Hat's crash utility for details (http://people.redhat.com/anderson/crash_whitepaper/).
/tmp and /var/tmp may hold some clues about what was running
any files with mtime and ctime timestamps (maybe atime too) near the time of the crash; a search sketch follows this list
maybe you can get something useful from the swap partition (especially if the reboot was related to heavy RAM usage).
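As an illustration of the timestamp idea above, here is a Python sketch that walks a mounted evidence tree and flags files touched near the crash; the mount point and crash time are hypothetical placeholders to adjust for the case:

```python
# Sketch: walk a read-only mounted evidence tree and list files whose
# mtime or ctime falls near the suspected crash time. MOUNT and
# CRASH_TIME are hypothetical; adjust them to the case at hand.
import os
import time

MOUNT = "/mnt/evidence"
CRASH_TIME = time.mktime(time.strptime("2023-01-01 12:00", "%Y-%m-%d %H:%M"))
WINDOW = 15 * 60  # seconds either side of the crash

for root, dirs, files in os.walk(MOUNT):
    for name in files:
        path = os.path.join(root, name)
        try:
            st = os.lstat(path)
        except OSError:
            continue
        if (abs(st.st_mtime - CRASH_TIME) < WINDOW
                or abs(st.st_ctime - CRASH_TIME) < WINDOW):
            print(time.ctime(st.st_mtime), path)
```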
"So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran"
Given the OS implied by the path /var/log, I am assuming you are using Ubuntu or another Linux-based server. If you were not doing live forensics while the box was running or memory forensics (where a memory capture was grabbed), and you rebooted the system, there is no file within /var/log that will attribute processes to users. However, if a user was using the bash shell, you could check that user's .bash_history file, which shows the commands they ran, up to the history limit (500 lines by default for the bash shell).
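A quick way to sweep those history files on a mounted evidence image is sketched below in Python; the mount point is an assumption, and bash history carries no timestamps unless HISTTIMEFORMAT was set, so treat the contents as unordered hints:

```python
# Sketch: collect the tail of each user's .bash_history from a mounted
# evidence tree. The mount point is an assumption.
import glob
import os

MOUNT = "/mnt/evidence"
candidates = glob.glob(os.path.join(MOUNT, "home", "*", ".bash_history"))
candidates.append(os.path.join(MOUNT, "root", ".bash_history"))

for hist in candidates:
    if not os.path.isfile(hist):
        continue
    user = os.path.basename(os.path.dirname(hist))
    with open(hist, errors="replace") as f:
        lines = f.read().splitlines()
    print(f"--- {user}: last {min(10, len(lines))} commands ---")
    for cmd in lines[-10:]:
        print(cmd)
```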
Alternatively, if a memory dump was made (/dev/mem or /dev/kmem), then you could use Volatility to pull out the processes that were run on the box. But even then, I do not think you could attribute the processes to the users that ran them; you would need additional output from Volatility for that link to be made.
I want to restart MongoDB (the startup code is in a .bat file) from Node.js if it was stopped by accident.
Is there any module that can do this job?
Based on your latest two comments, there are a few things that you should take note of.
You are unable to restart the mongo Windows service because there is a lock file. The CPU usage increases because mongod is attempting an automatic restart.
When the mongod process/service is uncleanly shut down, there will be a lock file under your data path. Run dir on the data directory, and I believe you will find the lock file mongod.lock there.
You do not need to reboot your PC; simply remove the lock file (you may need to disable the service to do that) and restart the service.
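For illustration, the cleanup step could be scripted like this (a Python sketch; the data path matches the question, and it assumes no mongod process is currently running):

```python
# Sketch of the cleanup described above: delete the stale lock file so
# the service can start. Make sure no mongod process is running first.
import os

lock_file = r"C:\data\db\mongod.lock"
if os.path.exists(lock_file):
    os.remove(lock_file)
    print("removed", lock_file)
else:
    print("no lock file found")
```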
There are two SERVER tickets related to the restart behaviour on Windows:
Ticket 3582 - this fix (where an automatic restart of mongod no longer occurs) is from version 2.1.0 onwards. 2.1.x is the development branch.
MongoDB 2.2.0-rc0 was recently released and is ready for testing. It is the culmination of the 2.1.x development series.
2.2 Release Notes: http://docs.mongodb.org/manual/release-notes/2.2
Downloads: http://www.mongodb.org/downloads
Change Log: https://jira.mongodb.org/browse/SERVER/fixforversion/11218
The second ticket, which is currently in the planning stage, will be a longer-term fix.
It would be much better to install MongoDB as a Windows Service instead of running from a .bat file.
Then you can use the normal service features such as automatic startup and recovery.
I'm running HSQLDB in server mode on a Linux server and finding that it occasionally gets killed. I'd like to be able to detect that it's stopped running and then kick off a process that starts it up again.
The DB isn't hit very often, so polling wouldn't have to be very frequent; once every five minutes would do.
Look at Monit:
Monit is a free open source utility for managing and monitoring processes, files, directories and filesystems on a UNIX system. Monit conducts automatic maintenance and repair and can execute meaningful causal actions in error situations.
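If you'd rather not add a dependency, the poll-and-restart loop that Monit automates is easy to sketch yourself. A minimal Python version follows; the pgrep pattern and the start command are assumptions to adapt to your installation:

```python
# Minimal watchdog sketch: poll every five minutes and restart HSQLDB
# if its process is gone. The pgrep pattern and the start command are
# assumptions; adjust them to how HSQLDB is launched on your server.
import subprocess
import time

CHECK_INTERVAL = 5 * 60                       # seconds, per the question
PATTERN = "org.hsqldb.Server"                 # JVM main class to look for
START_CMD = ["/etc/init.d/hsqldb-server", "start"]

while True:
    # pgrep -f returns a non-zero exit code when nothing matches
    if subprocess.run(["pgrep", "-f", PATTERN], capture_output=True).returncode != 0:
        subprocess.run(START_CMD)
    time.sleep(CHECK_INTERVAL)
```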
If you are using some type of Debian, you might try installing HSQLDB using "apt-get install hsqldb-server". That will give you a nice install and the ability to start it with "/etc/init.d/hsqldb-server start".
This will also take care of restarting it if your machine reboots. If you get everything installed correctly the problem of it getting killed may just go away.
I was running into some weird issues starting and stopping hsqldb, but once I got it installed correctly everything took care of itself.