Over a long enough time the ArangoDB log files can grow large. I do not have much access to the systems that will be running ArangoDB, so I am hoping I can have it rotate log files on some schedule and keep a fixed number of old ones. It looks like the ArangoDB Starter can do this, but is there an option to have arangod itself rotate the logs?
I have tried looking through what the Starter changes so I can add it to my configuration file, but I think I am going in the wrong direction.
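A sketch of the external-rotation fallback I may end up with, assuming arangod re-opens the file named by --log.file when it receives SIGHUP (an assumption based on the docs, not verified on my version):

```bash
# Assumption: arangod re-opens its log file on SIGHUP, so an external
# rotator (logrotate, a cron script) can move the old file aside and
# then signal the daemon. Paths are placeholders.
mv /var/log/arangodb3/arangod.log /var/log/arangodb3/arangod.log.1
kill -HUP "$(pidof arangod)"
```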
How can this be achieved? I have a catalina.out log on a prod server which is growing fast, 6.7 GB in a couple of days. My idea at the beginning was to create a cron job, executed 2 or 3 days a week, to run a script that copies the Catalina log to Azure Blob storage and then wipes it with just a command like echo "" > file. But moving 2 GB to Azure every time that cron job executes may not be the best idea either; the file is way too big.
Is there a way to keep the logs on another server or in Azure storage? Where should I configure that?
I read something about implementing log4j with Tomcat. Is this also possible, so that log4j moves catalina.out to another server? How can I achieve this? I know the development team should also check why this file is growing and logging so fast, but in the meantime I need a solution to implement.
thanks!!
"I read something about implementing log4j with tomcat, is this possible also?"
I think what you are describing is log rotation; if you want to go that way, here is a blog post about how to configure it.
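As a minimal sketch of such a logrotate rule for catalina.out (the path below is the usual Tomcat layout and may differ on your server; copytruncate is used because Tomcat keeps the file handle open):

```
# /etc/logrotate.d/tomcat -- hypothetical rule; adjust the path to your install
/usr/local/tomcat/logs/catalina.out {
    daily
    rotate 7          # keep a week of rotated logs
    compress
    missingok
    notifempty
    copytruncate      # truncate in place, since Tomcat holds the file open
}
```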
"I had the idea at the beginning to create a cron job to be executed 2 or 3 days a week to run a script that copies the Catalina log to Azure Blob storage"
Yes, you could choose this way to manage the logs, but I still have something to add. If you upload a very large log file to Azure Blob in one go, you may get an error; you need to split a large file into multiple blocks. In this article, under the heading Upload a file in blocks programmatically, there is a detailed description.
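If you script the upload with the Azure CLI instead, it splits large files into blocks for you. A rough sketch of the cron script (the account, container, and paths are placeholders, and it assumes AZURE_STORAGE_KEY is set in the environment):

```bash
#!/bin/sh
# Hypothetical cron script: ship catalina.out to Azure Blob storage, then
# truncate it. `az storage blob upload` uploads large files in blocks.
STAMP=$(date +%Y%m%d-%H%M%S)
az storage blob upload \
    --account-name mystorageaccount \
    --container-name tomcat-logs \
    --name "catalina-$STAMP.out" \
    --file /usr/local/tomcat/logs/catalina.out \
&& : > /usr/local/tomcat/logs/catalina.out   # empty the file only on success
```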
From your description you are not using Azure Web Apps; if you do choose Azure Web Apps, you could also use Azure Functions or WebJobs to run the cron job.
If you still have other questions, please let me know.
We're using ArangoDB 3.3.5 with RocksDB as a single instance.
We shut down the machine where arangod was running, and after the reboot the service didn't come up again, logging the following warning:
Corruption: Bad table magic number
Is there a way to repair the database? Or any other ways to get rid of the problem?
This is an issue with the RocksDB files. Please try to start arangod with --log.level TRACE, save the log file, then open a GitHub issue and attach the corresponding log file.
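For example (the output path here is just an illustration):

```bash
# Start arangod with trace-level logging written to a file
# (--log.output takes a file:// target; the path is an example).
arangod --log.level TRACE --log.output file:///var/log/arangodb3/trace.log
```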
Cheers.
I have a Spark job with some very long running tasks. When the tasks start I can go to the executors tab and see all my executors and their tasks. I can click on the stderr link to see the logs for those tasks, which helps a lot for monitoring. However, after a few hours the stderr link stops working. If you click on it you get java.lang.Exception: Cannot find this log on the local disk..

I dug into it a bit and the issue seems to be that something has decided to gzip the logs. That is, I can still find the log manually by ssh-ing to the worker node and looking in the correct directory (e.g. /mnt/var/log/hadoop-yarn/containers/application_1486407288470_0005/container_1486407288470_0005_01_000002/stderr.gz). It's annoying that this happens, since I now can't monitor my job from the UI. Also, the files are pretty tiny, so the compression doesn't seem helpful (40k uncompressed).

It seems like there are a lot of things that could be causing this: YARN, a logroller cron job, the log4j config in my YARN/Spark distro, AWS (since EMR zips logs and saves 'em to S3), etc. I'm hoping someone can point me in the right direction so I don't have to search a ton of docs.
I'm using AWS EMR at emr-5.3.0 without any custom bootstrap steps.
Just had a similar issue. I haven't looked into how to stop the gzipping, but you can access the logs through the Hadoop interface.
On the left menu, under Tools > Local logs
Then browse to find the log you are interested in.
In my case, the GUI pointed at the gzipped log at /node/containerlogs/container_1498033803655_0037_01_000001/hadoop/stderr.gz/?start=-4096, while through the Local logs menu it was at /logs/containers/application_1498033803655_0037/container_1498033803655_0037_01_000001/stderr.gz.
Hope it helps
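If the UI link is dead, the YARN CLI can usually still dump the container logs, e.g. (reusing the application ID from above just as an example):

```bash
# Dump all container logs for the application, including gzipped ones
# that the web UI can no longer serve (requires log aggregation or
# access to the local log dirs).
yarn logs -applicationId application_1498033803655_0037 | less
```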
This is my first ever post to stackoverflow, so be gentle please. ;>
Ok, I'm using slightly customized Clonezilla-Live CDs to back up the drives on four PCs. Each CD is for a specific PC, saving an image of its disk(s) to a box-specific backup folder on a Samba server. That's all pretty much working. But once in a while Something Goes Wrong and the backup isn't completed properly. Things like: the cat bit through a Cat5e cable; I forgot to check whether the Samba server had run out of room; etc. And it is not always readily apparent that a failure happened.
I will admit right now that I am pretty much a noob as far as Linux system administration goes, even though I somehow managed to set up a CentOS 6 box (I wish I'd picked Ubuntu...) with Samba, git, ssh, and Bitnami-GitLab back in February.
I've spent days and days trying to figure out whether Clonezilla leaves a simple clue in a backup as to whether it succeeded completely or not, and have come up dry. Looking in the folder for a particular backup job (on the Samba server), I see that the last file written is named "clonezilla-img". It seems to be a console dump that covers the backup itself, but it does not seem to include the verification pass.
Regardless of whether the batch backup task succeeded or failed, I can automatically run a post-process bash script that I place on my Clonezilla CDs. I have this set up to run just fine, though it's not doing a whole lot right now. What I would like this post-process script to do is determine whether the backup job succeeded or not, and then rename (mv) the backup job directory to include some word like "SUCCESS" or "FAILURE". I know how to do the renaming part; it's the test for success or failure that I'm at a loss about.
Thanks for any help!
I know this is old, but I've just started on looking into doing something very similar.
For your case I think you could do what you are looking for with ocs_prerun and ocs_postrun scripts.
For my setup I'm using a pen/flash drive for some test systems, and also PXE with an NFS mount. PXE and NFS are much easier to test and modify quickly.
I haven't tested this yet, but I was thinking I might be able to search the logs in /var/log/{clonezilla.log,partclone.log} from an ocs_postrun script to validate success or failure. I haven't seen anything indicating that the result is set in the environment, so I'm thinking the logs might be the quick and easy method, compared with mounting the image or running a CRC check. Clonezilla does have an option to validate the image, and the results of that might be in the local logs.
Another option might be to create a custom ocs_live_run script to do something similar. There is an example at http://clonezilla.org/fine-print-live-doc.php?path=./clonezilla-live/doc/07_Customized_script_with_PXE/00_customized_script_with_PXE.doc#00_customized_script_with_PXE.doc
Maybe the exit code of ocs-sr could be checked in the script? As I said, I haven't tried any of this; these are just some thoughts (see the sketch below).
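To make that concrete, an untested ocs_postrun sketch; the "successfully" pattern and the image directory are assumptions you would need to verify against what your own /var/log/partclone.log actually prints:

```bash
#!/bin/bash
# Hypothetical ocs_postrun script: infer success/failure from the local
# logs and tag the backup directory accordingly. The grep pattern is an
# assumption -- check what partclone.log really prints on success.
IMG_DIR="/home/partimag/my-backup-job"   # placeholder: your image directory

if grep -qi "successfully" /var/log/partclone.log 2>/dev/null; then
    mv "$IMG_DIR" "${IMG_DIR}_SUCCESS"
else
    mv "$IMG_DIR" "${IMG_DIR}_FAILURE"
fi
```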
I updated the above to reflect the log location (/var/log). The logs are in the log folder of course. :p
Regards
I have a cluster of production servers running a Node.js app via Forever. As far as I can tell, my options for log files are as follows:
Let Forever do it on its own, in which case it will log to ~/.forever/XXXX.log
Specify one specific log file for the entire life of the process
What I'd like to do, however, is have it log to a different file every day. eg. 20121027.log, 20121028.log, etc.
Is this possible? If so, how can it be done?
You can use a Linux utility like logrotate to handle log rotation for you.
People use logrotate to rotate logs for things like Apache, etc.
The config files usually live under /etc/logrotate.d.
man logrotate will give you more information, and here is a good tutorial: http://www.thegeekstuff.com/2010/07/logrotate-examples/
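As a sketch, a rule along these lines gets close to the per-day files you want: dateext names each rotated file by date, and copytruncate lets forever keep writing to the same handle. The log path is a placeholder; point it at your actual forever log:

```
# /etc/logrotate.d/forever-app -- hypothetical; adjust the path to your log
/home/deploy/.forever/XXXX.log {
    daily
    rotate 30            # keep a month of logs
    dateext              # rotated files get a date suffix...
    dateformat -%Y%m%d   # ...like XXXX.log-20121027
    compress
    delaycompress        # keep yesterday's file uncompressed for easy tailing
    missingok
    copytruncate         # truncate in place so forever keeps writing
}
```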