I run the NPM global module Forever on my Node server (on Azure). It always works fine to keep all my projects running.
There is one project on my server that perhaps has an issue that causes Forever to keep writing to the log. Two Forever logs grow rapidly to huge sizes:
9.7 GB /home/azureuser/.forever/P_lf.log
1.3 GB /home/azureuser/.forever/IEJR.log
Whilst I probably need to find out what's wrong with my project and fix it, I also need to fix this logging problem. My research suggests I may need to do something with logrotate to stop so much disk space being used.
Any ideas?
There are two options:
You can edit your app to log only the necessary things and errors (and catch errors to prevent them), so your logs will be smaller.
You can set up a cron job to clean up the log files every night (let's say at 03:00):
0 3 * * * truncate -s 0 /home/azureuser/.forever/*.log
or only on some weekdays (Monday, Wednesday and Friday), so you keep logs for a day or two for debugging purposes:
0 3 * * 1,3,5 truncate -s 0 /home/azureuser/.forever/*.log
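If only one project is producing the noise, you can also target just its log instead of everything under ~/.forever. A minimal sketch (the path is the one from the question; forever list and forever logs show which script owns which log file):
# find out which script writes to which log file
forever list
# or: forever logs
# then truncate only the noisy project's log in place
truncate -s 0 /home/azureuser/.forever/P_lf.log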
I am having a problem I have spent multiple days searching for an answer to. I administer a system running CentOS 8 (yes, I know, move to another distribution; we are, within the month). The problem is that if I run "ausearch -m AVC,USER_AVC,SELINUX_ERR -ts today", I find that the system's dbus-daemon does this: msg='avc: received setenforce notice (enforcing=0) exe="/usr/bin/dbus-daemon"'. A few minutes later (about two and a half minutes), the daemon does this: msg='avc: received setenforce notice (enforcing=1) exe="/usr/bin/dbus-daemon"'.
I cannot find any references on the net to any daemon that does this, and it concerns me that it may be the kind of security failure our company does its best to eliminate. Can anyone enlighten me as to what is happening?
This question is basically a duplicate of this one, except that the accepted answer on that question was, "it's not actually slower, you just weren't running the timing command correctly."
In my case, it actually is slower! :)
I'm on Windows 10. Here's the output from PowerShell's Measure-Command (the TotalMilliseconds line represents wall-clock time):
PS> Measure-Command {npm --version}
Days : 0
Hours : 0
Minutes : 0
Seconds : 1
Milliseconds : 481
Ticks : 14815261
TotalDays : 1.71472928240741E-05
TotalHours : 0.000411535027777778
TotalMinutes : 0.0246921016666667
TotalSeconds : 1.4815261
TotalMilliseconds : 1481.5261
A few other numbers, for comparison:
.\node_modules\.bin\mocha: 1300ms
'npm run test' (just runs mocha): 3300ms
npm help: 1900ms.
The node interpreter itself is fine: node -e 0: 180ms
It's not just npm that's slow... mocha reports that my tests only take 42ms, but as you can see above, it takes 1300ms for mocha to run those 42ms of tests!
I've had the same trouble. Do you have Symantec Endpoint Protection? Try disabling Application and Device Control in Change Settings > Client Management > General > Enable Application and Device Control.
(You could disable SEP altogether; for me the command is: "%ProgramFiles(x86)%\Symantec\Symantec Endpoint Protection\smc.exe" -stop.)
If you have some other anti-virus, there's likely a way to disable it as well. Note that closing the app in the Notification area might not stop the virus protection. The problem is likely with any kind of realtime protection that scans a process as it starts. Since node and git are frequently-invoked short-running processes, this delay is much more noticeable.
In PowerShell, I like to measure the performance of git status both before and after that change: Measure-Command { git status }
I ran into this problem long ago; I think it was an extension that I had. I use Visual Studio Code, and when it has no extensions and runs Bash:
//GIT Bash Configuration
"terminal.integrated.shell.windows": "C:\\Program Files\\Git\\bin\\bash.exe",
it actually flies. I use both OSes, so I can tell the difference. Try using different tools and disabling some extensions.
And if that still doesn't work, check your antivirus, maybe it's slowing down the process?
I'd been googling this all day with no luck. I decided to uninstall Java to see what would happen and, bingo, that solved my problem. I know this is an old thread, but I found myself coming back to it so many times to see if I missed anything.
off topic:
Got to figure out how to get Java working now 🤦
Didn't know about Measure-Command, so I'll be using that in the future!
I had this problem. When I tried to run one of my work applications at home, I realized that on my work laptop the app starts in 2 minutes, but on my personal notebook it took 5 minutes or more.
After trying some possible solutions, I finally found the problem: I had installed Git Bash on my D drive partition, which is an HDD. When I reinstalled it on the C drive, which is an SSD, the app started faster. I also moved Node.js to the C drive to prevent other issues.
I'm a beginner at Linux server configuration and I don't have much knowledge about it. I use a Linux Ubuntu root server for a website with a Postgres database. The operating system on my PC is Windows 7.
After some minutes without doing anything (I'm not quite sure how long it takes, maybe 5 minutes or so, not a lot) I lose my connection, which is really annoying. I googled how to fix it, but didn't really find a solution, or didn't understand the ones I found.
For example, I tried to update my postgresql.conf and edited these values:
#tcp_keepalives_idle
#tcp_keepalives_interval
#tcp_keepalives_count
which didn't really help. I want to be able to idle for 30 minutes without losing the connection.
Then I read another solution:
http://www.gnugk.org/keepalive.html
I honestly didn't really understand what those lines I have to add are for.
Because when I check this:
sysctl -A | grep net.ipv4
it shows me:
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
which should mean that I won't lose my connection for 2 hours, shouldn't it?
I also don't really understand what those lines are for... Does it mean that whatever service a client is connected to, he will stay connected for 2 hours, even if he is inactive? No matter whether it is, for example, PostgreSQL or FTP or something else?
Please help me!
Thanks!
André
Okay, it seems that I solved the problem. Although there is no answer here, I just want to explain my solution.
My ISP seems to drop my connection very quickly when I idle for just a few minutes. It seems to be a problem with CGN (carrier-grade NAT).
I solved the problem by setting up keepalive packets with sysctl.
So I used these parameter values:
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 20
net.ipv4.tcp_keepalive_time = 180
which means that after 3 minutes of idle time the first keepalive packet is sent, and while there is no response a new keepalive packet is sent every minute (60 seconds), up to 20 times.
All in all, that prevents my connection from being dropped.
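If you want these values to survive a reboot, a minimal sketch of how to apply them is below (the file name under /etc/sysctl.d/ is arbitrary):
# apply immediately to the running kernel
sysctl -w net.ipv4.tcp_keepalive_time=180 net.ipv4.tcp_keepalive_intvl=60 net.ipv4.tcp_keepalive_probes=20
# persist the values across reboots
cat > /etc/sysctl.d/99-tcp-keepalive.conf <<'EOF'
net.ipv4.tcp_keepalive_time = 180
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 20
EOF
sysctl --system
Note that these kernel settings only take effect for connections whose applications enable SO_KEEPALIVE on their sockets (OpenSSH and PostgreSQL do), which is also why the 2-hour default alone doesn't keep every idle service connection alive.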
Maybe if someone else is having this issue too, this might be a solution for it.
The log files (in /root/.forever) created by forever have reached a large size and are almost filling up the hard disk.
If the log file were to be deleted while the forever process is still running, forever logs 0 will return undefined. The only way for logging of the current forever process to resume is to stop it and start the node script again.
Is there a way to just trim the log file without disrupting logging or the forever process?
So Forever.js will continue to write to the same file handle; ideally it would support something that allows you to send it a signal and have it rotate to a different file.
Without that (which would require a code change in the Forever.js package), your options look like this:
A command line version:
Make a backup
Null out the file
cp forever-guid.log backup && :> forever-guid.log;
This has the slight risk that, if you're writing to the log file at a speedy pace, you'll end up writing a log line between the backup and the nulling, resulting in the loss of that log line.
Use Logrotate w/copytruncate
You can set up logrotate to watch the forever log directory and copy and truncate the logs automatically based on file size or time.
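For instance, a minimal sketch dropped into /etc/logrotate.d/forever might look like the following (the path matches the question; the size and retention values are placeholders to adjust):
# /etc/logrotate.d/forever -- example only, tune to your needs
/root/.forever/*.log {
    # copy the log, then truncate it in place, so forever keeps its open file handle
    copytruncate
    # rotate once a file exceeds 100 MB; keep 7 old, compressed copies
    size 100M
    rotate 7
    compress
    missingok
    notifempty
}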
Have your node code handle this
You can have your logging code look at how long the log file is and then do the copy-and-truncate itself; this would allow you to avoid the potential data loss.
EDIT: I had originally thought that split and truncate could do the job. They probably can, but an implementation would look really awkward. split doesn't have a good way of splitting the file into a short one (the original log) and a long one (the backup). truncate (aside from the fact that it's not always installed) doesn't reset the write pointer, so forever just writes at the same byte offset it would have anyway, resulting in strange data.
You can truncate the log file without losing its handle (reference).
cat /dev/null > largefile.txt
I am trying to create some capacity planning reports and one of the requirements is to have info on memory usage for a few Unix servers.
Now my knowledge of Unix is very low. I usually just log on and run a few scripts.
But for this report I need to gather vmstat data and produce reports based on the previous week's data, broken down by hour, where each hour is an average of vmstat samples taken every 10 seconds.
So, first question: is vmstat logging on by default, and if so, where on the server is the data written?
If not how can I set this up?
Thanks
vmstat is a command that you run; it does not log anything by default.
One week of virtual memory stats spaced out at ten-second intervals (less the last one) is 60,479 ten-second samples.
So the command you want is:
nohup vmstat 10 60479 > myvmstatfile.dat &
This will make a very big file myvmstatfile.dat
EDIT (RobKielty): The & will put this job in the background; the nohup will prevent the task from being hung up when you log out of the command shell. If you run this command, it would be prudent to monitor the disk partition to which this file is being written. Use df -h /path/to/directory/where/outputfile/resides to monitor disk space usage.
I have no idea what you need to do with the data, so I can't help you there.
Create a crontab entry (crontab -e) like this
0 0 * * 0 /path/to/my/vmstat_script.sh
The file vmstat_script.sh will contain the following bash commands:
#!/bin/bash
# vmstat_script.sh
vmstat 10 60479 > myvmstatfile.dat
mv myvmstatfile.dat myvmstatfile.dat.`date +%Y-%m-%d`
This will create one file per week with a name like myvmstatfile.dat.2012-07-01
The command I use for monitoring the Linux vm metrics is below:
nohup vmstat 10 720 | (while read; do echo "$(date '+%d-%m-%Y %H:%M:%S') $REPLY"; done) >> nameofLogfile.log &
Here nohup keeps the process running after you log out, and the trailing & puts it in the background.
It will run for 2 hours at a 10-second interval (720 samples).
This is handy for generating graphs and reports, as a timestamp is included in the log along with the various metrics, so the logs can be filtered accordingly.
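If you then need the hourly averages mentioned in the question, a minimal awk sketch along these lines could post-process that log (the column numbers assume the default vmstat layout with the date and time prepended as fields 1 and 2, so free is field 6 and si/so are fields 9 and 10; adjust them to your output):
# hourly_averages.sh -- hypothetical post-processing sketch, not part of vmstat
awk '
  $3 ~ /^[0-9]+$/ {                       # skip the vmstat header lines
    hour = $1 " " substr($2, 1, 2)        # group key: date plus hour
    free[hour] += $6; si[hour] += $9; so[hour] += $10
    n[hour]++
  }
  END {
    for (h in n)
      printf "%s:00  avg_free=%.0f  avg_si=%.1f  avg_so=%.1f\n", h, free[h]/n[h], si[h]/n[h], so[h]/n[h]
  }
' nameofLogfile.log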