I trust you are all doing well.
We are planning to implement log rotation for the file below:
stdout.log
We use the following logrotate configuration file:
/usr/local/rms/kafka/kafka-connect-fluentd/stdout.log {
daily
rotate 7
maxsize 100M
minsize 10M
copytruncate
delaycompress
compress
notifempty
missingok
}
We have noticed that the file is rotated and truncated, but the application does not write logs to the new file.
We tried sending a HUP signal to the process, and it did not help.
-rw-r--r-- 1 appuser appuser 8.2M Feb 20 03:11 stdout.log.4.gz
-rw-r--r-- 1 appuser appuser 4.0M Feb 20 23:48 stdout.log.3.gz
-rw-r--r-- 1 appuser appuser 7.6M Feb 20 23:49 stdout.log.2.gz
-rw-r--r-- 1 appuser appuser 2.1G Feb 21 03:39 stdout.log.1
-rw-r--r-- 1 appuser appuser 2.2G Feb 21 14:15 stdout.log
The application itself does not have a reload option; we stop and start the application when we need to reload or restart it.
We use the following command to bring up the application:
nohup connect-standalone ${BASE}/connect-standalone.properties \
    ${BASE}/FluentdSourceConnector.properties >& ${BASE}/stdout.log &
We use the following command to kill the application:
kill -9 <processid>
How do we implement a log rotation mechanism for this situation?
>& FILE
Is the obsolete syntax for:
> FILE 2>&1
The > FILE redirects standard output of the command to a file named FILE. However, before that happens, "if the file does not exist it is created; if it does exist it is truncated to zero size".
So each time you restarted your command, your file was (properly) truncated by the shell. What you want is to append to the file. Do that by using >> redirection. Since you also want to redirect both stdout and stderr, use:
>> FILE 2>&1
The >> FILE appends stdout to FILE, and 2>&1 then sends stderr to the same place stdout points, i.e. to FILE as well. (Note the order: the file redirection has to come first, otherwise stderr would still go to the terminal.)
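Applying that to the start command from the question, a minimal sketch (same ${BASE} layout as above, nothing else changed):
nohup connect-standalone ${BASE}/connect-standalone.properties \
    ${BASE}/FluentdSourceConnector.properties >> ${BASE}/stdout.log 2>&1 &
With an append-mode file descriptor, logrotate's copytruncate should also behave as expected: after truncation the application keeps writing at the new end of the file instead of leaving a sparse gap at the old offset.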
I was trying to redirect the top command output to a particular file every 5 minutes with the command below:
top -b -n 1 > /var/tmp/TOP_USAGE.csv.$(date +"%I-%M-%p_%d-%m-%Y")
-rw-r--r-- 1 root root 0 Dec 9 17:20 TOP_USAGE.csv.05-20-PM_09-12-2015
-rw-r--r-- 1 root root 0 Dec 9 17:25 TOP_USAGE.csv.05-25-PM_09-12-2015
-rw-r--r-- 1 root root 0 Dec 9 17:30 TOP_USAGE.csv.05-30-PM_09-12-2015
-rw-r--r-- 1 root root 0 Dec 9 17:35 TOP_USAGE.csv.05-35-PM_09-12-2015
Hence I made a very small (one-line) shell script for this, so that I can run it every 5 minutes via a cron job.
The problem is that when I run this script manually I can see the output in the file; however, when the script runs automatically, the file is created every 5 minutes but there is no data in it (i.e. the file is empty).
Can anyone please help me on this?
I have now modified the script and it still behaves the same.
#!/bin/sh
PATH=$(/usr/bin/getconf PATH)
/usr/bin/top -b -n 1 > /var/tmp/TOP_USAGE.csv.$(date +"%I-%M-%p_%d-%m-%Y")
I met the same problem as you.
The -b option must be added to the top command, and the top output should be saved to a variable before using it.
The script is below:
date >> /tmp/mysql-mem-moniter.log
MEM=$(/usr/bin/top -b -n 1 -u mysql)
echo "$MEM" | grep mysql >> /tmp/mysql-mem-moniter.log
Most likely the environment passed to your script from cron is too minimal. In particular, PATH may not be what you think it is (no profiles are read by scripts started from cron).
Place PATH=$(/usr/bin/getconf PATH) at the start of your script, then run it with
/usr/bin/env -i /path/to/script
Once that works without error, it's ready for cron.
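A hedged sketch of the matching crontab entry for the every-5-minutes schedule from the question (the script path is just an assumption; point it at wherever your one-line script lives):
# /usr/local/bin/top_usage.sh is a hypothetical path to the script above
*/5 * * * * /usr/local/bin/top_usage.sh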
Is there a way to find what path a command has had its output redirected to (if it has been)?
I tried using:
ps -p PID -o cmd
Thinking I could look for a > and extract the path from that, but the output doesn't have that part in it. I'm pretty sure it hasn't just been truncated.
You can use the proc file system, /proc/self/fd, for this:
readlink /proc/self/fd/1
for stdout or 2 for stderr.
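A quick way to see it in action, assuming a throwaway file such as /tmp/out: on an interactive shell fd 1 resolves to the terminal, while inside a redirection it resolves to the target file.
$ readlink /proc/self/fd/1
/dev/pts/0
$ (readlink /proc/self/fd/1) > /tmp/out
$ cat /tmp/out
/tmp/out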
If you know the PID, just inspect /proc/ID/fd/1. It should be linked to the actual path:
$ watch date > /tmp/1 &
[1] 27346
$ ls -l /proc/27346/fd/1
l-wx------ 1 choroba users 64 2013-02-15 16:28 /proc/27346/fd/1 -> /tmp/1
Use the lsof (list open files) command to see what files a process has open for writing.
For example:
$ lsof -p 31714
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
bash 31714 dogbane 0u CHR 136,4 6 /dev/pts/4
bash 31714 dogbane 1w REG 8,1 15 2032202 /tmp/t
The w in the FD (file descriptor) column means that /tmp/t is open for writing.
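If you only care about standard output, lsof can be narrowed to a single descriptor; a small sketch reusing the PID from above (-a ANDs the selection criteria, -d picks the descriptor):
$ lsof -a -p 31714 -d 1
COMMAND   PID     USER   FD   TYPE DEVICE SIZE    NODE NAME
bash    31714 dogbane    1w   REG    8,1   15 2032202 /tmp/t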
How about this?
[root@us04 ~]# ls -l /proc/14170/exe
lrwxrwxrwx 1 root root 0 Feb 15 10:36 /proc/14170/exe -> /usr/sbin/httpd
One more example:
[root@us04 ~]# readlink -f /proc/5352/exe
/sbin/syslogd
I have a script at /etc/cron.daily/backup.sh. The file is executable, but the job never runs. I read the manual and searched around, but could not work out a solution.
ls -l /etc/cron.daily/
total 52
-rwxr-xr-x 1 root root 8686 2009-04-17 10:27 apt
-rwxr-xr-x 1 root root 314 2009-02-10 19:45 aptitude
-rwxr-xr-x 1 root root 103 2011-05-22 19:08 backup.sh
-rwxr-xr-x 1 root root 502 2008-11-05 03:43 bsdmainutils
-rwxr-xr-x 1 root root 89 2009-01-27 00:55 logrotate
-rwxr-xr-x 1 root root 954 2009-03-19 16:17 man-db
-rwxr-xr-x 1 root root 646 2008-11-05 03:37 mlocate
The cron job filename can't have a period in it on certain Ubuntu versions. See this. In particular, this quote from it:
Although the directories contain periods in their names, run-parts
will not accept a file name containing a period and will fail silently
when encountering them
Properly speaking, this is a problem with run-parts, which Ubuntu's cron runs, and not with cron itself. Still, it's what bit me.
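A quick way to confirm and fix this, sketched for the backup.sh script from the question (run-parts --test lists the scripts it would run without executing them):
run-parts --test /etc/cron.daily          # backup.sh is missing from the list
mv /etc/cron.daily/backup.sh /etc/cron.daily/backup
run-parts --test /etc/cron.daily          # the renamed script now shows up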
Please check:
1a) Is the script executable, and does it have the correct owner/group settings?
1b) Does it start with the correct #! line, and do you specify the full path to the shell you're using,
e.g. #!/bin/bash?
2) Does the script generate an error while being executed?
e.g. Can you write to a log file from it, and do you see the log messages?
Also: check the email inbox of the user who owns the crontab -- errors are emailed to that user, e.g. root.
What does the output of ls -l /etc/cron.daily/ look like? Can you post that?
NOTE:
You can always create a crontab entry for this yourself, outside of that cron.xxx directory structure ;-)
See: man 5 crontab
10 1 * * * /somewhere/backup.sh >> /somewhere/backup.log 2>&1
This has the advantage that you get to pick the exact time when it runs (e.g. 1:10am here), and you can redirect STDERR and STDOUT to append to a log file for that particular script.
For testing purposes you could run it every 10 minutes, like this:
0,10,20,30,40,50 * * * * /somewhere/backup.sh >> /somewhere/backup.log 2>&1
Do touch /somewhere/backup.log first to make sure it exists.
In the shell you can do redirection (>, <, etc.), but how about AFTER a program is started?
Here's how I came to ask this question: a program running in the background of my terminal keeps outputting annoying text. It's an important process, so I have to open another shell to avoid the text. I'd like to be able to >/dev/null it or apply some other redirection so I can keep working in the same shell.
Short of closing and reopening your tty (i.e. logging off and back on, which may also terminate some of your background processes in the process) you only have one choice left:
attach to the process in question using gdb, and run:
p dup2(open("/dev/null", 0), 1)
p dup2(open("/dev/null", 0), 2)
detach
quit
e.g.:
$ tail -f /var/log/lastlog &
[1] 5636
$ ls -l /proc/5636/fd
total 0
lrwx------ 1 myuser myuser 64 Feb 27 07:36 0 -> /dev/pts/0
lrwx------ 1 myuser myuser 64 Feb 27 07:36 1 -> /dev/pts/0
lrwx------ 1 myuser myuser 64 Feb 27 07:36 2 -> /dev/pts/0
lr-x------ 1 myuser myuser 64 Feb 27 07:36 3 -> /var/log/lastlog
$ gdb -p 5636
GNU gdb 6.8-debian
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Attaching to process 5636
Reading symbols from /usr/bin/tail...(no debugging symbols found)...done.
Reading symbols from /lib/librt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/librt.so.1
Reading symbols from /lib/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/libpthread.so.0...(no debugging symbols found)...done.
[Thread debugging using libthread_db enabled]
[New Thread 0x7f3c8f5a66e0 (LWP 5636)]
Loaded symbols for /lib/libpthread.so.0
Reading symbols from /lib/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
(no debugging symbols found)
0x00007f3c8eec7b50 in nanosleep () from /lib/libc.so.6
(gdb) p dup2(open("/dev/null",0),1)
[Switching to Thread 0x7f3c8f5a66e0 (LWP 5636)]
$1 = 1
(gdb) p dup2(open("/dev/null",0),2)
$2 = 2
(gdb) detach
Detaching from program: /usr/bin/tail, process 5636
(gdb) quit
$ ls -l /proc/5636/fd
total 0
lrwx------ 1 myuser myuser 64 Feb 27 07:36 0 -> /dev/pts/0
lrwx------ 1 myuser myuser 64 Feb 27 07:36 1 -> /dev/null
lrwx------ 1 myuser myuser 64 Feb 27 07:36 2 -> /dev/null
lr-x------ 1 myuser myuser 64 Feb 27 07:36 3 -> /var/log/lastlog
lr-x------ 1 myuser myuser 64 Feb 27 07:36 4 -> /dev/null
lr-x------ 1 myuser myuser 64 Feb 27 07:36 5 -> /dev/null
You may also consider:
using screen; screen provides several virtual TTYs you can switch between without having to open new SSH/telnet/etc. sessions
using nohup; this allows you to close and reopen your session without losing any background processes in the... process.
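If you go the nohup route for the next run, a minimal sketch (the command and log path are just placeholders) that appends output so the session can be closed safely:
nohup long_running_command >> /tmp/output.log 2>&1 &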
This will do:
strace -ewrite -p $PID
It's not that clean (it shows lines like: write(#, <text you want to see>)), but it works!
You might also dislike the fact that arguments are abbreviated. To control that, use the -s parameter, which sets the maximum length of strings displayed.
It catches all streams, so you might want to filter that somehow:
strace -ewrite -p $PID 2>&1 | grep "write(1"
shows only descriptor 1 calls. 2>&1 is to redirect STDERR to STDOUT, as strace writes to STDERR by default.
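Combining the two, a small sketch that raises the string limit and keeps only writes to descriptor 1 (1024 is an arbitrary example value):
strace -ewrite -s 1024 -p $PID 2>&1 | grep "write(1"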
Redirect output from a running process to another terminal, file, or screen:
tty
ls -l /proc/20818/fd
gdb -p 20818
Inside gdb:
p close(1)
p open("/dev/pts/4", 1)
p close(2)
p open("/tmp/myerrlog", 1)
q
Detach a running process from the bash terminal and keep it alive:
[Ctrl+z]
bg %1 && disown %1
[Ctrl+d]
Explanation:
20818 - just an example of running process PID
p - print result of gdb command
close(1) - close standard output
/dev/pts/4 - terminal to write to
close(2) - close error output
/tmp/myerrlog - file to write to
q - quit gdb
bg %1 - run stopped job 1 on background
disown %1 - detach job 1 from terminal
[Ctrl+z] - stop the running process
[Ctrl+d] - exit terminal
riffing off vladr's (and others') excellent research:
create the following two files in the same directory, something in your path, say $HOME/bin:
silence.gdb, containing (from vladr's answer):
p dup2(open("/dev/null",0),1)
p dup2(open("/dev/null",0),2)
detach
quit
and silence, containing:
#!/bin/sh
if [ "$0" -a "$1" ]; then
gdb -p $1 -x $0.gdb
else
echo Must specify PID of process to silence >&2
fi
chmod +x ~/bin/silence # make the script executable
Now, next time you forget to redirect firefox, for example, and your terminal starts getting cluttered with the inevitable "(firefox-bin:5117): Gdk-WARNING **: XID collision, trouble ahead" messages:
ps # look for process xulrunner-stub (in this case we saw the PID in the error above)
silence 5117 # run the script, using PID we found
You could also redirect gdb's output to /dev/null if you don't want to see it.
Not a direct answer to your question, but it's a technique I've been finding useful over the last few days: Run the initial command using 'screen', and then detach.
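For example, a minimal sketch (the session name and command are placeholders):
screen -dmS mysession long_running_command   # start the command detached in a named session
screen -r mysession                          # reattach later to watch its output
# press Ctrl-a d to detach again without stopping the process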
This is a bash script based on the previous answers; it redirects the log file of a running process and is used as a postrotate script in the logrotate process.
#!/bin/bash
# PID of the running application, read from its pid file
pid=$(cat /var/run/app/app.pid)
logFile="/var/log/app.log"

reloadLog()
{
    if [ "$pid" = "" ]; then
        echo "invalid PID"
    else
        # attach with gdb, close the old stdout/stderr and reopen them on the log file
        gdb -p $pid >/dev/null 2>&1 <<LOADLOG
set scheduler-locking on
p close(1)
p open("$logFile", 1)
p close(2)
p open("$logFile", 1)
q
LOADLOG
        # report where file descriptor 1 now points
        LOG_FILE=$(ls /proc/${pid}/fd -l | fgrep " 1 -> " | awk '{print $11}')
        echo "log file set to $LOG_FILE"
    fi
}
reloadLog
Update: for gdb v7.11 and later, set scheduler-locking on (or one of the other options mentioned here) is required, because after attaching, gdb does not stop all running threads, and you may not be able to close/reopen your log file because it is still in use.
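For completeness, a hedged sketch of how such a script could be hooked into a logrotate stanza; the log path mirrors the $logFile variable above, and the script location is an assumption:
/var/log/app.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    postrotate
        # hypothetical path to the reload script shown above
        /usr/local/bin/reload-app-log.sh
    endscript
}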
Dupx is a simple *nix utility to redirect standard output/input/error of an already running process.
https://www.isi.edu/~yuri/dupx/
You can use reredirect (https://github.com/jerome-pouiller/reredirect/).
Type
reredirect -m FILE PID
and outputs (standard and error) will be written in FILE.
The reredirect README also explains how to restore the original state of the process, how to redirect to another command, and how to redirect only stdout or stderr.
reredirect also provides a script called relink that redirects output back to the current terminal:
relink PID
relink PID | grep useful_content
(reredirect seems to have the same features as Dupx, described in another answer, but it does not depend on gdb.)