procmail disregards /etc/group? - linux

sample procmailrc:
SHELL=/bin/bash
LOGFILE=$HOME/procmail.log
VERBOSE=yes
:0
* ^Subject: envdump please$
{
LOG="`id`"
:0
/dev/null
}
The /etc/group file contains (note: the other usernames are vain attempts to make this work):
someuser:x:504:
s3:x:505:someuser,someotheruser,postfix,postdrop,mail,root
If I run the id command as "someuser":
[someuser@lixyz-pqr ~]$ id
uid=504(someuser) gid=504(someuser) groups=504(someuser),505(s3)
However, when I run procmail by sending an email with the subject "envdump please", the 505/s3 group disappears (this is from procmail.log):
procmail: [17618] Mon Dec 19 17:39:50 2011
procmail: Match on "^Subject: envdump please$"
procmail: Executing "id"
procmail: Assigning "LOG=uid=504(someuser) gid=504(someuser) groups=504(someuser)"
uid=504(someuser) gid=504(someuser) groups=504(someuser)procmail: Assigning "LASTFOLDER=/dev/null"
This server is running Fedora 14 with Postfix 2.7.5.

Procmail wasn't installed setuid.
For background, it should look like:
[root@li321-238 postfix]# ls -l /usr/bin/procmail
-rwsr-sr-x. 1 root mail 92816 Jul 28 2009 /usr/bin/procmail
which you can set up via:
chmod ug+s /usr/bin/procmail
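Once the bits are restored, a quick way to confirm the fix is to re-send the test message and check the recipe's log line (a hedged sketch; it assumes a local mail(1) command is available and uses the sample procmailrc above):
echo test | mail -s "envdump please" someuser
grep 'Assigning "LOG=' ~someuser/procmail.log | tail -n 1
# the supplementary group should now show up, e.g.
# LOG=uid=504(someuser) gid=504(someuser) groups=504(someuser),505(s3)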

Related

How do I properly prevent systemd suspend using a script in /lib/systemd/system-sleep/

I'm fairly new to Linux and trying to learn. I'm using Plex Media Server and I'm trying to prevent the system from sleeping while streaming a file. I've searched the internet over the last few days and none of the solutions seem to work. One solution I feel is almost getting me there, but it's not quite working. Here is the script I've placed (and made executable) in /lib/systemd/system-sleep/ (based on this site).
#!/bin/bash
PATH=/sbin:/usr/sbin:/bin:/usr/bin
X_DISPLAY_USERNAME=myusername
plexresume()
{
    number_of_sessions=$(curl -s localhost:32400/status/sessions? | sed -n "s/.*MediaContainer size=\"\(.*\)\".*/\1/p")
    if [ ${number_of_sessions} -gt 0 ]; then
        echo "[$(date +"%Y.%m.%d-%T")] Number of streamers = ${number_of_sessions} Plex session active, cancel suspend" >> /tmp/plex_sleep_log
        return 1
    else
        echo "[$(date +"%Y.%m.%d-%T")] Number of streamers = ${number_of_sessions} Plex session -IN-active, going to sleep now zzzzzzzz........" >> /tmp/plex_sleep_log
        return 0
    fi
}
plexkeepalive()
{
    echo "[$(date +"%Y.%m.%d-%T")] Resuming!!..." >> /tmp/plex_sleep_log
    su ${X_DISPLAY_USERNAME} -c "DISPLAY=:0 /usr/bin/xdotool getmouselocation | grep 'x:1 y:1 '" > /dev/null
    if [ "$?" == "0" ]; then
        su ${X_DISPLAY_USERNAME} -c "DISPLAY=:0 /usr/bin/xdotool mousemove 9 9"
    else
        su ${X_DISPLAY_USERNAME} -c "DISPLAY=:0 /usr/bin/xdotool mousemove 1 1"
    fi
    return 0
}
case "$1" in
    pre)
        plexresume
        ;;
    post)
        plexkeepalive
        ;;
esac
I know it's executing because it's printing to the log file. But even when it logs that Plex is active, the system still suspends. I've manually run the script with sudo outside of systemd and checked the value of $? afterwards, which is 1.
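For reference, the manual test looks something like this (a hedged sketch; the hook path is taken from the journal output below):
sudo /usr/lib/systemd/system-sleep/plexkeepalive pre
echo $?   # 1 while a Plex session is active (plexresume returns 1), 0 when no session is active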
When I use
journalctl -b -u systemd-suspend.service
I see the following:
Sep 26 12:47:03 systemname systemd[1]: Starting System Suspend...
Sep 26 12:47:03 systemname [141890]: /usr/lib/systemd/system-sleep/plexkeepalive failed with exit status 1.
Sep 26 12:47:03 systemname systemd-sleep[141887]: Entering sleep state 'suspend'...
One time I got a successful result, but I'm not sure how it happened:
Sep 26 10:51:45 systemname systemd-sleep[112491]: Entering sleep state 'suspend'...
Sep 26 10:52:25 systemname systemd-sleep[112491]: Failed to put system to sleep. System resumed again: Device or resource busy
Sep 26 10:52:25 systemname su[112556]: (to myusername) root on none
Sep 26 10:52:25 systemname su[112556]: pam_unix(su:session): session opened for user myusername(uid=1000) by (uid=0)
Sep 26 10:52:25 systemname su[112556]: pam_unix(su:session): session closed for user myusername
Sep 26 10:52:25 systemname su[112592]: (to myusername) root on none
Sep 26 10:52:25 systemname su[112592]: pam_unix(su:session): session opened for user myusername(uid=1000) by (uid=0)
Sep 26 10:52:25 systemname su[112592]: pam_unix(su:session): session closed for user myusername
Sep 26 10:52:25 systemname systemd[1]: systemd-suspend.service: Main process exited, code=exited, status=1/FAILURE
Sep 26 10:52:25 systemname systemd[1]: systemd-suspend.service: Failed with result 'exit-code'.
Sep 26 10:52:25 systemname systemd[1]: Failed to start System Suspend.
Sep 26 10:52:25 systemname systemd[1]: systemd-suspend.service: Consumed 2.732s CPU time.
Any help on this issue would be appreciated. I don't understand why returning 1 from the script is not preventing systemd-suspend.service from running. Thank you!

Cron script executed but no output

I've looked at many different topics and did not find an answer to my problem. I created two bash scripts on my Ubuntu server and I'm trying to execute them periodically. They seem to be running, but they produce nothing. They are executable:
drwxr-xr-x 14 root root 4096 Mar 14 18:02 ..
-rwxr-xr-x 1 root root 2623 Apr 16 21:18 backup.pl
-rw-r--r-- 1 root root 87066352 May 10 21:37 full_site_backup-10-4-2018.tar.gz
-rwxr-xr-x 1 root root 530 May 11 20:21 checkHealth.sh
drwxr-xr-x 2 root root 4096 May 11 20:35 .
So here is one of my scripts:
#!/bin/bash
# log stdout and stderr to two different files
exec >>/var/log/test.log 2>>/var/log/test.err.log
# ...and log every command we try to execute to stderr (aka test.err.log)
# set -x
CODE=$(curl -s -o /dev/null -I -A "myuseragent" -w "%{http_code}" https://www.xxxxxxxxxxxxx.xxx/xxxx)
DATE=$(date)
if [ $CODE -gt 300 ]
then
    service mysql restart
    service tomcat8 restart
    >&2 echo "$DATE - KO !!!!!! return code $CODE"
else
    echo "$DATE - OK, code $CODE"
fi
and here is my sudo crontab -e:
# m h dom mon dow command
0 2 * * * root /usr/bin/perl /var/backup/backup.pl
* * * * * root /bin/sh /var/backup/checkHealth.sh
and here is my sudo tail -f /var/log/cron.log:
May 11 20:39:01 ns381471 CRON[10778]: (root) CMD (root /bin/sh /var/backup/checkHealth.sh)
May 11 20:39:26 ns381471 crontab[10823]: (root) BEGIN EDIT (root)
May 11 20:40:01 ns381471 CRON[10880]: (root) CMD (root /bin/sh /var/backup/checkHealth.sh)
May 11 20:40:01 ns381471 CRON[10879]: (root) CMD (/usr/local/rtm/bin/rtm 2 > /dev/null 2> /dev/null)
May 11 20:40:30 ns381471 crontab[10823]: (root) END EDIT (root)
May 11 20:41:01 ns381471 CRON[10974]: (root) CMD (/usr/local/rtm/bin/rtm 2 > /dev/null 2> /dev/null)
May 11 20:41:01 ns381471 CRON[10975]: (root) CMD (root /bin/sh /var/backup/checkHealth.sh)
May 11 20:42:01 ns381471 CRON[11070]: (root) CMD (/usr/local/rtm/bin/rtm 2 > /dev/null 2> /dev/null)
May 11 20:42:01 ns381471 CRON[11071]: (root) CMD (root /bin/sh /var/backup/checkHealth.sh)
Any help would be appreciated. Thanks
OK, for those who are in the same situation, my crontab was:
# m h dom mon dow command
0 2 * * * root /usr/bin/perl /var/backup/backup.pl
* * * * * root /bin/sh /var/backup/checkHealth.sh
It appears that I just copied the syntax you can find in many cron tutorials about editing /etc/crontab, but in my case I did a sudo crontab -e, so I did not have to specify the user (root). The working entries were:
# m h dom mon dow command
0 2 * * * /usr/bin/perl /var/backup/backup.pl
* * * * * /bin/sh /var/backup/checkHealth.sh
or, simpler:
# m h dom mon dow command
0 2 * * * /var/backup/backup.pl
* * * * * /var/backup/checkHealth.sh
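For clarity, here is a hedged illustration of what the broken entries actually did, and one way to make such failures visible (the MAILTO address is just an example, not from the original crontab):
# In a per-user crontab (sudo crontab -e) the first word after the five time
# fields is the command itself, so cron effectively ran:
#     root /bin/sh /var/backup/checkHealth.sh
# i.e. the shell looked for a program named "root" and failed with
# "root: command not found"; that error only goes to the owner's local mail,
# if an MTA is configured at all. Setting MAILTO makes such failures easy to spot:
MAILTO=admin@example.com
* * * * * /bin/sh /var/backup/checkHealth.sh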

Unable to run cron job with standard user account

The same job can be executed as root, but it can't be executed as a standard user.
Is it a permission problem, or do I need to change something? I have no idea.
Thanks
SunOS 5.10 Generic_150400-30 sun4v sparc SUNW,SPARC-Enterprise-T5120
Command:
1) log in as root
2) crontab -l
* * * * * /usr/bin/date > /tmp/root.log
3) /tmp/root.log is created
1) log in as a non-root user
2) crontab -l
* * * * * /usr/bin/date > /tmp/non-root.log
3) /tmp/non-root.log is not created
The following permissions are OK for the binary file date
-bash-3.2# ls -l /usr/bin/date
-r-xr-xr-x 1 root bin 11056 Jan 22 2005 /usr/bin/date
-bash-3.2#
If the permissions are OK, check your cron log in the /var/cron/log file:
-bash-3.2# tail /var/cron/log
< root 24592 c Fri Oct 20 18:50:21 2017
> CMD: /usr/bin/date > /tmp/non-root.log
> user 25192 c Fri Oct 20 18:51:00 2017
< user 25192 c Fri Oct 20 18:51:00 2017
> CMD: /scripts/collectdata.sh > /dev/null 2>&1
> root 25769 c Fri Oct 20 18:52:00 2017
< root 25769 c Fri Oct 20 18:52:00 2017
> CMD: /scripts/collectdata.sh > /dev/null 2>&1
> root 26853 c Fri Oct 20 18:54:00 2017
< root 26853 c Fri Oct 20 18:54:00 2017
-bash-3.2#
Thanks all, I finally found the issue.
The reason is that the non-root account was locked out; I think maybe someone made many failed login attempts, which locked it.
After I ran passwd -u "Account", the job runs as expected. Thanks~
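For anyone hitting the same thing, a hedged way to check for and clear a locked account on Solaris 10 (replace "user" with the real account name):
# as root: "LK" in the second field means the account is locked
passwd -s user
# user  LK
passwd -u user    # unlock; the user's cron jobs should start running again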

How to monitor newly created file in a directory with bash?

I have a log directory containing a bunch of log files; one log file is created each time a system event happens. I want to write a one-line bash script that constantly monitors the file list and displays the content of any newly created file on the terminal.
Currently, all I have is something that displays the content of the whole directory:
for f in *; do cat $f; done
It lacks the monitoring feature that I want. One limitation of my system is that I do not have the watch command. I also don't have any package manager to install fancy tools; raw BSD is all I have. I do have tail, and I was thinking of something like tail -F $(ls), but this tails each file instead of the file list.
In summary, I want to modify my script such that I can monitor the content of all newly created files.
First approach - use a hidden file in your dir (in my example it is named .watch). Then your one-liner might look like:
for f in $(find . -type f -newer .watch); do cat $f; done; touch .watch
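A slightly more careful variant of the same idea (a hedged sketch; the .watch.new name is illustrative): take the new timestamp before scanning, so files created during the cat run show up on the next pass, and let find run cat itself to avoid word-splitting on filenames:
[ -f .watch ] || touch -t 197001010000 .watch   # first run: treat everything as new
touch .watch.new
find . -type f -newer .watch -exec cat {} +
mv .watch.new .watch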
Second approach - use inotify-tools: https://unix.stackexchange.com/questions/273556/when-a-particular-file-arrives-then-execute-a-procedure-using-shell-script/273563#273563
You can cram it into a one-liner if you want, but I'd recommend just running the script in the background:
#!/bin/bash
[ ! -d "$1" ] && {
    printf "error: argument is not a valid directory to monitor.\n"
    exit 1
}
while :; fname="$1/$(inotifywait -q -e modify -e create --format '%f' "$1")"; do
    cat "$fname"
done
Which will watch the directory given as the first argument, and cat any new or changed file in that directory. Example:
$ bash watchdir.sh my_logdir &
Which will then cat new or changed files in my_logdir.
Using inotifywait in monitor mode
First this little demo:
Open one terminal and run this:
ext=(php css other)
while :;do
    subname=''
    ((RANDOM%10))||printf -v subname -- "-%04x" $RANDOM
    date >/tmp/test$subname.${ext[RANDOM%3]}
    sleep 1
done
This will randomly create files named /tmp/test.php, /tmp/test.css and /tmp/test.other, but occasionally (about 1 time in 10) the name will be /tmp/test-XXXX.[css|php|other], where XXXX is a hexadecimal random number.
Open another terminal and run this:
waitPaths=(/{home,tmp})
while read file ;do
    if [ "$file" ] &&
       ( [ -z "${file##*.php}" ] || [ -z "${file##*.css}" ] ) ;then
        (($(stat -c %Y-%X $file)))||echo -n new
        echo file: $file, content:
        cat $file
    fi
done < <(
    inotifywait -qme close_write --format %w%f ${waitPaths[*]}
)
This may produce something like:
file: /tmp/test.css, content:
Tue Apr 26 18:53:19 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:21 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:23 CEST 2016
file: /tmp/test.css, content:
Tue Apr 26 18:53:25 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:27 CEST 2016
newfile: /tmp/test-420b.php, content:
Tue Apr 26 18:53:28 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:29 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:30 CEST 2016
file: /tmp/test.php, content:
Tue Apr 26 18:53:31 CEST 2016
Some explanation:
waitPaths=(/{home,tmp}) could be written waitPaths=(/home /tmp) or, for only one directory, waitPaths=/var/log
The if condition searches for filenames matching *.php or *.css
(($(stat -c %Y-%X $file)))||echo -n new compares modification and access time; they are equal for a file that was just created and not yet read, so "new" is printed
inotifywait
-q to stay quiet (don't print more than required)
-m for monitor mode: the command doesn't terminate, but prints each matching event
-e close_write reacts only to the specified kind of event
--format %w%f output format: path/file
Another way:
Here is a more sophisticated sample:
Listening for two kinds of events (CLOSE_WRITE | CREATE)
Using a list of new-file flags to know which files are new when the CLOSE_WRITE event occurs.
In the second console, hit Ctrl+C, or in a new terminal, try this:
waitPaths=(/{home,tmp})
declare -A newFiles
while read path event file; do
    if [ "$file" ] && ( [ -z "${file##*.php}" ] || [ -z "${file##*.css}" ] ); then
        if [ "$event" ] && [ -z "${event//*CREATE*}" ]; then
            newFiles[$file]=1
        else
            if [ "${newFiles[$file]}" ]; then
                unset newFiles[$file]
                echo NewFile: $file, content:
                sed 's/^/>+ /' $path/$file
            else
                echo file: $file, content:
                sed 's/^/> /' $path/$file
            fi
        fi
    fi
done < <(inotifywait -qme close_write -e create ${waitPaths[*]})
May produce something like:
file: test.css, content:
> Tue Apr 26 22:16:02 CEST 2016
file: test.php, content:
> Tue Apr 26 22:16:03 CEST 2016
NewFile: test-349b.css, content:
>+ Tue Apr 26 22:16:05 CEST 2016
file: test.css, content:
> Tue Apr 26 22:16:08 CEST 2016
file: test.css, content:
> Tue Apr 26 22:16:10 CEST 2016
file: test.css, content:
> Tue Apr 26 22:16:13 CEST 2016
Watching for new files AND new lines in old files, using bash
There is another solution using some bashisms, like associative arrays:
Sample:
declare -A files   # already-printed sizes, keyed by filename (needed for the string subscripts below)
wpath=/var/log
while : ;do
    while read -a crtfile ;do
        if [ "${crtfile:0:1}" = "-" ] &&
           [ "${crtfile[8]##*.}" != "gz" ] &&
           [ "${files[${crtfile[8]}]:-0}" -lt ${crtfile[4]} ] ;then
            printf "\e[47m## %-14s :- %(%a %d %b %y %T)T ##\e[0m\n" ${crtfile[8]} -1
            tail -c +$[1+${files[${crtfile[8]}]:-0}] $wpath/${crtfile[8]}
            files[${crtfile[8]}]=${crtfile[4]}
        fi
    done < <( /bin/ls -l $wpath )
    sleep 1
done
This will dump each file (whose name does not end in .gz) in /var/log, watch for modifications or new files, and then dump the new lines.
Demo:
In a first terminal, run:
ext=(php css other)
( while :; do
    subname=''
    ((RANDOM%10)) || printf -v subname -- "-%04x" $RANDOM
    name=test$subname.${ext[RANDOM%3]}
    printf "%-16s" $name
    {
        date +"%a %d %b %y %T" | tee /dev/fd/5
        fortune /usr/share/games/fortunes/bofh-excuses
    } >> /tmp/$name
    sleep 1
done ) 5>&1
You need to have fortune installed with the BOFH excuses library.
If you really don't have fortune, you could use this instead:
LANG=C ext=(php css other)
( while :; do
    subname=''
    ((RANDOM%10)) || printf -v subname -- "-%04x" $RANDOM
    name=test$subname.${ext[RANDOM%3]}
    printf "%-16s" $name
    {
        date +"%a %d %b %y %T" | tee /dev/fd/5
        for ((1; RANDOM%5; 1))
        do
            printf -v str %$[RANDOM&12]s
            str=${str// /blah, }
            echo ${str%, }.
        done
    } >> /tmp/$name
    sleep 1
done ) 5>&1
This may output something like:
test.css Thu 28 Apr 16 12:00:02
test.php Thu 28 Apr 16 12:00:03
test.other Thu 28 Apr 16 12:00:04
test.css Thu 28 Apr 16 12:00:05
test.css Thu 28 Apr 16 12:00:06
test.other Thu 28 Apr 16 12:00:07
test.php Thu 28 Apr 16 12:00:08
test.css Thu 28 Apr 16 12:00:09
test.other Thu 28 Apr 16 12:00:10
test.other Thu 28 Apr 16 12:00:11
test.php Thu 28 Apr 16 12:00:12
test.other Thu 28 Apr 16 12:00:13
In a second terminal, run:
declare -A files
wpath=/tmp
while :; do
    while read -a crtfile; do
        if [ "${crtfile:0:1}" = "-" ] && [ "${crtfile[8]:0:4}" = "test" ] &&
           ( [ "${crtfile[8]##*.}" = "css" ] || [ "${crtfile[8]##*.}" = "php" ] ) &&
           [ "${files[${crtfile[8]}]:-0}" -lt ${crtfile[4]} ]; then
            printf "\e[47m## %-14s :- %(%a %d %b %y %T)T ##\e[0m\n" ${crtfile[8]} -1
            tail -c +$[1+${files[${crtfile[8]}]:-0}] $wpath/${crtfile[8]}
            files[${crtfile[8]}]=${crtfile[4]}
        fi
    done < <(/bin/ls -l $wpath)
    sleep 1
done
Every second, this will:
for all entries in the watched directory,
select regular files (the first character of the ls -l output is -),
select filenames beginning with test,
select filenames ending in css or php,
compare the already-printed size with the current file size,
if the current size is greater,
print the new bytes by using tail -c and
store the new already-printed size,
then sleep 1 second.
This may output something like:
## test.css :- Thu 28 Apr 16 12:00:09 ##
Thu 28 Apr 16 12:00:02
BOFH excuse #216:
What office are you in? Oh, that one. Did you know that your building was built over the universities first nuclear research site? And wow, aren't you the lucky one, your office is right over where the core is buried!
Thu 28 Apr 16 12:00:05
BOFH excuse #145:
Flat tire on station wagon with tapes. ("Never underestimate the bandwidth of a station wagon full of tapes hurling down the highway" Andrew S. Tannenbaum)
Thu 28 Apr 16 12:00:06
BOFH excuse #301:
appears to be a Slow/Narrow SCSI-0 Interface problem
## test.php :- Thu 28 Apr 16 12:00:09 ##
Thu 28 Apr 16 12:00:03
BOFH excuse #36:
dynamic software linking table corrupted
Thu 28 Apr 16 12:00:08
BOFH excuse #367:
Webmasters kidnapped by evil cult.
## test.css :- Thu 28 Apr 16 12:00:10 ##
Thu 28 Apr 16 12:00:09
BOFH excuse #25:
Decreasing electron flux
## test.php :- Thu 28 Apr 16 12:00:13 ##
Thu 28 Apr 16 12:00:12
BOFH excuse #3:
electromagnetic radiation from satellite debris
Note: if a file is modified more than once between two checks, all the modifications will be printed at the next check.
Although not really nice, the following gives (and repeats) the last 50 lines of the newest file in the current directory:
while true; do tail -n 50 $(ls -Art | tail -n 1); sleep 5; done
You can refresh every minute using a cron job:
$ crontab -e
* * * * * /home/script.sh
If you need to refresh more often than once a minute, you can use the sleep command inside your script.
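A hedged sketch of that sleep trick (the 10-second interval and the wrapper path are illustrative): one cron entry per minute starts a wrapper that repeats the real work several times.
# crontab entry:
# * * * * * /home/script_wrapper.sh
#!/bin/bash
# /home/script_wrapper.sh - run the real script roughly every 10 seconds for one minute
for i in 1 2 3 4 5 6; do
    /home/script.sh
    sleep 10
done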

Bash script not producing desired result

I am running a cron-ed bash script to extract cache hits and bytes served per IP address. The script (ProxyUsage.bash) has two parts:
(uniqueIP.awk) find unique IPs and create a bash script to add up the hits and bytes
run the hits and bytes per IP
ProxyUsage.bash
#!/usr/bin/env bash
sudo gawk -f /home/maxg/scripts/uniqueIP.awk /var/log/squid3/access.log.1 > /home/maxg/scripts/pxyUsage.bash
source /home/maxg/scripts/pxyUsage.bash
uniqueIP.awk
{
    arrIPs[$3]++;
}
END {
    for (n in arrIPs) {
        m++;                 # count arrIPs elements
        #print "Array elements: " m;
        arrAddr[i++] = n;    # fill arrAddr with IPs
        #print i " " n;
    }
    asort(arrAddr);          # sort the array values
    for (i = 1; i <= m; i++) {   # write one command line per IP address
        #printf("#!/usr/bin/env bash\n");
        printf("sudo gawk -f /home/maxg/scripts/proxyUsage.awk -v v_Var=%s /var/log/squid3/access.log.1 >> /home/maxg/scripts/pxyUsage.txt\n", arrAddr[i])
    }
}
pxyUsage.bash
sudo gawk -f /home/maxg/scripts/proxyUsage.awk -v v_Var=192.168.1.13 /var/log/squid3/access.log.1 >> /home/maxg/scripts/pxyUsage.txt
sudo gawk -f /home/maxg/scripts/proxyUsage.awk -v v_Var=192.168.1.14 /var/log/squid3/access.log.1 >> /home/maxg/scripts/pxyUsage.txt
sudo gawk -f /home/maxg/scripts/proxyUsage.awk -v v_Var=192.168.1.22 /var/log/squid3/access.log.1 >> /home/maxg/scripts/pxyUsage.txt
The ProxyUsage.bash script runs as scheduled and creates the pxyUsage.bash script.
However, the pxyUsage.txt file is not amended with the latest values when the script runs.
So far I run pxyUsage.bash manually every day, as I cannot figure out why the result is not written to the file.
Both bash scripts are set to execute. The file permissions are below:
-rwxr-xr-x 1 maxg maxg 169 Mar 14 08:40 ProxySummary.bash
-rw-r--r-- 1 maxg maxg 910 Mar 15 17:15 proxyUsage.awk
-rwxrwxrwx 1 maxg maxg 399 Mar 17 06:10 pxyUsage.bash
-rw-rw-rw- 1 maxg maxg 2922 Mar 17 07:32 pxyUsage.txt
-rw-r--r-- 1 maxg maxg 781 Mar 16 07:35 uniqueIP.awk
Any hints appreciated. Thanks.
The sudo(8) command (with the common requiretty setting) requires a pseudo-tty, and you do not have one allocated under cron(8); you do have one when logged in the usual way.
Instead of mucking about with sudo(8), just run the script as the correct user.
If you cannot do that, then in the root crontab, do something like this:
su - username /path/to/mycommand arg1 arg2...
This will work because root can use su(1) without needing a password.
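A hedged example of such a root crontab entry (the schedule is illustrative; -c passes the command string to the login shell):
# run the usage summary as user maxg at 06:00, with no sudo and no tty involved
0 6 * * * su - maxg -c '/home/maxg/scripts/ProxyUsage.bash'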
