I've set up Sendmail so that all messages are delivered to /dev/null instead of being stored anywhere else. I'm trying to reduce the number of unnecessary disk writes, and since those messages are essentially discarded I'd like to skip writing them to mqueue if possible. Is there any way to do that?
The closest I could think of is mounting a nullfs filesystem on the mqueue directory, but I'd like a "cleaner" approach using sendmail only. Is this possible?
Thanks!
Most likely you have chosen the wrong way to solve your problem, but anyway:
You can select the discard mailer for all recipients in the check_rcpt (Local_check_rcpt) rule set. It will act as the equivalent of DISCARD in the access table.
Add the following lines to the sendmail.mc file, generate a new sendmail.cf file, and restart or HUP the sendmail daemon.
LOCAL_RULESETS
SLocal_check_rcpt
# PUT TAB (\t) BEFORE $# !!!
R$* $#discard $: discard
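After regenerating sendmail.cf you can sanity-check the ruleset in sendmail's address-test mode before relying on it (the address below is just an example); the rewrite should end in the $#discard mailer:
# sendmail -bt
> Local_check_rcpt someone@example.com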
The following command does not append but replaces the content
echo 0 >> /sys/block/nvme0n1/queue/nomerges
I don't want to replace, I want to append. But I'm curious: is there something special about this file?
It also doesn't accept more than one character as input.
Look at https://serverfault.com/questions/865787/what-does-the-nomerge-mean-in-linux-system
It should help you understand that there are only 3 values this file can take. There is indeed something special about it: it is a sysfs attribute rather than a regular file, so every write is handed to the kernel and interpreted as the new setting, which is why appending makes no sense here.
From the kernel documentation:
nomerges enables the user to disable the lookup logic involved with IO
merging requests in the block layer. By default (0) all merges are
enabled. When set to 1 only simple one-hit merges will be tried. When
set to 2 no merge algorithms will be tried (including one-hit or more
complex tree/hash lookups).
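So rather than appending, you just write one of the three allowed values (as root); each write replaces the previous setting:
echo 0 > /sys/block/nvme0n1/queue/nomerges   # default: all merges enabled
echo 1 > /sys/block/nvme0n1/queue/nomerges   # only simple one-hit merges
echo 2 > /sys/block/nvme0n1/queue/nomerges   # no merge algorithms tried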
There is quite a common issue in the Unix world: when you start a process with parameters, one of them being sensitive, other users can read it just by executing ps -ef (for example mysql -u root -p secret_pw).
The most frequent recommendation I found was simply not to do that: never run processes with sensitive parameters, and instead pass that information some other way.
However, I found that some processes are able to change their parameter line after they have processed the parameters, so they look, for example, like this in the process list:
xfreerdp -decorations /w:1903 /h:1119 /kbd:0x00000409 /d:HCG /u:petr.bena /parent-window:54526138 /bpp:24 /audio-mode: /drive:media /media /network:lan /rfx /cert-ignore /clipboard /port:3389 /v:cz-bw47.hcg.homecredit.net /p:********
Note the /p:******** parameter, where the password has somehow been masked.
How can I do that? Is it possible for a process on Linux to alter the argument list it received? I assume that simply overwriting the char **args I get in the main() function wouldn't do the trick. I suppose that maybe changing some files in the /proc pseudo-filesystem might work?
"hiding" like this does not work. At the end of the day there is a time window where your password is perfectly visible so this is a total non-starter, even if it is not completely useless.
The way to go is to pass the password in an environment variable.
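A minimal sketch of that, reusing the mysql example from the question (MYSQL_PWD is the MySQL client's own environment variable for this; check your program's documentation for its equivalent). Unlike /proc/<pid>/cmdline, a process's environment is not readable by other unprivileged users:
# read the secret without echoing it, then hand it over via the environment, not argv
read -r -s -p "Password: " DB_PW; echo
export MYSQL_PWD="$DB_PW"
mysql -u root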
I would like to capture all the commands fired by a user in a session. This is needed for the purpose of auditing.
I used something like the following:
LoggedIn=`date +"%B-%d-%Y-%M:%H"`
HostName=`hostname`
UNIX_USER=`who am i | cut -d " " -f 1`
echo " Please enter a Change Request Number for which you are looging in : "
read CR_NUMBER
FileName=$HostName-$LoggedIn-$CR_NUMBER-$UNIX_USER
script $FileName
I have put this snippet in the .profile file, so that as soon as the user logs in to an SU account it creates the file. The plan is to push this file to a central repository where an auditor can look into those files.
But there are a couple of problems with this.
The script command spools all the data from the session. For example, if a user cats a property file, all of that file's content gets appended to the auditing file.
Unless the user runs the exit command, the data is not spooled to the auditing file at all; if the user happens to log out without running exit, the auditing file will be empty.
Is there any better solution for auditing? The history file is not an option, since it does not tell me for which Change Request number (internal to my organisation) the commands were fired. Is there any other way to capture only the commands that were fired, but not their output?
Some of the previous discussions are here and here.
I think this software exactly matches your need:
https://github.com/a2o/snoopy
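If installing extra software is not an option, a rough bash-only sketch (assuming your login shell is bash and that shipping the records to syslog via logger is acceptable) can log each command, without its output, together with the change request number:
# in .profile, after CR_NUMBER has been read as in your snippet
export CR_NUMBER
export PROMPT_COMMAND='logger -t cmd-audit "cr=$CR_NUMBER user=$(whoami) cmd=$(history 1 | sed "s/^ *[0-9]* *//")"'
Each command then ends up in syslog tagged with the CR number, and your central syslog host becomes the repository the auditor reads.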
I'd like to get some ideas from you on how to implement this. Let me explain my problem a little bit:
Scenario:
We have a system that must have some specific ACLs set in order to run. So, before starting it, it would be great if I could run a sort of pre-check to verify that everything is set correctly.
Goal:
Create a script that checks those ACLs before starting the system and alerts if one of them is wrong, based on a list of files/folders and their ACLs.
Problems:
Since the getfacl result is not a simple return value, the only way I found to do such a check was to parse the result and analyse each piece of it, which is not as elegant as I'd like it to be.
I doubt many of you have had to do this kind of ACL check, but I'm sure you can contribute to my cause :)
Thanks everybody in advance
How about using the Python module pylibacl?
>>> import posix1e
>>> acl1 = posix1e.ACL(file="file1.txt")
>>> print acl1
user::rw-
group::r--
other::r--
Since the getfacl result is not a simple return value, the only way I found to do such a check was to parse the result and analyse each piece of it, which is not as elegant as I'd like it to be.
What exactly are you trying to do? If you're just comparing the result of calling getfacl to a desired ACL, it should be easy. For example, assuming that you have stored your desired ACL in a file named acl-i-want, you could do something like this:
getfacl /path > acl-i-have
if ! diff -q acl-i-have acl-i-want; then
echo "ACLs are different."
fi
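Extending the same idea to a list of paths is straightforward; here is a sketch (the /etc/acl-baselines layout below is purely illustrative, with one baseline file per path saved earlier via getfacl):
status=0
while IFS= read -r path; do
    baseline="/etc/acl-baselines/$(printf '%s' "$path" | tr / _)"
    # compare the live ACL against the saved baseline
    if ! getfacl --absolute-names "$path" | diff -q - "$baseline" >/dev/null; then
        echo "ACL mismatch on $path" >&2
        status=1
    fi
done < /etc/acl-baselines/paths.list
exit $status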
I've got a question about program architecture.
Say you've got 100 different log files with different formats and you need to parse and put that info into an SQL database.
My view of it is something like this:
Use a general config file like:
program1->name1("apache",/var/log/apache.log) (modulename,path to logfile1)
program2->name2("exim",/var/log/exim.log) (modulename,path to logfile2)
....
sqldb->configuration
Use something like a module (one file per program), e.g. type1.module containing the regexp, the log structure (some variables), and the SQL (tables and functions).
Fork or thread processes (I don't know which is better on Linux these days) for the different programs.
So the question is: is my view of this correct? Should I use one module per program (web/MTA/iptables), or is there a better way? I think some regexps would be the same, like date/time/IP/URL. What should I do with those? Or what have I missed?
Example: an exim4 MTA main log entry:
2011-04-28 13:16:24 1QFOGm-0005nQ-Ig <= exim@mydomain.org.ua H=localhost (exim.mydomain.org.ua) [127.0.0.1]:51127 I=[127.0.0.1]:465 P=esmtpsa X=TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32 CV=no A=plain_server:spam S=763 id=1303985784.4db93e788cb5c@mydomain.org.ua T="test" from <exim@exim.mydomain.org.ua> for test@domain.ua
The highlighted parts are already parsed and will be put into the sqldb.incoming table. Right now I have a structure in Perl to hold every parsed variable, like $exim->{timestamp} or $exim->{host}->{ip}.
My program will do something like tail -f /file and parse it line by line.
Flexibility: let's say I want to add support for the Apache server (just timestamp, user IP and file downloaded). All I need to know is which logfile to parse, what the regexp should be, and what the SQL structure should be. So I'm planning to have this as a module: just fork or thread the main process with parameters (logfile, filetype). Maybe later I would add some options for what not to parse (maybe some log level is low and you just don't see much there).
I would do it like this:
Create a config file that is formatted like this: appname:logpath:logformatname
Create a collection of Perl classes that inherit from a base parser class.
Write a script which loads the config file and then loops over its contents, passing each iteration to its appropriate handler object.
If you want an example of steps 1 and 2, we have one on our project. See MT::FileMgr and MT::FileMgr::* here.
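For step 1, the config file could be as simple as this (paths and format names are made up for illustration, reusing the ones from the question):
apache:/var/log/apache.log:apache_combined
exim:/var/log/exim.log:exim_main
iptables:/var/log/kern.log:iptables_log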
The log-monitoring tool wots could do a lot of the heavy lifting for you here. It runs as a daemon, watching as many log files as you could want, running any combination of perl regexes over them and executing something when matches are found.
I would be inclined to modify wots itself (which its licence freely allows) to support a database write method - have a look at its existing handle_* methods.
Most of the hard work has already been done for you, and you can tackle the interesting bits.
I think File::Tail is a nice fit.
You can make an array of File::Tail objects and poll them with select like this:
use File::Tail;

# @files holds the File::Tail objects (e.g. built from your config file);
# $timeout is the select timeout in seconds
while (1) {
    my ($nfound, $timeleft, @pending) =
        File::Tail::select(undef, undef, undef, $timeout, @files);
    unless ($nfound) {
        # timeout - do something else here, if you need to
    } else {
        foreach (@pending) {
            # here you can handle log messages depending on filename
            print $_->{"input"}." (".localtime(time).") ".$_->read;
        }
    }
}
(adapted from the File::Tail documentation)