Monitoring and tuning transfers over Linux AF_UNIX sockets

I'm trying to set up a robust AF_UNIX data transfer.
So I ran this test (on Ubuntu 20.04):
I generate logs:
for a in `seq 1 1000000`; do echo $a | logger; echo $a ; done
I check my log file:
tail -f /var/log/syslog | less
I can see my logs logged by rsyslog in this file.
I stop rsyslog:
systemctl stop syslog.socket
No new logs arrive in /var/log/syslog, as expected.
But when I restart rsyslog:
systemctl start syslog.socket
I can see that the logs arrive again, but most of the ones created during the rsyslog outage are missing.
I tried to increase max_dgram_qlen:
echo 5000000 > /proc/sys/net/unix/max_dgram_qlen
but it doesn't change anything.
I read in this post, What is the max size of AF_UNIX datagram message in Linux?, that I can monitor the socket using:
ss -ax | grep log
but the counters appear to be zero all the time, whether rsyslog is running or not.
What I am trying to do:
increase the buffer so I can queue at least 500 MB or 500,000 messages
monitor the queue: its size in bytes or its length in messages
if possible, log to dmesg when messages are dropped.
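For reference, here is a minimal sketch of the knobs and monitoring commands involved. The socket path and the 512 MB figure are assumptions; on Ubuntu 20.04 the log socket is typically /run/systemd/journal/dev-log (with /dev/log a symlink to it), and whether the byte limit for AF_UNIX datagrams is governed by the rmem or the wmem sysctls varies, so raising both is the safe move:
# Queue length: max number of datagrams that may sit unread on an
# AF_UNIX datagram socket. This may only apply to sockets created
# after the change, which would explain why changing it mid-flight
# has no visible effect.
sysctl -w net.unix.max_dgram_qlen=500000
# Byte limits: the queue is also bounded by the socket buffer sizes
# (~512 MB here, an assumption).
sysctl -w net.core.rmem_default=536870912
sysctl -w net.core.rmem_max=536870912
sysctl -w net.core.wmem_default=536870912
sysctl -w net.core.wmem_max=536870912
# Watch the receive queue (Recv-Q column) of the log socket.
watch -n1 'ss -ax | grep -E "dev-log|/dev/log"'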

Related

How to log the live output of a running process

I want to run a game server on my Ubuntu machine. I want to run it in the background and write the live output of that process to a log file. I tried using nohup and running the game server with "&" at the end, but I couldn't make it work the way I wanted.
Then I started reading about named pipes and gave them a go. I made a simple script that in theory should work. But, of course, I am missing something.
First, I made a pipe using the mkfifo command.
mkfifo testpipe
Then I created a small script:
#!/bin/bash
./mta-server64 > testpipe &
pid=$!
echo $pid  # so I know the PID of the process
cat < testpipe > log.txt &
(Note: I wrote this code from memory.)
The code works only when there is an error and the process stops. It actually records the game console error. But when the game server is running, I get no output in the log file.
I want to read the output (stdout and stderr, if I am not mistaken) of a process running in the background and record it in a log file.
I also thought about using screen, as it logs everything to a file, but I would prefer not to use it if there is a better solution.
EDIT:
First of all: thank you for the interest you have shown in helping me. I also have to apologize for giving only scarce details about what I intend to do with this small project, and for my limited understanding of stdout and stderr.
Let's start from the beginning.
I want to run a game server named Multi Theft Auto (https://multitheftauto.com/). This is GTA San Andreas, but multiplayer.
I can easily run this game server on my Ubuntu server by calling the executable ./mta-server64. After calling it, the game server console appears:
[|] MTA: San Andreas :: 0/32 players :: 196 resources :: 125 fps (25)
MTA:BLUE Server for MTA:SA
==================================================================
= Multi Theft Auto: San Andreas v1.5.6 [64 bit]
==================================================================
= Server name : Default MTA Server
= Server IP address: auto
= Server port : 22884
=
= Log file : /root/mta/mods/deathmatch/logs/server.log
= Maximum players : 32
= HTTP port : 22564
= Voice Chat : Disabled
= Bandwidth saving : Medium
==================================================================
[09:49:07] Resource 'mapmanager' requests some acl rights. Use the command 'aclrequest list mapmanager'
[09:49:07] Resources: 196 loaded, 0 failed
[09:49:07] Starting resources...
[09:49:07] Server minclientversion is now 1.5.6-9.16588.0
[09:49:07] INFO: MAPMANAGER: Some important ACL permissions are missing. To ensure the correct functioning of Mapmanager, please write: aclrequest allow mapmanager all
[09:49:07] Gamemode 'play' started.
[09:49:07] Authorized serial account protection is enabled for the ACL group(s): `Admin` See http://mtasa.com/authserial
[09:49:07] WARNING: <owner_email_address> not set
[09:49:07] Server started and is ready to accept connections!
[09:49:07] To stop the server, type 'shutdown' or press Ctrl-C
[09:49:07] Type 'help' for a list of commands.
[09:49:07] Querying MTA master server... success! (Auto detected IP:xxx.xxx.xxx.xxx)
I am using the following script to run the process in the background and (try to) capture its live output:
#!/bin/bash
newport=$(shuf -i 22003-22900 -n 1)  # random HTTP port
newip=$(shuf -i 22003-22900 -n 1)    # random server port (despite the name, this is a port, not an IP)
rm -rf ~/server/*
cp -r /home/user*/ftp/server/mtaserver/serverfiles/* ~/server
sed -i "s/<httpport>[0-9][0-9][0-9][0-9][0-9]<\/httpport>/<httpport>$newport<\/httpport>/g" ~/server/mods/deathmatch/mtaserver.conf
sed -i "s/<serverport>[0-9][0-9][0-9][0-9][0-9]<\/serverport>/<serverport>$newip<\/serverport>/g" ~/server/mods/deathmatch/mtaserver.conf
~/server/mta-server64 > >(tee -a outfile) 2>&1 &  # process substitution, so $! is the server's PID, not tee's
mta_pid=$!
echo $mta_pid
sleep 6
kill $mta_pid  # kill takes a PID; pkill expects a process name
(Note: Because of some technical problems, I had to add the first few lines of the script, which automatically replace the game files with new ones and also replace the existing ports with random ones.)
This script starts the server and tries to log the output of the process. The process is automatically killed after a few seconds, so there is only one instance of the game server at any given time.
THE ISSUE:
This script only logs the output if there is an error. I still cannot get the live output of the process while it is running. Maybe this is an issue with the game server, but I truly believe there should be a way to make it work the way I intend.
I believe you want to use the tee command to split the pipe output to a log file.
I suggest you read this article and these answers 1 2.
Usually this is enough: nohup somecommand > somecommand.log 2>&1 & Then tail -F somecommand.log to follow the logs.
After 2 days I finally figured out a way to make it work (the way I intended it to work, without taking into consideration any major security/performance risks).
Reading the comments made me realize I was attacking the wrong point. The stdout of the game server is buffered, making it impossible to log it to a file using the methods I tried when I posted my question (at least this is what I came to understand).
I did some research on how to run the application without having the stdout buffered: https://serverfault.com/questions/294218/is-there-a-way-to-redirect-output-to-a-file-without-buffering-on-unix-linux
My code now:
stdbuf -o0 ~/server/mta-server64 >> pipe &
cat < pipe | tee -a outfile &
After creating the named pipe, it executes the game server writing into that pipe, and then appends the stdout to the log file.
The stdbuf -o0 command disables stdout buffering (as noted in the link above).
This works for me, though I cannot guarantee it will work for anybody else. I am still not sure whether disabling the buffering is a safe approach to my issue, but for now it is what I need.
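For completeness, the whole pipeline as one self-contained sketch (same file names as above; starting the reader first, so the server never blocks on opening the pipe, and merging stderr with 2>&1 are my additions):
#!/bin/bash
# Create the named pipe if it does not already exist.
[ -p pipe ] || mkfifo pipe
# Start the reader first: tee copies everything arriving in the pipe
# into the log file while still echoing it.
cat < pipe | tee -a outfile &
# Run the server with stdout buffering disabled, stderr merged in.
stdbuf -o0 ~/server/mta-server64 > pipe 2>&1 &
echo $!  # PID of the game server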

How to keep service script output from clogging up "messages"

I have a service on CentOS 7 that runs a script, /usr/local/sbin/restarthelp2.sh, which checks a tunnel by reading the status of the network connection. The output ends up in /var/log/messages and makes the file huge. I already have the output being sent to its own log file; how do I keep the script's output out of the "messages" file?
[Unit]
Description=CHECK the wlan
[Service]
Type=simple
ExecStart=/usr/local/sbin/restarthelp2.sh
[Install]
WantedBy=default.target
Code for the script mentioned above:
#!/bin/bash
while true
do
    # Read link states directly from sysfs.
    status=$(</sys/class/net/wlan0/operstate)
    tunstate=$(</sys/class/net/tun0/carrier)
    now=$(date)
    if [ "$status" == up ] && [ "$tunstate" -eq 1 ]
    then
        echo "everything was good at $now, tunnel status was $tunstate" >> /var/log/wlancheck.log
        echo "tunnel status is UP"
    fi
    sleep 10  # sleep outside the if, so a down link does not busy-loop
done
You can add to your [Service] section of the Unit the line
StandardOutput=null
so that this output is not logged to the journal, and from there to syslog.
For other values see man systemd.exec.
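Applied to the unit in the question, the [Service] section would read:
[Service]
Type=simple
ExecStart=/usr/local/sbin/restarthelp2.sh
StandardOutput=null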
If you are using rsyslogd, you can instead filter messages much later, just before they are written to /var/log/messages. (Remove the above unit line to get back to normal logging.) Look for a file like /etc/rsyslog.conf and a line like
*.info;... /var/log/messages
Add in front of this line a filter that compares a property with what you want to suppress, and use the action stop, for example one of:
if $programname startswith "restarthelp" then stop
if $msg contains 'tunnel status is UP' then stop
There is extensive rsyslog documentation, but it is hard to follow as there are many old formats that are still supported, so you must be careful not to mix them up.
If you instead change the unit's StandardOutput=null to StandardOutput=syslog, you will no longer get the messages logged in the systemd journal; they will go straight to rsyslogd. I don't know whether this will still provide the status information you wanted, though.

Best method to output log content to listening port

I am sending the contents of a log via netcat to an application over the network. I don't know if what I'm doing is the most efficient approach, especially since I notice the netcat session becomes non-responsive; I have to stop netcat and start it again for the application to work again.
The command I run is:
/bin/tail -n1 -f /var/log/custom_output.log | /bin/nc -l -p 5020 --keep-open
This needs to run like this 24/7. Is this the most efficient way of doing it? How can I improve on it so I don't have to restart the process daily?
EDIT
So I realised that when the log is rotated, netcat stays locked onto a file that's no longer being written to. I can deal with this easily enough.
The question still stands. Is this the best way to do something like this?
It's been 6 years, but maybe this will come in handy for someone.
To account for log rotation, use tail with the -F flag.
nc (aka netcat) variant
LOG_FILE="/var/log/custom_output.log"
PORT=5020
tail -n0 -F "$LOG_FILE" | nc -k -l -p $PORT
Notes:
The -k flag in nc is analogous to --keep-open in the OpenBSD rewrite of netcat;
Multiple clients can connect to nc at the same time, but only the first one will receive appended log lines;
tail starts immediately, so it collects appended log lines even while no client is connected. Thus, the first client may receive some buffered data: all log lines appended since tail was started.
socat variant
LOG_FILE="/var/log/custom_output.log"
PORT=5020
socat TCP-LISTEN:$PORT,fork,reuseaddr SYSTEM:"tail -n0 -F \"$LOG_FILE\" </dev/null"
Note: here socat forks (clones itself) on each client connection and starts a separate tail process. Thus:
Each connected client receives appended log lines at the same time;
Clients will not receive any lines previously buffered by tail.
Additional notes
You can redirect stderr to stdout in the tail process by adding 2>&1 (in both variants). In that case, clients will also receive auxiliary message lines, e.g.:
tail: /var/log/custom_output.log: file truncated;
tail: '/var/log/custom_output.log' has become inaccessible: No such file or directory - printed when the log file has been removed or renamed, only if -F is used;
tail: '/var/log/custom_output.log' has appeared; following new file - printed when a new log file is created, only if -F is used.
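To keep this running 24/7 and restarting on failure, one option is to wrap the socat variant in a systemd unit. A hedged sketch, with the unit name and the socat path being assumptions:
# /etc/systemd/system/logstream.service (hypothetical name)
[Unit]
Description=Stream custom_output.log to TCP clients
After=network.target
[Service]
ExecStart=/usr/bin/socat TCP-LISTEN:5020,fork,reuseaddr SYSTEM:'tail -n0 -F /var/log/custom_output.log </dev/null'
Restart=always
RestartSec=2
[Install]
WantedBy=multi-user.target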

What happens to the new syslog messages when rsyslogd daemon is stopped?

I tried to search for this in many places, including the documents/man pages for openlog(3), syslog(3), and rsyslogd(8), but couldn't find an answer.
My question is: if rsyslogd is stopped or not yet started, are new syslog messages lost? Or does rsyslogd fetch them from /dev/log later, when it is enabled?
My test is:
On a running system where rsyslog is running, do the following:
logger -p local7.notice "my custom message1"
grep message1 /var/log/messages ----> Success
Stop the rsyslogd process
logger -p local7.notice "My other custom message2"
Now start the rsyslogd daemon
grep message2 /var/log/messages ----> FAIL
I understand from the openlog(3) and syslog(3) man pages that a socket is opened for the /dev/log file, and that if there is an error while sending the message to syslog (because rsyslogd is not running), the connection is closed (and the message is printed to the console/stderr if you have used LOG_CONS/LOG_PERROR).
Could anybody please tell me:
Is there any way for rsyslogd, when it comes back up, to get into the syslog file all those messages that arrived in its absence?
If not by default, is there any syscall, command, etc. to do that?
Thank you in advance.
-Neo
It won't happen by default. You can use the 'cat' command and pipe it to logger to get them in, though. Something like the following should work.
cat your.log | logger -n yourserver
You can also use the 'tail' command similarly to 'cat'.

Linux: Read a string from a file and execute commands in another script

I'm a newbie to Linux/coding/scripting.
I currently have a script to start the services of an OBIEE application on RHEL 5.5. This is a sample from my script:
case "$1" in
start)
echo -e "Starting Node Manager..."
$ORACLE_FMW/wlserver_10.3/server/bin/startNodeManager.sh > startNodemanager.log 2>&1 &
sleep 30
echo -e "Starting Weblogic Server...."
$ORACLE_FMW/user_projects/domains/bifoundation_domain/bin/startWebLogic.sh > startWeblogic.log 2>&1 &
As you can see, I'm trying to start two services one after the other with a fixed 30-second gap, independent of whether the 1st service (Node Manager) starts or fails.
Instead of using a static time gap in the script, I want to start the 2nd service (WebLogic) based on the output (startNodemanager.log) of the 1st service (Node Manager).
When Node Manager starts successfully, it ends its log file with a certain string, e.g.:
"INFO: Secure socket listener started on port 9556"
So is it possible to write a command in my script (in place of the fixed delay) that reads this string from the output log and starts the 2nd service only once the desired string appears, holding off execution of the 2nd service until then?
Thanks.
=======================
EDIT:
I have updated the script as suggested by yingted below.
It has not fixed my issue yet. read is holding off the 2nd service, but it fails to trigger it even after the desired message is recorded in the log. My updated script looks like this, using your command:
case "$1" in
start)
echo -e "Starting Node Manager..."
$ORACLE_FMW/wlserver_10.3/server/bin/startNodeManager.sh > startNodemanager.log 2>&1 &
read -r < <(tail -f startNodemanager.log | grep --line-buffered -Fx -- 'INFO: Secure socket listener started on port 9556')
echo -e "Starting Weblogic Server...."
$ORACLE_FMW/user_projects/domains/bifoundation_domain/bin/startWebLogic.sh > startWeblogic.log 2>&1 &
The problem might be with the message in the log.
Actually, the message 'INFO: Secure socket listener started on port 9556' is preceded by a timestamp in the log.
Is there any way I could treat the timestamp as a wildcard?
Your second process should follow the first one.
read -r < <(tail -f startNodemanager.log | grep --line-buffered 'INFO: Secure socket listener started on port 9556$')
The read command waits until startNodemanager.log contains a line ending in INFO: Secure socket listener started on port 9556.
read also accepts a -t timeout flag, which makes it exit with $? greater than 128 if the timeout is exceeded. If it succeeds instead, read returns 0.
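Putting the two together, a sketch of how the wait plus timeout might look in the start) branch (the 120-second figure is an assumption):
# Wait up to 120s for the listener line; the trailing $ anchors the
# pattern, so the timestamp prefix in the log does not matter.
if read -r -t 120 < <(tail -f startNodemanager.log | grep --line-buffered 'INFO: Secure socket listener started on port 9556$')
then
    echo -e "Starting Weblogic Server...."
    $ORACLE_FMW/user_projects/domains/bifoundation_domain/bin/startWebLogic.sh > startWeblogic.log 2>&1 &
else
    echo "Timed out waiting for Node Manager" >&2
fi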
