Grep some information from log files distributed on different nodes - linux

I need a grep command that automatically logs in to all the required nodes connected to the host, using a for loop, and displays the results on the host's screen without saving any file on the host. All I need to change each time is the search string and the nodes to grep on.
This is what I tried:
for i in <Nodename>{1..5}; do
    echo "$i"
    ssh -q "$i" "cd <path>; grep '<string>' <filename>"
done
For example, if the node name is ca02p3zsynh001, then the log file name will be <filename>.log_ca02p3zsynh001_20171204_001316.gz. The last two fields are the date and the time (00 h 13 min 16 s).
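Since the logs are gzipped and carry the node name plus a timestamp in their names, one variant of the loop could use zgrep and a glob on the remote side. This is only a sketch: it assumes zgrep is installed on the nodes and keeps the placeholders from the question.
for i in <Nodename>{1..5}; do
    echo "$i"
    # zgrep searches compressed files directly; the glob matches any date/time for this node
    ssh -q "$i" "cd <path> && zgrep '<string>' <filename>.log_${i}_*.gz"
done
The output goes straight to the host's terminal, so nothing is saved on the host.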

Related

Bash or systemd method to run shell script continuously after current task is finished

I'm using a program to scan a huge IP segment (a /8 address range) and save the output in a Linux directory.
To run the program, I wrote a script named scan.sh that looks like this:
#!/bin/sh
while IFS= read -r IP || [[ -n "$IP" ]]; do
    /path/to/program "$IP" > "/output/path/$IP.xml"
done < ip.txt
Since the address range is so huge, it is not feasible to fit all IPs in one file, so I split them into chunks and run the script concurrently like this. Let's name this allscan.sh:
#!/bin/bash
/path/script/0-4/scan.sh & /path/script/5-10/scan.sh & /path/script/11-15/scan.sh &
Obviously, the real script is much longer than the above, but you get the idea. The 0-4 and 5-10 folders represent the IP ranges split into smaller chunks.
The first run took 28 days to finish.
My question is: how do I keep it running continuously, restarting once the current run is finished?
I don't think a monthly cron job is suitable for this, because if a run takes more than 30 days to finish, cron will start the script again and create unnecessary load on the server.
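One bash approach, sketched minimally under the assumption that the three chunk scripts from the question are the whole workload: wrap them in an endless loop and use wait so that a new pass only starts once every chunk of the previous pass has finished.
#!/bin/bash
# Rerun the whole scan as soon as the previous pass has finished.
while :
do
    /path/script/0-4/scan.sh &
    /path/script/5-10/scan.sh &
    /path/script/11-15/scan.sh &
    wait    # block until all three background chunks are done
done
Alternatively, a systemd service with Restart=always wrapped around a one-pass allscan.sh (one that waits for its background jobs before exiting) would restart the scan automatically each time a pass completes.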

Copying updated files from one server to another after 15 min

I want to copy an updated file from one server to another every 15 min, whenever a new file gets generated. I have written code using an expect script. It works fine, but every 15 min it copies all the files in the directory, i.e. it replaces the existing ones and copies the latest one as well. I want only the updated file (updated every 15 min) to be copied, not all the files.
Here is my script:
while :
do
    expect -c "spawn scp -P $Port sftpuser@$IP_APP:/mnt/oam/PmCounters/LBO* Test/;expect \"password\";send \"password\r\";expect eof"
    sleep 900
done
Can I use rsync or some other approach, and how?
rsync only copies changed or new files by default. Use, for example, this syntax:
rsync -avz -e ssh remoteuser@remotehost:/remote/dir /local/dir/
This specifies ssh as the remote shell to use (-e ssh); -a activates archive mode, -v enables verbose output and -z compresses the transfer.
You could run that every 15 minutes via a cron job.
For the password you can use the $RSYNC_PASSWORD environment variable or the --password-file flag; note that these only apply when talking to an rsync daemon, so over ssh key-based authentication is the usual way to avoid the prompt.
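For illustration, a minimal crontab entry for this (assuming the example paths above and key-based ssh authentication, so no password prompt) could be:
*/15 * * * * rsync -avz -e ssh remoteuser@remotehost:/remote/dir /local/dir/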

Linux: Read string from a file and execute commands in another script

I'm a newbie to Linux/coding/scripting.
I currently have a script to start the services of an OBIEE application on RHEL 5.5. This is a sample from my script:
case "$1" in
start)
echo -e "Starting Node Manager..."
$ORACLE_FMW/wlserver_10.3/server/bin/startNodeManager.sh > startNodemanager.log 2>&1 &
sleep 30
echo -e "Starting Weblogic Server...."
$ORACLE_FMW/user_projects/domains/bifoundation_domain/bin/startWebLogic.sh > startWeblogic.log 2>&1 &
As you can see, I'm trying to start 2 services one after the other with a static gap of 30 seconds, regardless of whether the 1st service (Node Manager) starts or fails.
Instead of using that static time gap in the script, I want to start the 2nd service (WebLogic) based on the output (startNodemanager.log) of the 1st service (Node Manager).
When Node Manager starts successfully, it ends its log file with a certain string, e.g.:
"INFO: Secure socket listener started on port 9556"
So is it possible to write some command in my script (in place of the sleep) that reads this string from the output log and starts the 2nd service only once the desired string appears, holding off execution of the 2nd service until then?
Thanks.
=======================
EDIT:
I have updated the script as suggested by yingted below.
It has not fixed my issue yet: read does hold off the 2nd service, but it fails to trigger it even after the desired message is recorded in the log. My updated script looks like this, using your command:
case "$1" in
start)
echo -e "Starting Node Manager..."
$ORACLE_FMW/wlserver_10.3/server/bin/startNodeManager.sh > startNodemanager.log 2>&1 &
read -r < <(tail -f startNodemanager.log | grep --line-buffered -Fx -- 'INFO: Secure socket listener started on port 9556')
echo -e "Starting Weblogic Server...."
$ORACLE_FMW/user_projects/domains/bifoundation_domain/bin/startWebLogic.sh > startWeblogic.log 2>&1 &
The problem might be with the message in the log: the message 'INFO: Secure socket listener started on port 9556' is actually preceded by a timestamp in the log.
Is there any way I could treat the timestamp as a wildcard?
Your second process should follow the first one.
read -r < <(tail -f startNodemanager.log | grep --line-buffered 'INFO: Secure socket listener started on port 9556$')
The read command waits until startNodemanager.log contains a line ending in INFO: Secure socket listener started on port 9556.
read also accepts a -t timeout flag: if the timeout is exceeded, read exits with a status greater than 128; if it succeeds, it returns 0.
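Putting the pieces together, a sketch of the start) block with a timeout and a fallback might look like this (the 120-second limit is an arbitrary assumption; the paths and the log message are the ones from the question):
echo -e "Starting Node Manager..."
$ORACLE_FMW/wlserver_10.3/server/bin/startNodeManager.sh > startNodemanager.log 2>&1 &
# Wait up to 120 s for the listener message; the pattern is anchored only at the end,
# so the timestamp that precedes the message in the log does not matter.
if read -r -t 120 < <(tail -f startNodemanager.log | grep --line-buffered 'INFO: Secure socket listener started on port 9556$')
then
    echo -e "Starting Weblogic Server...."
    $ORACLE_FMW/user_projects/domains/bifoundation_domain/bin/startWebLogic.sh > startWeblogic.log 2>&1 &
else
    echo "Node Manager did not report its listener within 120 seconds" >&2
fi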

Instance limited cron job

I want to run a cron job every minute that will launch a script. Simple enough there. However, I need to make sure that no more than X instances (X is defined in the script) are ever running. These are queue workers, so if at any minute interval 6 workers are still active, then I would not launch another instance. The script simply launches a PHP script which exits if no job is available. Right now I have a shell script that perpetually relaunches itself 10 seconds after it exits... but there are long periods of time when there are no jobs, and a one-minute delay is fine. Eventually I would like to have two cron jobs for peak and off-peak, with different intervals.
Make sure you have a unique script name.
Then check whether 6 instances are already running:
if [ "$(pgrep -c '^UNIQUE_SCRIPT_NAME$')" -lt 6 ]
then
    # start my script
else
    # do not start my script
fi
I'd say that if you want to iterate as often as every minute, then a process like your current shell script that relaunches itself is what you actually want to do. Just increase the delay from 10 seconds to a minute.
That way, you can also more easily control your delay for peak and off-peak, as you wanted. It would be rather elegant to simply use a shorter delay if the script found something to do the last time it was launched, or a longer delay if it did not find anything.
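A minimal sketch of that adaptive-delay loop, assuming the PHP worker exits with a non-zero status when no job was available (the script path and both delays are placeholders):
#!/bin/bash
# Relaunch the worker forever, polling faster while there is work to do.
while :
do
    if php /path/to/worker.php
    then
        sleep 10    # a job was processed: check again soon (peak behaviour)
    else
        sleep 60    # the queue was empty: back off to a minute (off-peak behaviour)
    fi
done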
You could use a script like OneAtATime to guard against multiple simultaneous executions.
This is what I am using in my shell scripts:
echo -n "Checking if job is already running... "
me=`basename $0`
running=$(ps aux | grep ${me} | grep -v .log | grep -v grep | wc -l)
if [ $running -gt 1 ];
then
echo "already running, stopping job"
exit 1
else
echo "OK."
fi;
The command you're looking for is in line 3. Just replace ${me} with your PHP script name. In case you're wondering about the grep -v .log part: I'm piping the output into a log file whose name partially contains the script name, so this way I avoid counting it twice.

RRD print the timestamp of the last valid data

I have an RRD database storing ping responses from a wide range of network equipment.
How can I print on the graph the timestamp of the last valid entry in the RRD database, so that when a host is down I can see when it went down?
I use the following to create the RRD file:
rrdtool create terminal_1.rrd -s 60 \
DS:ping:GAUGE:120:0:65535 \
RRA:AVERAGE:0.5:1:2880
Use the lastupdate option of rrdtool.
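For example, with the file created above:
rrdtool lastupdate terminal_1.rrd
This prints the timestamp of the most recent update together with the last value stored for each data source.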
Another solution exists if you only have one file per host: don't update your RRD when the host is down. You can then see the last update time with a plain ls or stat, as in:
ls -l terminal_1.rrd
stat --format %Y terminal_1.rrd
In case you plan to use the RRD caching daemon (rrdcached), you have to use the last command so that pending updates are flushed first:
rrdtool last terminal_1.rrd
