How to delete files after using grep function - linux

I have the command below:
grep -rnw '/root/serviceDown/' -e "The service 'httpd' on server is currently down"
and the result is as follows:
/root/serviceDown/2946/000.conf:5:subject=The service 'httpd' on server is currently down
/root/serviceDown/2955/000.conf:5:subject=The service 'httpd' on server is currently down
How to write a script which deletes those files after the grep command and then restarts the server?

This is probably what you are looking for:
grep -lr "The service 'httpd' on server is currently down" /root/serviceDown/ 2>/dev/null | xargs rm
The -n and -w flags do not really make sense for your purpose; the additional information they produce only gets in the way here. The -e flag is not required either: it merely marks the next argument as the pattern, which is unnecessary when the pattern is given directly. The -l flag reduces the output to the names of the matching files. The error output is discarded with 2>/dev/null, and the resulting list of files is finally piped into the xargs utility, which runs a simple rm command to delete them.
Restarting the server process afterwards can be done by whatever command you usually use for that, just execute it after the above command, either manually or separated by a simple ; in one go.
Obviously you need sufficient system permission to be able to perform both commands...
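In one go, that could look like the following sketch (assuming service httpd restart is the command you normally use to restart it, as in the script below):
grep -lr "The service 'httpd' on server is currently down" /root/serviceDown/ 2>/dev/null | xargs rm ; service httpd restart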
For more advanced processing of the result, as you ask in your comment below, I suggest you implement a simple script. This offers much more flexibility, is easier to read and maintain, and also allows execution as a single command.
This might be a starting point for you:
#!/bin/bash
# fetch list of matching files
list=$(grep -lr "The service 'httpd' on server is currently down" /root/serviceDown/ 2>/dev/null)
if [[ -z "$list" ]]; then
    echo "No files matched, nothing to be done..."
    exit
fi
# delete files one by one
for match in $list
do
    echo "Removing matched file $match..."
    rm "$match"
done
# restart server process
echo "Restarting server process..."
service httpd restart
# that's it, basically
echo "...done."
Save that script into some folder listed in your PATH environment variable (e.g. /root/bin/restartFailedHttpdServer), make it executable (chmod u+x /root/bin/restartFailedHttpdServer) and finally execute it (restartFailedHttpdServer).

Related

How to use if function in shell scripts?

I need to use an if statement so that only the required files are copied when using SFTP to transfer files from a remote server to my server. Here is my attempt to get all the data inside /filesnew.
#!/bin/bash
files=`sshpass -p 'XXX' sftp -P 2222 User1@10.18.90.12 <<EOF
cd /filesnew
ls
EOF`
files=`echo $files | sed "s/.*sftp> ls//"`
(
echo cd /filesnew
for file in $files; do
    echo get $file /data/processedfiles/$file
done
) | sshpass -p 'XXX' sftp -P 2222 User1@10.18.90.12
I need to filter out the files whose names start with "USER".
ex:
If($files==*USER*) then
echo get $file /data/processedfiles/$file
Can someone show me how to do this?
Use spaces around operators. Those are all arguments for commands and spaces separate them.
"If" is spelled if (lowercase) in Bash.
Testing a condition is done with [...] in Bash, not with (...).
Filtering is not comparison. Those are completely different operations. Use grep:
... | grep -E -v '^USER'
See: man grep
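Applied to the loop from your script, a minimal sketch could look like this (assuming, as above, that names starting with "USER" are the ones to be dropped; remove the -v if you instead want to keep only those files):
# drop the names starting with "USER" before building the get commands
files=$(printf '%s\n' $files | grep -E -v '^USER')
for file in $files; do
    echo get $file /data/processedfiles/$file
done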

When piping in BASH, is it possible to get the PID of the left command from within the right command?

The Problem
Given a BASH pipeline:
./a.sh | ./b.sh
The PID of ./a.sh being 10.
Is there a way to find the PID of ./a.sh from within ./b.sh?
I.e. if there is, and if ./b.sh looks something like the below:
#!/bin/bash
...
echo $LEFT_PID
cat
Then the output of ./a.sh | ./b.sh would be:
10
... Followed by whatever else ./a.sh printed to stdout.
Background
I'm working on this bash script, named cachepoint, that I can place in a pipeline to speed things up.
E.g. cat big_data | sed 's/a/b/g' | uniq -c | cachepoint | sort -n
This is a purposefully simple example.
The pipeline may run slowly at first, but on subsequent runs, it will be quicker, as cachepoint starts doing the work.
The way I picture cachepoint working is that it would use the first few hundred lines of input, along with a list of commands before it, in order to form a hash ID for the previously cached data, thus breaking the stdin pipeline early on subsequent runs, resorting instead to printing the cached data. Cached data would get deleted every hour or so.
I.e. everything left of | cachepoint would continue running, perhaps to 1,000,000 lines, in normal circumstances, but on subsequent executions of cachepoint pipelines, everything left of | cachepoint would exit after maybe 100 lines, and cachepoint would simply print the millions of lines it has cached. For the hash of the pipe sources and pipe content, I need a way for cachepoint to read the PIDs of what came before it in the pipeline.
I use pipelines a lot for exploring data sets, and I often find myself piping to temporary files in order to bypass repeating the same costly pipeline more than once. This is messy, so I want cachepoint.
This Shellcheck-clean code should work for your b.sh program on any Linux system:
#! /bin/bash
shopt -s extglob
shopt -s nullglob
left_pid=
# Get the identifier for the pipe connected to the standard input of this
# process (e.g. 'pipe:[10294010]')
input_pipe_id=$(readlink "/proc/self/fd/0")
if [[ $input_pipe_id != pipe:* ]]; then
    echo 'ERROR: standard input is not a pipe' >&2
    exit 1
fi
# Find the process that has standard output connected to the same pipe
for stdout_path in /proc/+([[:digit:]])/fd/1; do
    output_pipe_id=$(readlink -- "$stdout_path")
    if [[ $output_pipe_id == "$input_pipe_id" ]]; then
        procpid=${stdout_path%/fd/*}
        left_pid=${procpid#/proc/}
        break
    fi
done
if [[ -z $left_pid ]]; then
    echo "ERROR: Failed to set 'left_pid'" >&2
    exit 1
fi
echo "$left_pid"
cat
It depends on the fact that, on Linux, for a process with id PID, the path /proc/PID/fd/0 looks like a symlink to whatever is connected to the standard input of the process, and /proc/PID/fd/1 looks like a symlink to whatever is connected to its standard output.
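To see the symmetry the script relies on, you can inspect both ends of a pipe by hand (the PIDs and the pipe id below are purely illustrative):
# start a pipeline and note the PIDs of both sides, e.g. 12345 and 12346
sleep 300 | cat &
readlink /proc/12345/fd/1    # -> pipe:[10294010]
readlink /proc/12346/fd/0    # -> pipe:[10294010], the same pipe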

Execute Command In Multiple Directories

I have a large set of single install WordPress sites on my Linux server. I want to create text files that contain directory names for groups of my sites.
For instance all my sites live in /var/www/vhosts and I may want to group a set of 100 websites in a text file such as
site1
site2
site3
site4
How can I write a script that loops through only the directories specified in the group text files and executes a command? My goal is to symlink some of the WordPress plugins, and I don't want to go directory by directory manually if I can just create groups and execute the command within that group of directories.
For each site in the group file, go to the /wp-content/plugins folder and execute the symlink command specified.
Depending on your goals, you may be able to achieve that with a one-liner using find and an -exec action. I tend to like doing it as a Bash loop, because it is easier to add additional commands and to handle errors, instead of having one long and unwieldy command doing it all.
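For the one-liner route, a sketch driven by the group file could look like this (using xargs rather than find, since the site names come from a text file; the shared plugin path and the group file name are assumptions); the fuller Bash-loop version follows below.
xargs -I{} ln -s /opt/shared-plugins/my-plugin /var/www/vhosts/{}/wp-content/plugins/my-plugin < group1.txt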
I do not know if this is what you intend, but here is a proposal.
#!/bin/bash
# Receives site group file as argument 1
# Receives symlink name as argument 2
# Further arguments will be passed to the command called
sites_dir="/var/www/vhosts"
commands_dir="wp-content/plugins"
if ! [[ -f $1 ]] ; then
    echo "Site list not found : $1"
    exit
fi
while IFS= read -r site
do
    site_dir="$sites_dir/$site"
    if ! [[ -d $site_dir ]] ; then
        echo "Unknown site : $site"
        continue
    fi
    command="$site_dir/$commands_dir/$2"
    if ! [[ -x $command ]] ; then
        echo "Missing or non-executable command : $command"
        continue
    fi
    "$command" "${@:3}"
done <"$1"
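Assuming the script is saved as run-in-group.sh (a hypothetical name), a call could look like this, where link-plugins.sh is a helper script you place in each site's wp-content/plugins folder and everything after the second argument is passed through to it:
./run-in-group.sh group1.txt link-plugins.sh /opt/shared-plugins/my-plugin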

BASH save stdout to new file upon execution

Please bear with me if my terminology or syntax is less than stellar (still learning). I currently have a simple bash script that checks the arguments of the command and outputs file names with matching text. This part of my script works correctly via a grep command piped to xargs for proper formatting.
When running the script, I run through a simple loop to check if the value is null and then move to running my variable/search if not.
My question is: is it possible to have this script output via stdout AND also save to a new file each time it is run (not overwriting), named with the user input and the date/time? E.g. report-bob-0729161500.rpt
I saw some other suggestions to use tee with the command, but I was trying to get it to work within the script. Similarly, another suggestion was to use exec > >(tee -i logfile.txt), but I am unsure how to format this properly to include the date/time and the $1 input in a new file each time the script is executed.
Any help or suggested resources?
Thank you.
SEARCH=`[search_variable]`
if [ -z "$SEARCH" ]
then
    echo "$1 not found."
else
    echo -e "REPORT LISTING\n\n"
    echo "$SEARCH"
fi
EDIT: I did try simply piping the echo statements to the tee command, which does work. However, I am still curious if anyone has other suggestions to accomplish this same task via alternative methods. Thank you.
With echo statements piped to tee:
SEARCH=`[search_variable]`
DATE=`date +"%m%d%y%H%M"`
if [ -z "$SEARCH" ]
then
    echo "$1 not found."
else
    echo -e "REPORT LISTING\n\n" | tee "tps-list-$1-$DATE.rpt"
    echo "$SEARCH" | tee -a "tps-list-$1-$DATE.rpt"
fi
If you want to do it within the script, why not just write to both standard output and the file (using append where appropriate)? Maybe a bit more writing, but it gives complete control.
Leon
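A minimal sketch of that approach, combining the exec > >(tee ...) redirection mentioned in the question with the user input and a timestamp (the report name pattern is an assumption based on the example above):
#!/bin/bash
# duplicate everything written to stdout into a per-run report file
report="report-$1-$(date +%m%d%y%H%M%S).rpt"
exec > >(tee -a "$report")
echo -e "REPORT LISTING\n\n"
echo "$SEARCH"   # $SEARCH set earlier, as in the original script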

Run a shell command when a file is added

I have a folder named images on my linux box.
This folder is connected to a website, and the admin of the site has the ability to add pictures to it. However, when a picture is added, I want a command to run that resizes all of the pictures in that directory.
In short, I want to know how I can make the server run a specific command when a new file is added to a specific location.
I don't know how people are uploading content to this folder, but you might want to use something lower-tech than monitoring the directory with inotify.
If the protocol is FTP and you have access to your FTP server's log, I suggest tailing that log to watch for successful uploads. This sort of event-triggered approach will be faster, more reliable, and less load than a polling approach with traditional cron, and more portable and easier to debug than something using inotify.
The way you handle this will of course depend on your FTP server. I have one running vsftpd whose logs include lines like this:
Fri May 25 07:36:02 2012 [pid 94378] [joe] OK LOGIN: Client "10.8.7.16"
Fri May 25 07:36:12 2012 [pid 94380] [joe] OK UPLOAD: Client "10.8.7.16", "/path/to/file.zip", 8395136 bytes, 845.75Kbyte/sec
Fri May 25 07:36:12 2012 [pid 94380] [joe] OK CHMOD: Client "10.8.7.16", "/path/to/file.zip 644"
The UPLOAD line only gets added when vsftpd has successfully saved the file. You could parse this in a shell script like this:
#!/bin/sh
tail -F /var/log/vsftpd.log | while read line; do
    if echo "$line" | grep -q 'OK UPLOAD:'; then
        filename=$(echo "$line" | cut -d, -f2)
        if [ -s "$filename" ]; then
            : # do something with $filename
        fi
    fi
done
If you're using an HTTP upload tool, see if that tool has a text log file it uses to record incoming files. If it doesn't, consider adding some sort of logger function to it, so it'll produce logs that you can tail.
As John commented, the inotify API is a starting point; however, you might be interested in some tools that use this API to perform file notification tasks:
For example, incron can be used to run cron-like tasks when file or directory changes are detected.
Or inotify-tools, a set of command-line tools that you could use to build your own file notification shell script.
If you look at the bottom of the Wiki page for inotify-tools you will see references to even more tools and support for higher-level languages like Python, Perl or Ruby (example code).
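For instance, a minimal sketch using inotifywait from inotify-tools might look like this (the watched path and the ImageMagick mogrify call are assumptions about your setup):
#!/bin/bash
# resize each image as soon as it has been fully written to the watched folder
inotifywait -m -e close_write --format '%w%f' /var/www/images |
while read -r newfile; do
    mogrify -resize '1024x1024>' "$newfile"
done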
You might want to look at inotify
The inotify API provides a mechanism for monitoring file system events. Inotify can be used to monitor individual files, or to monitor directories. When a directory is monitored, inotify will return events for the directory itself, and for files inside the directory.
#!/bin/bash
tail -F -n0 /var/log/vsftpd.log | while read line; do
    if echo "$line" | grep -q 'OK UPLOAD:'; then
        filename=$(echo "$line" | cut -d, -f2 | awk '{print $1}')
        filename="${filename%\"}"
        filename="${filename#\"}"
        #sleep 1s
        if [ -s "$filename" ]; then
            # do something with $filename
            echo "$filename"
        fi
    fi
done
Using ghoti's work
Here is what I did to get the user's free space:
#!/bin/bash
tail -F -n 1 /var/log/vsftpd.log | while read line; do
    if echo "$line" | grep -q 'OK LOGIN:'; then
        pid=$(sed 's/.*\[\([^]]*\)\].*/\1/g' <<< "$line")
        # the '<<<' operator doesn't exist in dash, so use bash
        if [[ $pid != *"pid"* ]]; then
            echo -e "Disk 1: Contains Games:\n" > /home/vftp/"$pid"/FreeSpace.txt
            df -h /media/Disk1/ >> /home/vftp/"$pid"/FreeSpace.txt
            echo -e "\r\n\r\nIn order to read this properly you need to use a text editor that can read *nix format files" >> /home/vftp/"$pid"/FreeSpace.txt
        fi
        echo "checked"
        # awk '{ sub("\r$", ""); print }' /home/vftp/"$pid"/FreeSpace.txt > /home/vftp/"$pid"/FreeSpace.txt
    fi
done
If the file is added through an HTTP upload, and if your server is Apache, you might want to check mod_security.
It enables you to run a script for every upload made through HTTP POST.
