SVN needs-lock checking using pre-commit hook - linux

I have a pre-commit script that I took from the internet, but most such scripts fail in one scenario or another. I would like a pre-commit script that allows a commit only if the svn:needs-lock property is set, i.e. the Lock-Modify-Unlock model.
I have enabled auto-props in the client configuration and added the * = svn:needs-lock=* rule as well.
Most of the scripts I found check the needs-lock property only when new files are added, but that check alone does not solve the problem. In the scenarios below the lock mechanism can still be bypassed:
1) The developer can remove the needs-lock property while editing.
2) The property can be removed on its own, without modifying the file.
In both of these scenarios the scripts fail.
All ideas are welcome.
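For reference, the auto-props setup mentioned above lives in the client-side Subversion configuration (typically ~/.subversion/config on Linux); a minimal sketch of that configuration:
[miscellany]
enable-auto-props = yes

[auto-props]
* = svn:needs-lock=*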

Something like the below should work.
# in the pre-commit hook: REPOS="$1", TXN="$2"
# check every changed path except deletions (property-only changes show up as "_U")
for y in $(svnlook changed -t "$TXN" "$REPOS" | grep -v "^D" | awk '{print $2}')
do
    # reject the commit if the path does not carry svn:needs-lock
    if svnlook proplist -t "$TXN" "$REPOS" "$y" | grep -iq "needs-lock"
    then
        echo "OK: $y"
    else
        echo "Commit rejected: svn:needs-lock is not set on $y" >&2
        exit 1
    fi
done
This checks, before the commit is accepted, that the property is applied to every changed path. If you need to exclude folders from the check, add one more condition that tests whether the path is a folder or a file and proceed accordingly, as in the sketch below.
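One way to add that condition (a sketch, relying on the fact that svnlook changed prints directory paths with a trailing slash) is to skip directories at the top of the loop body, before the proplist check:
case "$y" in
    */) continue ;;   # directories cannot hold locks, so skip them
esac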

how to move a file after grep command when there is no return result

I want to move files after a grep command, but when I execute my script I noticed that grep returns no results. Regardless of that, I want to move the file(s) to another directory.
This is what I've been doing:
for file in *.sup
do
grep -iq "$file" '' /desktop/list/varlogs.txt || mv "$file" /desktop/first;
done
but I am getting this error:
mv: 0653-401 Cannot rename first /desktop/first/first
Suggestions would be very helpful.
I am not sure what the two single quotes in between ..."$file" '' /desktop... are for. With them there, grep also looks for $file in a file called '', so grep will throw a grep: : No such file or directory error.
Also pay attention to the behavior change introduced by the -q or --quiet flag, as it affects grep's return value and therefore whether the command after the || runs or not (see man grep for more).
I can't make out exactly what you are trying to do, but you can add a couple statements to help figure out what is going on. You could run your script with bash -x ./myscript.sh to display everything that runs as it runs, or add set -x before and set +x after the for loop in the script to show what is happening.
I added some debugging to your script and changed the || to an if/then statement to expose what is happening. Try this and see if you can find where things are going awry.
echo -e "============\nBEFORE:\n============"
echo -e "\n## The files in current dir '$(pwd)' are: ##\n$(ls)"
echo -e "\n## The files in '/desktop/first' are: ##\n$(ls /desktop/first)"
echo -e "\n## Looking for '.sup' files in '$(pwd)' ##"
for file in *.sup; do
echo -e "\n## == look for '${file}' in '/desktop/list/varlogs.txt' == ##"
# let's change this to an if/else
# the || means try the left command for success, or try the right one
# grep -iq "$file" '' /desktop/list/varlogs.txt || mv -v "$file" /desktop/first
# based on `man grep`: EXIT STATUS
# Normally the exit status is 0 if a line is selected,
# 1 if no lines were selected, and 2 if an error occurred.
# However, if the -q or --quiet or --silent is used and a line
# is selected, the exit status is 0 even if an error occurred.
# note that --ignore-case and --quiet are long versions of -i and -q/ -iq
if grep --ignore-case --quiet "${file}" '' /desktop/list/varlogs.txt; then
echo -e "\n'${file}' found in '/desktop/list/varlogs.txt'"
else
echo -e "\n'${file}' not found in '/desktop/list/varlogs.txt'"
echo -e "\nmove '${file}' to '/desktop/first'"
mv --verbose "${file}" /desktop/first
fi
done
echo -e "\n============\nAFTER:\n============"
echo -e "\n## The files in current dir '$(pwd)' are: ##\n$(ls)"
echo -e "\n## The files in '/desktop/first' are: ##\n$(ls /desktop/first)"
|| means: try the first command, and if it is not successful (i.e. it does not return 0), then run the next command. In your case, it appears you are looking in /desktop/list/varlogs.txt to see if any .sup files in the current directory match entries in the varlogs file and, if not, moving them to the /desktop/first/ directory; if matches are found, they stay in the current dir (according to the logic you currently have). See the short example after this list.
mv --verbose explains what is being done
echo -e enables interpretation of backslash escapes
set -x shows the commands as they are run, for debugging
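As a minimal illustration of that || behavior (the file names here are hypothetical):
# the right-hand side runs only when the left-hand side fails (non-zero exit status)
grep -q "needle" haystack.txt || echo "needle not found in haystack.txt"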
Please respond and clarify if anything is different. I am trying to raise in the ranks to be more helpful so I would appreciate comments, and upvotes if this was helpful.
Suggesting this to avoid repeated scans of /desktop/list/varlogs.txt and to remove duplicates:
mv $(grep -o -f <(ls -1 *.sup) /desktop/list/varlogs.txt | sort | uniq) /desktop/first
Suggest testing step 1 in the explanation below on its own first, to list the files that would be moved.
Explanation
1. grep -o -f <(ls -1 *.sup) /desktop/list/varlogs.txt | sort | uniq
Lists all the files selected by ls -1 *.sup that are mentioned in /desktop/list/varlogs.txt, in a single scan.
-o prints only the matched filenames.
<(ls -1 *.sup) uses process substitution to feed the output of ls -1 *.sup to -f as a temporary file of match patterns.
| sort | uniq then sorts the list and removes duplicates (each file can be moved only once).
2. mv <files-list-output-from-step-1> /desktop/first
Moves all the files found in step 1 to the directory /desktop/first.
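If the selected list might be long, the same selection can also be fed to mv through xargs, one file per invocation (a sketch under the same assumptions as above):
grep -o -f <(ls -1 *.sup) /desktop/list/varlogs.txt | sort -u | xargs -I{} mv -- {} /desktop/first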

Determining through Shell Script if a Linux process is running with given arguments

The company I work for has a crontab entry set to run a given shell script every few minutes to perform certain complex operations without the users' intervention. This script basically executes multiple Perl scripts in sequence, first checking that each one is not already running, using the following structure as many times as there are customers:
for i in `seq 1 20`;
do
ps ax | grep ourFile10000008.p | grep pl 2>> /dev/null >> $LOG
if [ $? -eq 1 ] ; then
cd /path/to/the/script
perl ourFile10000008.pl 10000008 & 2>> $LOG
fi
ps ax | grep ourFile10000009.p | grep pl 2>> /dev/null >> $LOG
if [ $? -eq 1 ] ; then
cd /path/to/the/script
perl ourFile10000009.pl 10000009 & 2>> $LOG
fi
# (and so on, and so forth...)
done
This kind of works, except for the fact that there are now dozens of "ourFile" Perl scripts in our /path/to/the/script folder, and they are exact copies of each other! Every time a new customer comes online, we need to create a new replica, which makes this structure very hard to maintain, to say the least.
I'm trying to make this structure run on a single file (named here as [theOneFile.pl]) that's another copy of those scripts but is called every time with a new argument. This works, but now I have to make sure I'm only running this file once per argument passed.
After some research, and thanks to this answer, I have successfully determined the argument behind a running [theOneFile.pl] through pgrep -af theOneFile.pl | tr '\000' ' ' | awk '{print $4}' >> $LOG. However, this gives me a list of results to contend with. To keep today's logic as intact as possible, I'm trying to determine only whether one of these processes is running with one specific argument at a given time (e.g. theOneFile.pl 10000009), but I'm not sure how to do so. Any ideas?
pgrep -f (which you are using) matches the pattern against the whole command line of a process, not just the process name. That said, you can use:
arg="foo"
pgrep -f "theOneFile.pl.*${arg}"
Well, the pgrep approach is prone to race conditions. Better would be to change the script itself to use an exclusive lock per argument.
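For instance, with flock(1) from util-linux (a sketch; the lock-file path is arbitrary and the argument is the one from the question):
arg="10000009"
# -n: give up immediately if another instance already holds the lock for this argument
flock -n "/tmp/theOneFile.${arg}.lock" perl /path/to/the/script/theOneFile.pl "$arg" >> "$LOG" 2>&1 &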

BASH save stdout to new file upon execution

Please bear with me if my terminology or syntax is less than stellar (I'm still learning). I currently have a simple bash script that checks the arguments of the command and outputs file names with matching text. This part of my script works correctly via a grep command piped to xargs for proper formatting.
When running the script, I run through a simple loop to check if the value is null, and then move on to running my variable/search if not.
My question is: is it possible to have this script output via stdout AND also save to a new file each time it is run, named with the user input and date/time (but not overwriting)? EX: report-bob-0729161500.rpt
I saw some other suggestions to use tee with the command, but I was trying to get it to work within the script. Similarly, another suggestion was to utilize exec > >(tee -i logfile.txt), but I am unsure how to properly format this to include the date/time and the $1 input in new files each time the script is executed.
Any help or suggested resources?
Thank you.
SEARCH=`[search_variable]`
if [ -z "$SEARCH" ]
then
echo "$1 not found."
else
echo -e "REPORT LISTING\n\n"
echo "$SEARCH"
fi
EDIT: I did try simply piping the echo statements to the tee command, which does work. However, I am still curious if anyone has other suggestions to accomplish this same task via alternative methods. Thank you.
With echo statements piped to tee:
SEARCH=`[search_variable]`
# %H (rather than %k) avoids a space-padded hour in the file name
DATE=`date +"%m%d%y%H%M"`
if [ -z "$SEARCH" ]
then
echo "$1 not found."
else
echo -e "REPORT LISTING\n\n" | tee "tps-list-$1-$DATE.rpt"
# -a appends, so the second tee does not overwrite the header written above
echo "$SEARCH" | tee -a "tps-list-$1-$DATE.rpt"
fi
If you want to do it within the script, why not just write to both standard output and the file (using append where appropriate)? Maybe a bit more writing, but it gives complete control.
Leon
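For the exec > >(tee ...) variant mentioned in the question, a minimal sketch (bash only; the file-name pattern is taken from the report-bob-0729161500.rpt example above):
#!/bin/bash
# report file named after the first argument plus a run timestamp, e.g. report-bob-0729161500.rpt
OUTFILE="report-$1-$(date +%m%d%y%H%M).rpt"
# from this point on, everything written to stdout also goes to $OUTFILE
exec > >(tee -a "$OUTFILE")

# ...the script's existing echo statements follow unchanged...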

Pre-commit hook for Subversion fails

I need the most basic hook to prevent check-ins with empty comments. I googled, found a sample bash script, made it shorter, and here is what I have:
#!/bin/sh
REPOS="$1"
TXN="$2"
# Make sure that the log message contains some text.
SVNLOOK=/usr/bin/svnlook
ICONV=/usr/bin/iconv
SVNLOOKOK=1
$SVNLOOK log -t "$TXN" "$REPOS" | \
grep "[a-zA-Z0-9]" > /dev/null || SVNLOOKOK=0
if [ $SVNLOOKOK = 0 ]; then
echo "Empty log messages are not allowed. Please provide a proper log message." >&2
exit 1
fi
# Comments should have more than 5 characters
LOGMSG=$($SVNLOOK log -t "$TXN" "$REPOS" | grep [a-zA-Z0-9] | wc -c)
if [ "$LOGMSG" -lt 6 ]; then
echo -e "Please provide a meaningful comment when committing changes." 1>&2
exit 1
fi
Now I'm testing it with Tortoise SVN and here is what I see:
Commit failed (details follow): Commit blocked by pre-commit hook
(exit code 1) with output: /home/svn/repos/apress/hooks/pre-commit:
line 11: : command not found Empty log messages are not allowed.
Please provide a proper log message. This error was generated by a
custom hook script on the Subversion server. Please contact your
server administrator for help with resolving this issue.
What is the error? svnlook is in /usr/bin.
I'm very new to Linux and don't understand what is happening.
To debug your script you'll have to run it manually.
To do that you'll have to get the sample values for the parameters passed to it.
Change the beginning of your script to something like
#!/bin/sh
REPOS="$1"
TXN="$2"
echo "REPOS = $REPOS, TXN = $TXN" >/tmp/svnhookparams.txt
Do a commit and check the file /tmp/svnhookparams.txt for the values.
Then do another change to the script:
#!/bin/sh
set -x
REPOS="$1"
TXN="$2"
This will enable echo of all commands run by the shell.
Now run your script directly from the terminal, passing it the values you got previously.
Check the output for invalid commands or empty variable assignments.
If you have problems with that, post the output here.
$PATH is empty when hook scripts run, so you need to specify the full path for every external command. My guess is that grep is not found.
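For example (a sketch; adjust the directories to wherever your tools live), either set PATH near the top of the hook:
PATH=/usr/bin:/bin
export PATH
or call grep by its full path, e.g. /usr/bin/grep, just as the script already does for svnlook.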
I'm answering my own question.
This didn't work:
$SVNLOOK log -t "$TXN" "$REPOS" | \
grep "[a-zA-Z0-9]" > /dev/null || SVNLOOKOK=0
It had to be on one line (most likely there was an invisible character, such as a trailing space or a Windows line ending, after the backslash, which stops it from acting as a line continuation):
$SVNLOOK log -t "$TXN" "$REPOS" | grep "[a-zA-Z0-9]" > /dev/null || SVNLOOKOK=0

Shell script for parsing log file

I'm writing a shell script to parse through a log file and pull out all instances where sudo succeeded and/or failed. I'm realizing now that this probably would have been easier with the shell's equivalent of a regex, but I didn't want to take the time to dig around (and now I'm paying the price). Anyway:
sudobool=0
sudoCount=0
for i in `cat /var/log/auth.log`;
do
for word in $i;
do
if $word == "sudo:"
then
echo "sudo found"
sudobool=1;
sudoCount=`expr $sudoCount + 1`;
fi
done
sudobool=0;
done
echo "There were " $sudoCount " attempts to use sudo, " $sudoFailCount " of which failed."
So, my understanding of the code I've written: read auth.log and split it up line by line, which are stored in i. Each word in i is checked to see if it is sudo:, if it is, we flip the bool and increment. Once we've finished parsing the line, reset the bool and move to the next line.
However, judging by my output, the shell is trying to execute the individual words of the log file, typically returning '$word : not found'.
why don't you use grep for this?
grep sudo /var/log/auth.log
if you want a count pipe it to wc -l
grep sudo /var/log/auth.log | wc -l
or, better still, use grep's -c option, which prints how many lines containing sudo were found
grep -c sudo /var/log/auth.log
or maybe I am missing something simple here?
EDIT: I saw $sudoFailCount after scrolling. Do you want to count how many failed attempts were made to use sudo? You have not assigned any value to $sudoFailCount in your script, so it will print nothing. Also, you are missing the test brackets [[ ]] around your if condition.
Expanding on Sudhi's answer, here's a one-liner:
$ echo "There were $(grep -c ' sudo: ' /var/log/auth.log) attempts to use sudo, $(grep -c ' sudo: .*authentication failure' /var/log/auth.log) of which failed."
There were 17 attempts to use sudo, 1 of which failed.
Your error message arises from missing syntax in your if statement: you need to put the condition in [[ brackets ]].
Using the pattern matching in bash:
#!/bin/bash
sudoCount=0
while IFS= read -r line; do
sudoBool=0
if [[ "$line" = *sudo:* ]]; then
sudoBool=1
(( sudoCount++ ))
# do something with sudobool ?
fi
done < /var/log/auth.log
echo "There were $sudoCount attempts to use sudo."
I'm not intimately familiar with the auth.log -- what is the pattern to determine success or failure?
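If the 'authentication failure' string used in the one-liner above is a reliable failure marker on your system, the loop can count failures as well (a sketch; that pattern is an assumption worth checking against your auth.log):
#!/bin/bash
sudoCount=0
sudoFailCount=0
while IFS= read -r line; do
    if [[ "$line" == *sudo:* ]]; then
        (( sudoCount++ ))
        # assumed failure marker, borrowed from the grep -c example above
        if [[ "$line" == *"authentication failure"* ]]; then
            (( sudoFailCount++ ))
        fi
    fi
done < /var/log/auth.log
echo "There were $sudoCount attempts to use sudo, $sudoFailCount of which failed."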
