BAT script to search a directory for folders that match an input name

I'm not sure what's the best (or a better) way to write a bat script that takes a name as input and searches a directory to see whether a matching folder exists.
Do I need to output the directory list to a file first before running a comparison?
And if it makes a difference, the directory is in a repository, so for my purposes I'll be using 'svn list'; but I thought a general solution would be nice for everyone.

If you are testing for a specific name within a directory, then you would generally do something like:
if exist name (echo found it) else (echo not found)
Or if the name is incomplete
if exist *name* (echo found it) else (echo not found)
If you are testing for a name anywhere within a directory tree, then I would use
dir /s /a-d *name* >nul && (echo found it) || (echo not found)
If you are issuing a command that generates lines of output and you want to test if a name exists within any one line, then Windows pipes generally work fine, as long as the size of the output is not huge and the 2nd half can keep up with the 1st half.
yourCommand | find "name" >nul && (echo found it) || (echo not found)
But pipes become inefficient if the 2nd half is slow compared to the 1st half, and a lot of data must be buffered. In that case it is definitely better to use a temp file instead of a pipe. I incorporate a random number into the temp file name to guard against possible collision of multiple processes using the same temp directory.
set tempFile="%temp%\myTempFileBaseName%random%.txt"
yourCommand >%tempFile%
<%tempFile% find "name" >nul && (echo found it) || (echo not found)
del %tempFile%
I generally use pipes unless I know I have a performance issue.

Linux script variables to SCP and delete files

I am looking to set up a script to do the following:
1st: SCP a directory on the first day of the month to another server
2nd: Delete the directory after successful transfer
The directory I need to move will always have a different name, and the lowest numbered one is always the one that needs to move:
2018/files/02/
2018/files/03/
So what I'm looking to write up is something like:
scp /2018/files/% user@host:/backups/2018/files/
{where % = lowest num} &&
rm -rf /2018/files/%
{where % = lowest num} &&
exit
Thanks for any advice
If you are open to using Ruby, you could accomplish it with something like this:
def file_number(filespec)
  filespec.split('/').last.to_i
end
directories = Dir['/2018/files/*'].select { |f| File.directory?(f) }
sorted_dirs = directories.sort_by { |dir| file_number(dir) }
dir_to_copy = sorted_dirs.first
destination_dir = File.join('/', 'backups', dir_to_copy)
`scp -r #{dir_to_copy} user@host:#{destination_dir}`
`rm -rf #{dir_to_copy}`
I have not tested this, but if you have any problems, let me know what they are and I can work through it with you.
While using shell scripting eliminates the need for the Ruby interpreter, to me the code is not nearly as straightforward.
In very large directory lists (maybe 10,000's?) the sort might be intolerably slow, and another method would be needed to optimize for speed.
I would caution you against doing an unconditional rm -rf after the backup -- that seems really risky to me.
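If you do want the delete, one way to make it safer is to chain it on the exit status of the copy, so it only runs after a successful transfer. A minimal shell sketch (user@host and the paths are the question's placeholders):
dir_to_copy=/2018/files/02/
if scp -r "$dir_to_copy" user@host:/backups/2018/files/; then
    rm -rf "$dir_to_copy"    # delete only if scp reported success
else
    echo "transfer failed; keeping $dir_to_copy" >&2
fi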
The big challenge here is to actually find the right files to copy and, shudder, delete. So let us call that step 0.
Let's start with some boilerplate:
sourceD=/2018/files/
targetD=/backups/2018/files/
And a little assertion, which bails out of the script if $1 does not name a directory.
assert_directory() { (cd ${1:?directory name}) || exit; }
step 0: Identify directory:
assert_directory $sourceD
to_be_archived=$(
# source must be two characters, hence "??"
# source must be a directory, hence trailing "/"
# pathname expansion sorts its matches before set -- sees them
# First match must be our source
set -- $sourceD/??/ &&
assert_directory "$1"
echo ${1:?nothing found}
) || exit
This is only a couple of lines of condensed code. Note that this may cause trouble if you (accidentally) run this multiple times in a row.
Step 1, copying the files, now appears to be the easy part.
scp -r ${to_be_archived:?} user@host:${targetD:?}
This is a simple method for copying files, but it is also slow and risky. Look up rsync over ssh for alternatives.
Step 2, Remove
The rm -fr line will do the job, but I won't include that here.
We are missing an essential step, as we need to make sure that our files have arrived safely. Again, rsync has options for that.
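For example, a sketch using rsync over ssh (the flags are standard rsync options; host and variables as above). --remove-source-files deletes each source file only after it has been transferred successfully, though it leaves the emptied directories behind:
rsync -av --remove-source-files "${to_be_archived:?}" user@host:"${targetD:?}"
# clean up the empty directories rsync leaves behind (GNU find)
find "${to_be_archived:?}" -type d -empty -delete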
In summary:
assert_directory() { (cd ${1:?directory name}) || exit; }
assert_directory $sourceD
to_be_archived=$(
set -- $sourceD/??/ &&
assert_directory "$1"
echo ${1:?nothing found}
) || exit
This will give you the first two-character name directory (if one exists) in sourceD or abort the running script. It will break if $sourceD contains spaces.
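If spaces are a concern, quoting the variable expansions while leaving the glob characters unquoted fixes that; a sketch of the same summary:
assert_directory() { (cd "${1:?directory name}") || exit; }
assert_directory "$sourceD"
to_be_archived=$(
set -- "${sourceD}"??/ &&
assert_directory "$1"
echo "${1:?nothing found}"
) || exit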

"read" command not executing in "while read line" loop [duplicate]

First post here! I really need help on this one; I looked the issue up on Google, but couldn't find an answer that works for me. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script requires, I created an "args.conf" file that must be in every module, and it kinda looks like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines whether it's required or not, and the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line and ask the user for a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
echo $line > line.tmp
arg=`cut -d ";" -f 1 line.tmp`
requ=`cut -d ";" -f 2 line.tmp`
if [ $requ = "true" ]; then
echo "[This argument is required]"
else
echo "[This argument isn't required, leave a blank space if you don't wan't to use it]"
fi
read -p " $arg=" answer
echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user for a value for every argument... But:
1) The read command seems not to execute. It just gets skipped, and the argument ends up with no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to execute just a single time. All I see on the screen is "[This argument is required]" just once, and then the module launches (and crashes because it doesn't have the required arguments...).
I really don't know what to do here... I hope someone has an answer ^^'.
Thanks in advance!
(and sorry for any mistakes, I'm French)
Alpha.
As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
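For example, a common idiom for that (a sketch; $module_root is the name used below, and $0 is assumed to hold the path the script was invoked by):
module_root="$(cd "$(dirname "$0")" && pwd)" || {
echo "cannot determine script directory" >&2
exit 1
}
# ...then reference everything from it:
# < "$module_root/modules/$name/args.conf"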
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
echo 'command1 failed!' >&2
exit 1
}
if command2; then
echo 'command2 succeeded!' >&2
else
echo 'command2 failed!' >&2
exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system.XXXXXXXX)" || {
echo "Error creating temp directory" >&2
exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
Speaking of which, you're overusing temp files; a lot of what you're doing with them can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, storing them in a temp file, and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
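Here's a tiny illustration of that behavior (a toy example, using one line in your args.conf format):
line='LHOST;true;The IP the remote payload will connect to.'
IFS=";" read -r arg requ description <<< "$line"
echo "$arg"      # prints: LHOST
echo "$requ"     # prints: true
# IFS itself is unchanged once the read command has finished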
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
arglist+=("$answer") # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[#]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.
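Putting these suggestions together, the loop might end up looking something like this (a sketch only; info, $name, $interpreter, and $file come from your original script, and $module_root is assumed to be set up as described above):
info "Reading module $name argument list..."
arglist=()
while IFS=";" read -u3 arg requ description; do
    if [ "$requ" = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank if you don't want to use it]"
    fi
    read -p " $arg=" answer
    arglist+=("$answer")
done 3< "$module_root/modules/$name/args.conf"
"$module_root/modules/$name/$interpreter" "$file" "${arglist[@]}"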

Conversion of dpn, type commands from Windows to bash

I recently switched to Linux (from Windows), and I find it difficult to translate the following Windows command to a Linux bash script.
The windows command was:
set /p cutoff=Set BLAST E-Value Cutoff[1e-]:
for %%F in (*.fa) do program.exe -parameter1 %%F -parameter2_cutoff 1e-%cutoff% -output_file %%~dpnF.fas & type %%F %%~dpnF.fas > %%~dpnF.txt
This script takes a numeric value from the user and uses it to run a program on every .fa file in a folder with the desired cutoff. Here %%~dpnF takes only the filename (without the file extension). In this script, I join the content of each input file (.fa) and its generated output (.fas) and merge them into the final output (.txt). So for each input file, there will be one final output file.
To run it in Ubuntu, I tried:
echo "Set BLAST E-Value Cutoff[1e-]:"
read cutoff
for $f in *.fa; do program -parameter1 $f -parameter2_cutoff 1e-$cutoff -output_file $~dpnF.fas & cat $f $~dpnF.fas > $~dpnF.txt; done
Immediately it shows that Linux does not support the Windows dpn-style expansion, and the script terminates abruptly, showing no output.
I understand that the different file extensions are not very meaningful in Linux, but I have to keep them this way so other programs can process the files.
I appreciate any type of help.
Thanks
The sequence %~dpn is used to get:
%~d - The drive
%~p - The path
%~n - The file name
Check the meaning of all expansions here.
The drive has no meaning in Linux. The path, full or partial, could be extracted with the command dirname and the filename could be extracted with the command basename.
The sequence %%~dpn means to get the whole pathname from root (/).
In fact, you do not need that in Linux: if a list of files is created with *.fa, the names will be relative to the "present working directory" (command pwd), so there is no need to extend them.
And to strip the extension from a filename, use ${f%.*}.
That cuts the string in "$f" at the last dot (.), removing the dot and anything that follows it (*).
Then just add the extension you want: ${f%.*}.fas
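A quick illustration (with a made-up file name):
f=sample.fa
echo "${f%.*}"        # prints: sample
echo "${f%.*}.fas"    # prints: sample.fas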
Also, the character & has the meaning of "run the previous command in the background", which is not what you want.
And finally, the for $f should be replaced by for f.
This is a cleaner translation:
echo "Set BLAST E-Value Cutoff[1e-]:"
read cutoff
for f in *.fa; do
program -parameter1 "$f" \
-parameter2_cutoff "1e-$cutoff" \
-output_file "${f%.*}.fas"
cat "$f" "${f%.*}.fas" > "${f%.*}.txt"
done

bash -- copying and changing filenames

I need to copy all files from
/dirA/[NAME].20151231.txt
to
/dirB/20151231.[NAME].txt
and
/dirC/20151231/[NAME].txt
i.e. I need to copy the files, but change the name.
You can assume that I know the "date" string before hand, so we can assume 20151231 is a supplied argument.
if I have a list of names, I can do something like
for n in $names; do cp /dirA/$n.$date.txt /dirB/$date.$n.txt; done
But what if I dont have a list of names? I am looking for an elegant solution as extracting them from dirA sounds a bit cumbersome.
Thanks!
A reasonably reliable way of processing this material is:
date=20151231
cd /dirA || exit 1
mkdir -p "/dirC/$date" || exit 1
for file in *."$date.txt"
do
name="${file%.$date.txt}"
cp "$file" "/dirB/$date.$name.txt"
cp "$file" "/dirC/$date/$name.txt"
done
The cd operation is checked; if it fails, there is no point in continuing. Likewise, the mkdir -p operation ensures that the dated directory under /dirC exists, or the script exits. The relevant error messages are already generated by cd and mkdir.
Using the shell globbing to generate the file names is best; it avoids issues with 'what happens if the file name contains spaces (or newlines, or other unexpected characters)'.
The assignment extracts the '[NAME]' portion of the file name. This is then used to copy the file from /dirA to the relevant locations under /dirB and /dirC. It would be feasible to check that /dirB and /dirC also exist if you thought that was necessary.
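If you wanted that check, a small sketch in the same style as the cd and mkdir checks above:
for d in /dirB /dirC; do
    [ -d "$d" ] || { echo "missing directory: $d" >&2; exit 1; }
done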
Maybe I am just awful at asking questions. What I was looking for was a "sed for file names". And I found the answer -- that's rename.
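For the record, a sketch with the Perl flavor of rename (the one packaged as rename or prename on Debian/Ubuntu; the util-linux rename takes different arguments). Note that rename moves rather than copies, and -n only prints what would happen without renaming anything:
rename -n 's|^/dirA/(.*)\.20151231\.txt$|/dirB/20151231.$1.txt|' /dirA/*.20151231.txt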

how to print the output/error to a text file?

I'm trying to redirect(?) my standard error/output to a text file.
I did my research, but for some reason the online answers are not working for me.
What am I doing wrong?
cd /home/user1/lists/
for dir in $(ls)
do
(
echo | $dir > /root/user1/$dir" "log.txt
) > /root/Desktop/Logs/Update.log
done
I also tried
2> /root/Desktop/Logs/Update.log
1> /root/Desktop/Logs/Update.log
&> /root/Desktop/Logs/Update.log
None of these work for me :(
Help please!
Try this for the basics:
echo hello >> log.txt 2>&1
Could be read as: echo the word hello, redirecting and appending STDOUT to the file log.txt. STDERR (file descriptor 2) is redirected to wherever STDOUT is being pointed. Note that STDOUT is the default and thus there is no "1" in front of the ">>". Works on the current line only.
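One caveat worth knowing (a small illustration; somecommand stands for any command): the order of the redirections matters, because 2>&1 duplicates wherever STDOUT points at that moment.
somecommand >> log.txt 2>&1   # STDOUT and STDERR both appended to log.txt
somecommand 2>&1 >> log.txt   # STDERR goes to the terminal; only STDOUT is appended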
To redirect and append all output and error of all commands in a script, put this line near the top. It will be in effect for the length of the script instead of doing it on each line:
exec >>log.txt 2>&1
If you are trying to obtain a list of the files in /home/user1/lists, you do not need a loop at all:
ls /home/user1/lists/ >Update.log
If you are attempting to run every file in the directory as an executable with a newline as its input, and collect the output from all these programs in Update.log, try this:
for file in /home/user1/lists/*; do
echo | "$file"
done >Update.log
(Notice how we avoid the useless use of ls and how there is no redirection inside the loop.)
If you want to create an empty file called *.log.txt for each file in the directory, you would do
for file in /home/user1/lists/*; do
touch "$(basename "$file")"log.txt
done
(Using basename to obtain the file name without the directory part avoids the cd but you could do it the other way around. Generally, we tend to avoid changing the directory in scripts, so that the tool can be run from anywhere and generate output in the current directory.)
If you want to create a file containing a single newline, regardless of whether it already exists or not,
for file in /home/user1/lists/*; do
echo >"$(basename "$file")"log.txt
done
In your original program, you redirect the echo inside the loop, which means that the redirection after done will not receive any output at all, so the created file will be empty.
These are somewhat wild guesses at what you might actually be trying to accomplish, but should hopefully help nudge you slightly in the right direction. (This should properly be a comment, I suppose, but it's way too long and complex.)
