Bash script arguments: require or fill in a specific character/string

I am writing a bash script that will output a .tgz file to a specific directory, /tmp/ by default.
I would like to provide an option to override this directory, and I have chosen to do so using arguments provided at the command line:
while getopts d: option
do
case "${option}" in
d) dir=${OPTARG};;
esac
done
As written, this works, but I've run into a snag depending on user input.
The name of my .tgz file is also a variable, and the code that brings this all together is:
output="$dir""$name"
The problem I run into is when the user runs:
./script -d /home/user
My resulting path and filename end up as:
/home/userfilename.tgz
I need to either enforce a requirement for a trailing / or insert one if the user did not provide it.
It works if I change my output variable to
output="$dir"/"$name"
but if the user does provide a trailing /, I end up with something like this, and I am trying to keep the output clean:
/home/user//filename.tgz
Any input would be greatly appreciated.

Add the line
output="${output//\/\///}"
after joining dir and name.
It looks complicated, but all it does is replace each occurrence of two consecutive slashes with a single one.
You can find more about this kind of parameter expansion in the Bash manual.
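For illustration, here is how the whole thing might fit together (a small sketch; the directory and file name are placeholders):
dir=/home/user/                # user-supplied via -d; may or may not end in /
name=filename.tgz
output="$dir/$name"            # may now contain // if $dir ended in /
output="${output//\/\///}"     # collapse each // into a single /
echo "$output"                 # /home/user/filename.tgz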

Related

grep empty output file

I made a shell script the purpose of which is to find files that don't contain a particular string, then display the first line that isn't empty or otherwise useless. My script works well in the console, but for some reason when I try to direct the output to a .txt file, it comes out empty.
Here's my script:
#!/bin/bash
# takes user input.
echo "Input substance:"
read substance
echo "Listing media without $substance:"
cd media
# finds names of files that don't feature the substance given, then puts them inside an array.
searchresult=($(grep -L "$substance" *))
# iterates the array and prints the first line of each - contains both the number and the medium name.
# however, some files start with "Microorganisms" and the actual number and name appear after several empty lines
# the script checks for that occurrence - and prints the first line that doesn't match these criteria.
for i in "${searchresult[@]}"
do
grep -m 1 -v "Microorganisms\|^$" $i
done >> output.txt
I've tried moving the >>output.txt to right after the grep line inside the loop, tried switching >> to > and 2>&1, tried using tee. No go.
I'm honestly feeling utterly stuck as to what the issue could be. I'm sure there's something I'm missing, but I'm nowhere near good enough with this to notice. I would very much appreciate any help.
EDIT: Added files to better illustrate what I'm working with. Sample inputs I tried: Glucose, Yeast extract, Agar. Link to files [140kB] - the folder was unzipped beforehand.
The script was given full permissions to execute. I don't think the output is being overwritten, because even if I don't iterate and just run a single line of the loop, the file is empty.

Iterate through files in a directory, create output files, linux

I am trying to iterate through every file in a specific directory (called sequences), and perform two functions on each file. I know that the functions (the 'blastp' and 'cat' lines) work, since I can run them on individual files. Ordinarily I would have a specific file name as the query, output, etc., but I'm trying to use a variable so the loop can work through many files.
(Disclaimer: I am new to coding.) I believe that I am running into serious problems with trying to use my file names within my functions. As it is, my code will execute, but it creates a bunch of extra unintended files. This is what I intend for my script to do:
Line 1: Iterate through every file in my "sequences" directory. (All of which end with ".fa", if that is helpful.)
Line 3: Recognize the filename as a variable. (I know, I know, I think I've done this horribly wrong.)
Line 4: Run the blastp function using the file name as the argument for the "query" flag, always use "database.faa" as the argument for the "db" flag, and output the result in a new file that is has the same name as the initial file, but with ".txt" at the end.
Line 5: Output parts of the output file from line 4 into a new file that has the same name as the initial file, but with "_top_hits.txt" at the end.
for sequence in ./sequences/{.,}*;
do
echo "$sequence";
blastp -query $sequence -db database.faa -out ${sequence}.txt -evalue 1e-10 -outfmt 7
cat ${sequence}.txt | awk '/hits found/{getline;print}' | grep -v "#">${sequence}_top_hits.txt
done
When I ran this code, it gave me six new files derived from each file in the directory (and they were all in the same directory - I'd prefer to have them all in their own folders. How can I do that?). They were all empty. Their suffixes were, ".txt", ".txt.txt", ".txt_top_hits.txt", "_top_hits.txt", "_top_hits.txt.txt", and "_top_hits.txt_top_hits.txt".
If I can provide any further information to clarify anything, please let me know.
If you're only interested in *.fa files, I would limit your input to only the matching files, like this:
for sequence in sequences/*.fa;
do
I can propose the following improvements:
for fasta_file in ./sequences/*.fa # ";" is not necessary if you already have a new line for your "do"
do
# ${variable%something} expands to $variable with the shortest
# match of "something" removed from its end
# basename path/to/file is the name of the file
# without the leading directories
# $(some command) lets you use the output of the command as a string
# Combining the above, we can form a string based on our fasta file
# This string is useful for naming things in a clean manner later
sequence_name=$(basename "${fasta_file%.fa}")
echo "${sequence_name}"
# Create a directory for the results for this sequence
# the -p option avoids a failure in case the directory already exists
mkdir -p "${sequence_name}"
# Define the name of the file for the results
# (including our previously created directory in its path)
blast_results="${sequence_name}/${sequence_name}_blast.txt"
blastp -query "${fasta_file}" -db database.faa \
-out "${blast_results}" \
-evalue 1e-10 -outfmt 7
# Define a file name for the top hits
top_hits="${sequence_name}/${sequence_name}_top_hits.txt"
# alternatively, using "%":
#top_hits="${blast_results%_blast.txt}_top_hits.txt"
# No need to cat: awk can take a file as an argument
awk '/hits found/{getline;print}' "${blast_results}" \
| grep -v "#" > "${top_hits}"
done
I made more intermediate variables, with (hopefully) meaningful names.
I used \ to escape line ends, which allows splitting a command across several lines.
I hope this improves code readability.
I haven't tested. There may be typos.
You should be using *.fa if you only want files with a .fa ending. Additionally, if you want to redirect your output to new folders, you need to create those directories somewhere using
mkdir 'folder_name'
then you need to redirect your -o outputs into those folders, something like this:
'command' -o /path/to/output/folder
To help you test this script out, you can run each line one by one to test them. You need to make sure each line works by itself before combining.
One last thing, be careful with your use of semicolons; it should look something like this:
for filename in *.fa; do 'command'; done
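Putting those pieces together, a rough sketch might look like this (the results/ layout and the hits.txt name are assumptions, not something from the question):
for filename in sequences/*.fa; do
    # one folder per input file, named after it (hypothetical layout)
    results_dir="results/$(basename "${filename%.fa}")"
    mkdir -p "$results_dir"
    blastp -query "$filename" -db database.faa \
           -out "$results_dir/hits.txt" -evalue 1e-10 -outfmt 7
done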

storing output of ls command when files have spaces in their names

I want to store the output of the ls command in a variable in my bash script and use each file name in a loop. But if one file in the directory is named "Hello world", then with variable=$(ls), "Hello" and "world" end up as two separate entries, and when I try to do
for i in $variable
do
mv $i ~
done
it shows an error saying the files "Hello" and "world" don't exist.
Is there any way I can access all the files in the current directory and run some command on them, even if the files have space(s) in their names?
If you must, dirfiles=(/path/of/interest/*).
And accept the admonition against parsing the output of ls!
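For example (a small sketch; the path is a placeholder):
dirfiles=(/path/of/interest/*)    # glob straight into an array; no ls involved
for f in "${dirfiles[@]}"; do     # quoted expansion keeps each name as one word
    mv -- "$f" ~                  # -- guards against names starting with -
done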
I understand you are new to this, and I'd like to help. But the way you've stated your question makes it hard to provide an answer that would be of much help to you.
From what I can see so far, you don't seem to have a basic understanding of how pathname and parameter expansion work in the shell. The following two links will be useful to you:
Matching Pathnames, Parameters
Now, if your task at hand is to operate on files meeting certain criteria, then find(1) will likely do the job.
Say it with me: don't parse the output of ls! For more information, see this post on Unix.SE.
A better way of doing this is:
for i in *
do
mv -- "$i" ~
done
or simply
mv -- * ~

What does this bash script command mean (sed -e)?

I'm totally new to bash scripting, but I want to solve this problem.
The command is:
objfil=`echo ${srcfil} | sed -e "s,c$,o,"`
The idea of the bash script is to check the source files and see whether there is an adjacent object file in the OBJ directory. If so, the rest of the program runs smoothly; if not, the iteration terminates, skips the current source file, and moves on to the next one. It works with .c files but not with headers, since the object filenames depend on the .c files. I want to change this command so that it checks the object files not just for the .c files but for the .h files too, without skipping them. I know I have to do something else as well, but first I need to understand exactly what this line does so I can move on. Thanks. (Sorry for my English.)
UPDATE:
if test -r ${curOBJdir}/${objfil}
then
cp -v ${srcfil} ./SAVEDSRC/${srcfil}
fdone="NO"
linenums=ALL
else
fdone="YES"
err="${curOBJdir}/${objfil} is missing - ${srcfil} skipped)"
echo ${err}
echo ${err} >>${log}
fi
while test ${fdone} == "NO"
do
#rest of code ...
Here is the rest of the program. I tried to comment out the "test" part to ignore the comparison, because I only want my script to work on .h files without checking whether e.g. abc.h has an abc.o file. (The object file generation is needed because at the end of the script there is a comparison between the hexdumps of the original and the modified object files.) The whole script is for replacing basic types with typedefs, for example int with sint32_t.
This command replaces a c right at the end of the line with an o:
srcfil=abcd.c
objfil=`echo ${srcfil} | sed -e "s,c$,o,"`
echo $objfil
Output:
abcd.o
P.S. It uses a non-default match/replace separator: the default is /, but this command uses , instead.
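As an aside, the same renaming can be done with plain parameter expansion instead of sed (an equivalent sketch, not part of the original script):
srcfil=abcd.c
objfil="${srcfil%c}o"    # strip a trailing "c", append "o"
echo "$objfil"           # abcd.o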

All files in one dir, Linux

Today I tried a script on Linux to get all the files in one directory. It was pretty straightforward, but I found something interesting.
#!/bin/bash
InputDir=/home/XXX/
for file in $InputDir'*'
do
echo $file
done
The output is:
/home/XXX/fileA /home/XXX/fileB
But when I just input the dir directly, like:
#!/bin/bash
InputDir=/home/XXX/
for file in /home/XXX/*
do
echo $file
done
The output is:
/home/XXX/fileA
/home/XXX/fileB
It seems that in the first script there was only one loop iteration, and all the file names were stored in the variable $file in that FIRST iteration, separated by spaces. But in the second script, one file name was stored in $file per iteration, and there was more than one iteration. What exactly is the difference between these two scripts?
Thanks very much; maybe my question is a little bit naive.
The behavior is correct and "as expected".
for file in $InputDir'*' means: assign "/home/XXX/*" to $file (note the quotes). Since you quoted the asterisk, it is not expanded at this point. When the shell sees echo $file, it first expands the variable and then does glob expansion. So after the first step, it sees
echo /home/XXX/*
and after glob expansion, it sees:
echo /home/XXX/fileA /home/XXX/fileB
Only now does it execute the command.
In the second case, the pattern /home/XXX/* is expanded before the for is executed and thus, each file in the directory is assigned to file and then the body of the loop is executed.
This will work:
for file in "$InputDir"*
but it's brittle; it will fail, for example, when you forget to add a / to the end of the variable $InputDir.
for file in "$InputDir"/*
is a little bit better (Unix will ignore double slashes in a path) but it can cause trouble when $InputDir is not set or empty: You'll suddenly list files in the / (root) folder. This can happen, for example, because of a typo:
inputDir=...
for file in "$InputDir"/*
Case matters on Unix :-)
To help you understand code like this, use set -x ("enable tracing") in a line before the code you want to debug.
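For instance (a minimal sketch; the directory is a placeholder):
set -x                    # from here on, print each command after expansion
InputDir=/home/XXX
for file in "$InputDir"/*
do
    echo "$file"
done
set +x                    # tracing off again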
The difference is the quoting of '*'. In the first case the loop only executes once, with $file equal to /home/XXX/* which then expands to all the files in the directory when passed to echo. In the second case it executes once per file, with $file equal to each file name in turn.
Bottom line - change:
for file in $InputDir'*'
to:
for file in $InputDir*
or, better, and to make it more readable - change:
InputDir=/home/XXX/
for file in $InputDir'*'
to:
InputDir=/home/XXX
for file in $InputDir/*
