I've created a (Makefile-oriented) include file which contains some variables meant to be used by other makefiles.
# this is myinclude/makevars file
MY_FOLDER:=$(ROOT_FOLDER)/my/folder
ANOTHER_FOLDER:=$(MY_FOLDER)/another/folder
MY_LIB:=$(ANOTHER_FOLDER)/lib
this "include file" works just great if I include it in other makefiles:
include myinclude/makevars
but it would be great if I could include it in shell scripts too!
Currently I've created another, very similar file (myinclude/shellvars) that is shell-oriented:
# this is myinclude/shellvars file
MY_FOLDER=$ROOT_FOLDER/my/folder
ANOTHER_FOLDER=$MY_FOLDER/another/folder
MY_LIB=$ANOTHER_FOLDER/lib
By including this in my shell scripts everything works, but now I have a duplicated file with (semantically) the same info!
Any idea how to merge these two files (myinclude/makevars and myinclude/shellvars) into one? Is there any special syntax?
Any help is appreciated!
-- kasper!
Try this:
eval "$(cat makevars.inc | tr -d '(:)')"
echo "$MY_LIB"
This reads the text of the include file, deletes every colon and parenthesis from it, and then executes the result as shell code.
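For example, assuming ROOT_FOLDER is already set in the shell (the value below is purely illustrative), the stripped text that eval executes is plain shell syntax:
ROOT_FOLDER=/opt/proj    # illustrative value
eval "$(tr -d '(:)' < myinclude/makevars)"
# after tr, eval sees:
#   MY_FOLDER=$ROOT_FOLDER/my/folder
#   ANOTHER_FOLDER=$MY_FOLDER/another/folder
#   MY_LIB=$ANOTHER_FOLDER/lib
echo "$MY_LIB"    # /opt/proj/my/folder/another/folder/lib
Note that tr deletes every colon and parenthesis, so this trick only works as long as the variable values themselves contain neither.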
Related
I am trying to iterate through every file in a specific directory (called sequences), and perform two functions on each file. I know that the functions (the 'blastp' and 'cat' lines) work, since I can run them on individual files. Ordinarily I would have a specific file name as the query, output, etc., but I'm trying to use a variable so the loop can work through many files.
(Disclaimer: I am new to coding.) I believe that I am running into serious problems with trying to use my file names within my functions. As it is, my code will execute, but it creates a bunch of extra unintended files. This is what I intend for my script to do:
Line 1: Iterate through every file in my "sequences" directory. (All of which end with ".fa", if that is helpful.)
Line 3: Recognize the filename as a variable. (I know, I know, I think I've done this horribly wrong.)
Line 4: Run the blastp function using the file name as the argument for the "query" flag, always use "database.faa" as the argument for the "db" flag, and output the result in a new file that has the same name as the initial file, but with ".txt" at the end.
Line 5: Output parts of the output file from line 4 into a new file that has the same name as the initial file, but with "_top_hits.txt" at the end.
for sequence in ./sequences/{.,}*;
do
echo "$sequence";
blastp -query $sequence -db database.faa -out ${sequence}.txt -evalue 1e-10 -outfmt 7
cat ${sequence}.txt | awk '/hits found/{getline;print}' | grep -v "#">${sequence}_top_hits.txt
done
When I ran this code, it gave me six new files derived from each file in the directory (and they were all in the same directory - I'd prefer to have them all in their own folders. How can I do that?). They were all empty. Their suffixes were, ".txt", ".txt.txt", ".txt_top_hits.txt", "_top_hits.txt", "_top_hits.txt.txt", and "_top_hits.txt_top_hits.txt".
If I can provide any further information to clarify anything, please let me know.
If you're only interested in *.fa files, I would limit your input to just those matching files, like this:
for sequence in sequences/*.fa;
do
I can propose the following improvements:
for fasta_file in ./sequences/*.fa # ";" is not necessary if you already have a new line for your "do"
do
# ${variable%something} is the part of $variable
# before the string "something"
# basename path/to/file is the name of the file
# without the full path
# $(some command) allows you to use the result of the command as a string
# Combining the above, we can form a string based on our fasta file
# This string can be useful to name stuff in a clean manner later
sequence_name=$(basename ${fasta_file%.fa})
echo ${sequence_name}
# Create a directory for the results for this sequence
# -p option avoids a failure in case the directory already exists
mkdir -p ${sequence_name}
# Define the name of the file for the results
# (including our previously created directory in its path)
blast_results=${sequence_name}/${sequence_name}_blast.txt
blastp -query ${fasta_file} -db database.faa \
-out ${blast_results} \
-evalue 1e-10 -outfmt 7
# Define a file name for the top hits
top_hits=${sequence_name}/${sequence_name}_top_hits.txt
# alternatively, using "%"
#top_hits=${blast_results%_blast.txt}_top_hits.txt
# No need to cat: awk can take a file as argument
awk '/hits found/{getline;print}' ${blast_results} \
    | grep -v "#" > ${top_hits}
done
I made more intermediate variables, with (hopefully) meaningful names.
I used \ to escape line ends and allow putting commands in several lines.
I hope this improves code readability.
I haven't tested. There may be typos.
You should be using *.fa if you only want files with a .fa ending. Additionally, if you want to redirect your output to new folders you need to create those directories somewhere using
mkdir 'folder_name'
then you need to point your output (the -o flag here, -out in blastp's case) at paths inside those folders, something like this
'command' -o /path/to/output/folder
To help test this script, you can run each line one by one; make sure each line works by itself before combining them.
One last thing: be careful with your use of semicolons. It should look something like this:
for filename in *.fa; do 'command'; done
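Putting those pieces together, a minimal sketch (with 'command' again standing in for the actual tool, e.g. blastp with its -out flag, and the results directory name being just an example):
for filename in sequences/*.fa; do
    name=$(basename "${filename%.fa}")    # sequences/abc.fa -> abc
    mkdir -p "results/${name}"            # one folder per input file
    'command' -o "results/${name}/output.txt" "$filename"
done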
I am trying to create a zsh script to test my project. The teacher supplied us with some input files and expected output files. I need to diff the output files from myExecutable with the expected output files.
Question: Does $iF contain a string in the following code or some kind of bash reference to the file?
#!/bin/bash
inputFiles=~/project/tests/input/*
outputFiles=~/project/tests/output
for iF in $inputFiles
do
./myExecutable $iF > $outputFiles/$iF.out
done
Note:
Any tips on fulfilling my objectives would be nice. I am new to shell scripting, and I am using the following websites to quickly write the script (since I have to focus on the project development and not waste time on extra stuff):
Grammar for bash language
Beginner guide for bash
As your code is written, $iF contains the full path of the file as a string.
N.B.: Don't use for iF in $inputFiles
use for iF in ~/project/tests/input/* instead. Otherwise your code will fail if a path contains spaces or newlines.
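Applied to your loop (a sketch, keeping your $outputFiles variable and using basename, mentioned below, so the .out file name doesn't embed the full input path):
for iF in ~/project/tests/input/*
do
    ./myExecutable "$iF" > "$outputFiles/$(basename "$iF").out"
done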
If you need to diff the files, you can run another for loop over your output files. Grab just the file name with the basename command, then put it all together in a diff and write to a ".diff" file using the ">" operator to redirect standard out.
Then diff each one with the expected file, something like:
expectedOutput=~/<some path here>
diffFiles=~/<some path>
for oF in ~/project/tests/output/* ; do
file=$(basename "${oF}")
diff $oF "${expectedOutput}/${file}" > "${diffFiles}/${file}.diff"
done
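As a side note (not part of the answer above): if you only need a pass/fail signal rather than the full diff text, diff exits with status 0 exactly when the files match, so inside the same loop you can test it directly:
if diff -q "$oF" "${expectedOutput}/${file}" > /dev/null
then
    echo "PASS: ${file}"
else
    echo "FAIL: ${file}"
fi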
I'm totally new to bash scripting, but I want to solve this problem.
The command is:
objfil=`echo ${srcfil} | sed -e "s,c$,o,"`
The idea of the bash script is to check the source files and see whether there is an adjacent object file in the OBJ directory. If so, the rest of the program runs smoothly; if not, the iteration terminates, skips the current source file, and moves on to the next one. It works with .c files but not with headers, since the object filenames are derived from the .c files. I want to rewrite this command so it checks the object files not just for the .c files but for the .h files too, without skipping them. I know I have to do something else as well, but first I need to understand exactly what this line of command does. Thanks. (Sorry for my English.)
UPDATE:
if test -r ${curOBJdir}/${objfil}
then
cp -v ${srcfil} ./SAVEDSRC/${srcfil}
fdone="NO"
linenums=ALL
else
fdone="YES"
err="${curOBJdir}/${objfil} is missing - ${srcfil} skipped)"
echo ${err}
echo ${err} >>${log}
fi
while test ${fdone} == "NO"
do
#rest of code ...
Here is the rest of the program. I tried to comment out the "test" part to skip the comparison, because I want my script to also work on .h files without checking whether, e.g., abc.h has an abc.o file. (The object file generation is needed because at the end of the script there's a comparison between the hexdumps of the original and modified object files.) The whole script is for replacing basic types with typedefs, for example int with sint32_t.
This command substitutes a c immediately before the end of the line with an o:
srcfil=abcd.c
objfil=`echo ${srcfil} | sed -e "s,c$,o,"`
echo $objfil
Output:
abcd.o
P.S. It uses a non-default match/replace separator: the default is /, but this command uses ,.
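As an aside (not part of the original command), the same renaming can be done in pure bash with parameter expansion, which strips a trailing .c and appends .o:
srcfil=abcd.c
objfil=${srcfil%.c}.o    # remove the ".c" suffix, then append ".o"
echo $objfil             # abcd.o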
I have a problem where my config files' contents are placed within my deployment script, because they get their settings from my settings.sh file. This causes my deployment script to be very large and bloated.
I was wondering if it would be possible in bash to do something like this
settings.sh
USER="Tom"
log.conf
log=/$PLACEHOLDER_USER/full.log
deployment.sh
#!/bin/bash
# Pull in settings file
. ./settings.sh
# Link config to right location
ln -s /home/log.conf /home/logging/log.conf
# Write variables on top of placeholder variables in the file
for $PLACEHOLDER_* in /home/logging/log.conf
do
(Replace $PLACEHOLDER_<VARIABLE> with $VARIABLE)
done
I want this to work for any variable found in the config file which starts with $PLACEHOLDER_
This process would allow me to move a generic config file from my repository and then add the proper variables from my setting file on top of the placeholder variables in the config.
I'm stuck on how I can get this to actually work using my deployment.sh.
This small script reads each NAME=VALUE line from settings.sh and replaces the corresponding $PLACEHOLDER_NAME in the target file. Does this help you?
while IFS== read -r variable value
do
    value=${value%\"} ; value=${value#\"}    # strip surrounding quotes, if any
    sed -i "s/\$PLACEHOLDER_$variable/$value/g" file
done < settings.sh
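With the settings.sh and log.conf shown in the question, this turns log=/$PLACEHOLDER_USER/full.log into log=/Tom/full.log (replace file in the sed line with the actual path of your config, e.g. /home/logging/log.conf).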
#!/usr/bin/env bash
set -x
ln -s /home/log.conf /home/logging/log.conf
while read -r user
do
    # strip the USER=" prefix and the trailing quote
    usertmp=$(echo "${user}" | sed 's#USER="##' | sed 's#"$##')
    user="${usertmp}"
    log="${user}"/full.log
done < settings.sh
I don't really understand the rest of what you're trying to do, I will confess, but this will hopefully give you the idea. Use read.
I am writing a bash script that will output a .tgz file to a specific directory, /tmp/ by default
I would like to provide an option to override this directory and I have chosen to do so using arguments provided at the command line
while getopts d: option
do
case "${option}" in
d) dir=${OPTARG};;
esac
done
As written, this works but I've run into a snag depending on user input
The name of my .tgz file is also a variable and my code that brings this all together is
output="$dir""$name"
The problem that I run into is if the user runs
./script -d /home/user
My resulting path and filename end up as
/home/userfilename.tgz
I need to either enforce a requirement for a trailing / or insert one if the user did not provide it.
It works if I change my output variable to
output="$dir"/"$name"
but if the user does provide a trailing /, I end up with something like this, and I am trying to keep my output clean:
/home/user//filename.tgz
Any input would be greatly appreciated.
Add the line
output="${output//\/\///}"
after joining dir and name.
It looks complicated, but all it does is replace every doubled slash (//) with a single one.
You may find more info here.
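For example, here is a minimal sketch of the whole flow:
dir=/home/user/                  # the user may or may not add the trailing slash
name=filename.tgz
output="$dir/$name"              # may now contain "//"
output="${output//\/\///}"       # collapse every "//" into "/"
echo "$output"                   # /home/user/filename.tgz
An alternative using the same parameter-expansion machinery is to strip any trailing slash from $dir before joining: output="${dir%/}/$name".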