How to source a variable from script into shell without executing the script - linux

I have the below bash script:
FILES="file_1 file_2 \
file_3"
export FILES
run_test
It defines a FILES variable across multiple lines (with a line continuation) and then calls another script.
I want to source the script so that the variable FILES gets defined in my shell, but without calling the other script "run_test".
I tried to grep the FILES assignment out of the script, but that captures only the first line of the value.
Any recommendations please?

You might want to consider an env.sourceme file that holds the shared settings such as FILES=.
This file can be sourced by both your original and your new script.
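For example, env.sourceme could hold just the settings (no commands):
FILES="file_1 file_2 \
file_3"
export FILES
Both scripts then begin with source ./env.sourceme, and only the original one goes on to call run_test.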
When you can't (or don't want to) change the original file, you need to do something else.
If your sed supports -z (a GNU extension), you can use:
source <(sed -rz 's/.*(FILES="[^"]*").*/\1/' inputscript)
You can use grep if you flatten inputscript to a single line first:
source <(tr "\n" "\r" < inputscript | grep -Eo 'FILES="[^"]*"' | tr "\r" "\n")
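Either way, you can verify the result; the line continuation is folded away when the assignment is sourced:
echo "$FILES"
# file_1 file_2 file_3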

Related

Escaping quotes in bash (Embedded awk)

I have a complex command I am passing via ssh to a remote server. I am trying to unzip a file and then change its naming structure and extension in a second ssh command. The command I have is:
ssh root@server1 "gzip -d /tmp/file.out-20171119.gz; echo file* | awk -F'[.-]' '{print $1$3".log"}'"
Obviously the " around the .log portion of the print statement are failing me. The idea is that I would strip the .out portion from the filename and end up with file20171119.log as an ending result. I am just a bit confused on the syntax or on how to escape that properly so bash interprets the .log appropriately.
The easiest way to deal with this problem is to avoid it. Don't bother trying to escape your script to go on a command line: Pass it on stdin instead.
ssh root@server1 bash -s <<'EOF'
gzip -d /tmp/file.out-20171119.gz
# note that (particularly w/o a cd /tmp) this doesn't do anything at all related to the
# line above; thus, probably buggy as given in the original question.
echo file* | awk -F'[.-]' '{print $1$3".log"}'
EOF
A quoted heredoc -- one with <<'EOF' or <<\EOF instead of <<EOF -- is passed literally, without any shell expansions; thus, $1 or $3 will not be replaced by the calling shell as they would with an unquoted heredoc.
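The difference is easy to demonstrate locally:
cat <<'EOF'
$HOME is printed literally here
EOF
cat <<EOF
$HOME is expanded here
EOF
The first prints the $HOME text verbatim; the second substitutes your home directory.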
If you don't want to go the avoidance route, you can have the shell do the quoting for you itself. For example:
external_function() {
    gzip -d /tmp/file.out-20171119.gz
    echo file* | awk -F'[.-]' '{print $1$3".log"}'
}
ssh root@server1 "$(declare -f external_function); external_function"
declare -f prints a definition of a function. Putting that function literally into your SSH command ensures that it's run remotely.
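Running declare -f external_function locally shows roughly what gets shipped over SSH (bash reprints the function in its own canonical layout):
external_function ()
{
    gzip -d /tmp/file.out-20171119.gz;
    echo file* | awk -F'[.-]' '{print $1$3".log"}'
}
The ; external_function appended to the command string then invokes it on the remote side.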
You need to escape the double quotes to prevent them from closing your quoted string early, and you need to escape the $ in the awk script to prevent local parameter expansion.
ssh root@server1 "gzip -d /tmp/file.out-20171119.gz; echo file* | awk -F'[.-]' '{print \$1\$3\".log\"}'"
The most probable reason (you don't show the contents of root's home directory on the server) is that you are uncompressing the file in the /tmp directory, but feeding awk filenames that would have to exist in root's home directory.
Double quotes allow escape sequences with \, so the correct way to do it is:
ssh root@server1 "gzip -d /tmp/file.out-20171119.gz; echo file* | awk -F'[.-]' '{print \$1\$3\".log\"}'"
(like you wrote in your question). This means the following command is executed by a shell on the server machine:
gzip -d /tmp/file.out-20171119.gz; echo file* | awk -F'[.-]' '{print $1$3".log"}'
You are executing two commands: the first gunzips /tmp/file.out-20171119.gz (beware, as it will be gunzipped in /tmp). The second is the likely source of the problem: it echoes all the files in the current directory (that is, root's home directory, probably /root on the server) whose names begin with file (probably none), and feeds that to the awk command.
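If the glob is supposed to match the file you just gunzipped, a cd makes the two commands agree on a directory; a sketch building on the escaped command above:
ssh root@server1 "cd /tmp && gzip -d file.out-20171119.gz && echo file* | awk -F'[.-]' '{print \$1\$3\".log\"}'"
Now echo file* runs in /tmp, where file.out-20171119 actually lives.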
As a general rule: test your command locally, and once it works locally, escape all the special characters that must survive being parsed by the first shell.
Another way to solve the problem is to use gzip(1) as a filter, so you can decide the name of the output file:
ssh root@server1 "gzip -d </tmp/file.out-20171119.gz >file20171119.log"
This way you save an awk(1) execution whose only job is to format the output filename. Or, if you take the date from an environment variable:
DATE=$(date +%Y%m%d)
ssh root@server1 "gzip -d </tmp/file.out-${DATE}.gz >file${DATE}.log"
Finally, let me give some advice: don't use /tmp to uncompress files. Several distributions mount /tmp as a high-speed temporary filesystem. It is normally RAM based and quick, but limited in space, so uncompressing a log file there can fill up the kernel memory devoted to that filesystem, which is not a good idea; log files also tend to expand a lot. Moreover, /tmp is a system-wide shared directory, where other users can store files named file<something> that clash with yours (in case you do searches with wildcard patterns, like you do in your command). Also, once you know the name of the file, it is common to assign it to an environment variable and use that variable everywhere, so if you need to change the filename format, you do it in only one place.

redirect the result of a program

There is a program that I run from the command line. Its output is a file. I have to run the program with various parameters, so I always have to change the output filename (otherwise it is always the same and the older file is automatically overwritten) and run the program again and again. I tried:
./program param1 param2 > result1.txt
but, not surprisingly,
cat result1.txt
only shows what the program printed while it ran, not its result. I need a command line that will automatically rename the output file at the end of the program.
I cannot change the program code.
Thanks
You can wrap your command in another script that does something like:
#!/bin/bash
PARAM_1="$1"
PARAM_2="$2"
CMD="./program"
"$CMD" "$PARAM_1" "$PARAM_2" > "result-${PARAM_1}-${PARAM_2}"
The script calls your command and redirects the output to a filename that depends on the input parameters.
This works with 2 parameters, but it can easily be generalised.
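For instance, saved as wrapper.sh (name hypothetical) and called as:
./wrapper.sh foo bar
it runs ./program foo bar and writes the output to result-foo-bar.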
UPDATE:
I just thought of a different version that uses MD5 for the output filename, so that it stays consistent even with long, messy parameters, and it is also valid for any number of params:
#!/bin/bash
# Hash all the parameters so the output name is short and consistent
HASH="$(echo "$@" | md5sum | cut -f1 -d' ')"
CMD="./program"
"$CMD" "$@" > "result-$HASH.txt"
Just rename the output file using a nanosecond-resolution date, as in:
mv result.txt "result-$(date --rfc-3339=ns).txt"
at the end of your script.
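If you have to run the whole series of parameters anyway, a loop removes the repetition; a minimal sketch (parameter values hypothetical):
for p1 in a b; do
    for p2 in 1 2 3; do
        ./program "$p1" "$p2" > "result-${p1}-${p2}.txt"
    done
done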

Using source to include part of a file in a bash script

I have a bash script that needs to use some variables included in a separate file.
Normally, to include the entire file I would use source otherfile.sh in the main script.
In this case I need to use just a portion of this file; I can't include the variables defined in the rest of the file.
To filter the content of the config file (let's say, just as an example, from the tag "# start" to the tag "# end") I use awk, but I can't redirect the output to the source command.
Below is my code:
awk '/# start/ {flag=1; next} /# end/ {flag=0} flag { print }' config.sh
Do you know any way to redirect the output of this command into source? Is there any other way to include the output in my bash script at run time?
By the way, I'd rather not create temporary files (it seems too dirty to me), and I've tried something found on this site, but for some reason it doesn't work. Below is the solution I found:
awk '/# start/ {flag=1; next} /# end/ {flag=0} flag { print }' config.sh | source /dev/stdin
Thank you,
Luca
source can read from a process substitution in bash:
source <( awk ' ... ' config.sh )
sed allows a simpler way to get the subset of lines:
sed -n '/# start/,/# end/p' config.sh
UPDATE: It appears that this may only work in bash 4 or later.
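To make the behaviour concrete, suppose config.sh contains (contents hypothetical):
OTHER=1
# start
HOST=example.org
PORT=8080
# end
MORE=2
Then source <(sed -n '/# start/,/# end/p' config.sh) defines only HOST and PORT. Unlike the awk version, sed also prints the marker lines themselves, but they are comments, so sourcing them is harmless.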
A correct way of doing it, provided by a friend today. Here's the code:
source /dev/stdin <<EOF
$(awk '/# 10.216.33.133 - start/ {flag=1; next} /# 10.216.33.133 - end/ {flag=0} flag { print }' testbed.sh)
EOF
Working perfectly... thanks Andrea! :) (and of course thanks to everyone who tried to answer)

How to test linux variable within sed file?

I have a sed file that contains a few substitutions; it is executed on a file using the following syntax:
sed -f mysedfile file.txt > fixed_file.txt
I would like to test a system variable and, depending on what that variable contains, execute different sed operations on file.txt.
Would it be possible to put this logic into mysedfile?
Thank you for the help.
Perl was explicitly created to get around limitations of sed and awk. The -p mode runs the script once for each line of input. You can put the script on the command line:
perl -p -e "s/foo/\$ENV{'HOME'}/e" < files.txt
Or move the script to a file (you can remove the \ before the $, since the shell no longer sees it), e.g. a file.pl containing:
s/foo/$ENV{'HOME'}/e
and run it as:
perl -p file.pl < files.txt
Or make the first line of your script like this, so you can run it directly:
#!/usr/bin/perl -p
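If you'd rather keep sed, you can also do the test at the shell level and pick between two sed files; a minimal sketch, assuming hypothetical variants prod.sed and dev.sed of mysedfile and a variable named MODE:
if [ "$MODE" = "prod" ]; then
    sed -f prod.sed file.txt > fixed_file.txt
else
    sed -f dev.sed file.txt > fixed_file.txt
fi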

Pass string to script

I have a script, download, that takes a string and checks whether a file with that name exists. If it doesn't, the script downloads it. All the filenames are in a file.
This command is not working:
cat filenames | ./download
Download source:
filename=$1
if [ ! -f $1 ]; then
    wget -q http://www.example.com/nature/life/${filename}.rdf
fi
Sample filename file:
file1
file2
file3
file4
How do I pass the command output from the cat to the download script?
In your script, $1 is the positional argument on the command line. ./download somefile would work, but cat filenames | ./download streams the data to download's standard input, which your script ignores.
You should read the Advanced Bash-Scripting Guide, which will give you a good base for how bash scripting works. To fix this, change your command to:
cat filenames | xargs -n 1 ./download
This will run ./download once for each filename in your list. However, the filenames may have spaces or other special characters in them, which would break your script, so you should look into alternative ways of doing this.
Specifically, use a while loop to read your file, as sketched below. It handles the filename on each line properly, provided the names were written to the file correctly, and avoids the problems cat would have with filenames like: fi/\nle.
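A minimal sketch of that while loop:
while IFS= read -r filename; do
    ./download "$filename"
done < filenames
Setting IFS= and using read -r keeps leading/trailing whitespace and backslashes in each line intact.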
You can pass the name of the file that contains the filenames to your script:
./download filenames
and then loop through the filenames read from the file named in $1:
#!/bin/bash
# Do sanity check
fname=$1
for f in $(<"$fname"); do
    if [ ! -f "$f.rdf" ]; then
        wget -q http://www.example.com/nature/life/${f}.rdf
    fi
done
