I have two files in Linux, in file a there are variables like these:
${VERSION} ${SOFTWARE_PRODUCER}
And the values of these variables are stored in file b:
VERSION=1.0.1
SOFTWARE_PRODUCER=Luc
Now how can I use a command to replace the variables in file a with the values from file b? Can something like sed do this task?
Thanks.
A simple bash loop would suffice:
$ cat a
This is file 'a' which has this variable ${VERSION}
and it has this also:
${SOFTWARE_PRODUCER}
$ cat b
VERSION=1.0.1
SOFTWARE_PRODUCER=Luc
$ cat script.bash
#!/bin/bash
# Read file b line by line; for each KEY=value pair, substitute ${KEY} in file a.
while read -r line || [[ -n "$line" ]]
do
    key=$(awk -F= '{print $1}' <<< "$line")      # text before the first '='
    value=$(awk -F= '{print $2}' <<< "$line")    # the second '='-delimited field
    sed -i 's/${'"$key"'}/'"$value"'/g' a        # in-place substitution in file a
done < b
$ ./script.bash
$ cat a
This is file 'a' which has this variable 1.0.1
and it has this also:
Luc
$
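The two awk calls per line can also be replaced by letting read split on =. A sketch with one caveat: the values must not contain characters special to sed, such as / or & (the sample files here just reproduce the ones from the question):

```shell
#!/bin/bash
# Sketch: same substitution, splitting KEY=value with read instead of awk.
printf '%s\n' 'This is ${VERSION} by ${SOFTWARE_PRODUCER}' > a   # sample file a
printf '%s\n' 'VERSION=1.0.1' 'SOFTWARE_PRODUCER=Luc' > b        # sample file b

while IFS='=' read -r key value || [ -n "$key" ]; do
    [ -z "$key" ] && continue            # skip blank lines in file b
    sed -i "s/\${$key}/$value/g" a       # replace every ${KEY} with its value
done < b
```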
While trying to run a for loop in shell I encountered a problem.
for i in `cat $file`
do
`cut -d ',' -f 2- $i`
done
I tried to cut each line from the second column onward and output it, but it gave me an error: (content of the file): No such file or directory
First, you try to execute the output of the cut command.
Consider:
$ echo hello >file
$ cat file
hello
$ a=`cat file`
$ echo "$a"
hello
$ `echo "$a"`
-bash: hello: not found
$
Perhaps you just wanted to display the output of cut:
for i in `cat "$file"`
do
cut -d , -f 2- $i
done
Second, you pass cut an argument that is expected to be a filename.
You read data from $file and use it as a filename. Is that data actually a filename?
Consider:
$ echo a,b,c,d >file
$ cat file
a,b,c,d
$ a=`cat file`
$ echo "$a"
a,b,c,d
$ cut -d , -f 2- file
b,c,d
$ cut -d , -f 2- "$a"
cut: a,b,c,d: No such file or directory
Perhaps you wanted:
cut -d , -f 2- "$file"
Third, your for loop splits the data in "$file" on whitespace, not by line.
Consider:
$ echo 'a b,c d' >file
$ cat file
a b,c d
$ for i in `cat file`; do echo "[$i]"; done
[a]
[b,c]
[d]
$
Perhaps you wanted to read individual lines?
while read i; do
: ...
done <file
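Putting the three points together: when the lines of "$file" are data rather than filenames, cut can read the file directly and no loop is needed at all. A sketch (the filename and contents are hypothetical):

```shell
# Sketch: the lines in "$file" are data, so hand the file itself to cut;
# no loop, no word splitting, and nothing gets executed by accident.
file=data.csv                       # hypothetical filename for illustration
printf '%s\n' 'a,b,c,d' '1,2,x y' > "$file"
cut -d , -f 2- "$file"              # prints field 2 onward of every line
```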
Assume two multi-line text files that are dynamically generated during execution of a bash shell script: file1 and file2.
$ echo -e "foo-bar\nbar-baz\nbaz-qux" > file1
$ cat file1
foo-bar
bar-baz
baz-qux
$ echo -e "foo\nbar\nbaz" > file2
$ cat file2
foo
bar
baz
Further assume that I wish to use awk to perform an operation on the text strings of both files. For example:
$ awk 'NR==FNR{var1=$1;next} {print $var1"-"$1}' FS='-' file1 FS=' ' file2
Is there any way that I can skip having to save the text strings as files in my script and, instead, pass along the text strings to awk as variables (or as here-strings or the like)?
Something along the lines of:
$ var1=$(echo -e "foo-bar\nbar-baz\nbaz-qux")
$ var2=$(echo -e "foo\nbar\nbaz")
$ awk 'NR==FNR{var1=$1;next} {print $var1"-"$1}' FS='-' "$var1" FS=' ' "$var2"
# awk: fatal: cannot open file `foo-bar
# bar-baz
# baz-qux' for reading (No such file or directory)
$ awk '{print FILENAME, FNR, $0}' <(echo 'foo') <(echo 'bar')
/dev/fd/63 1 foo
/dev/fd/62 1 bar
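Applied to the variables from the question, the two here-strings become process substitutions (a sketch; process substitution requires bash, and the awk program here simply pairs line N of the two inputs rather than reproducing the original one-liner):

```shell
#!/bin/bash
# Sketch: pass two shell variables to awk as file arguments via process
# substitution; the program pairs line N of the first input with line N
# of the second.
var1=$'foo-bar\nbar-baz\nbaz-qux'
var2=$'foo\nbar\nbaz'
awk 'NR==FNR { a[FNR] = $0; next }   # first file: remember each line
     { print a[FNR] " / " $0 }       # second file: print the pair
    ' <(printf '%s\n' "$var1") <(printf '%s\n' "$var2")
```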
I have a file called input.txt:
A 1 2
B 3 4
Each line of this file means A=1*2=2 and B=3*4=12...
So I want to output such calculation to a file output.txt:
A=2
B=12
And I want to use shell script calculate.sh to finish this task:
#!/bin/bash
while read name; do
$var1=$(echo $name | cut -f1)
$var2=$(echo $name | cut -f2)
$var3=$(echo $name | cut -f3)
echo $var1=(expr $var2 * $var3)
done
and I type:
cat input.txt | ./calculate.sh > output.txt
But my approach doesn't work. How can I get this task done correctly?
I would use awk.
$ awk '{print $1"="$2*$3}' file
A=2
B=12
Use output redirection operator to store the output to another file.
awk '{print $1"="$2*$3}' file > outfile
In bash you can do:
while read -r a m n; do printf "%s=%d\n" "$a" "$((m*n))"; done < input.txt > output.txt
cat output.txt
A=2
B=12
calculate.sh:
#!/bin/bash
while read a b c; do
echo "$a=$((b*c))"
done
bash calculate.sh < input.txt outputs:
A=2
B=12
In bash, doing math requires double parentheses:
echo "$((3+4))"
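For completeness, here is a corrected version of the question's calculate.sh that keeps the cut/expr approach (a sketch: assignments take no leading $, cut splits on TAB by default so it needs -d' ' for this input, and * must be quoted so expr sees it instead of the glob expansion):

```shell
#!/bin/bash
# Corrected version of the question's calculate.sh (cut/expr kept).
printf '%s\n' 'A 1 2' 'B 3 4' > input.txt     # sample input from the question
while read -r name; do
    var1=$(echo "$name" | cut -d' ' -f1)      # no leading $ when assigning
    var2=$(echo "$name" | cut -d' ' -f2)      # -d' ' because the fields are
    var3=$(echo "$name" | cut -d' ' -f3)      # space-separated, not TAB
    echo "$var1=$(expr "$var2" '*' "$var3")"  # * quoted so it is not a glob
done < input.txt > output.txt
cat output.txt
```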
On Linux, using the command wc -L it's possible to get the length of the longest line of a text file.
How do I find the length of the shortest line of a text file?
Try this:
awk '{print length}' <your_file> | sort -n | head -n1
This command gets the lengths of all lines, sorts them (correctly, as numbers) and, finally, prints the smallest number to the console.
Pure awk solution:
awk '(NR==1||length<shortest){shortest=length} END {print shortest}' file
I turned the awk command into a function (for bash):
function shortest() { awk '(NR==1||length<shortest){shortest=length} END {print shortest}' "$1" ;} ## report the length of the shortest line in a file
Added this to my .bashrc (and then "source .bashrc")
and then ran it: shortest "yourFileNameHere"
[~]$ shortest .history
2
It can be assigned to a variable (note the backticks, or $( ), for command substitution):
[~]$ var1=`shortest .history`
[~]$ echo $var1
2
For csh:
alias shortest "awk '(NR==1||length<shortest){shortest=length} END {print shortest}' \!:1 "
Both awk solutions above do not handle '\r' the way wc -L does.
For a single-line input file they should not produce values greater than the maximal line length reported by wc -L.
Here is a new sed-based solution (I was not able to shorten it while keeping it correct):
echo $((`sed 'y/\r/\n/' file|sed 's/./#/g'|sort|head -1|wc --bytes`-1))
Here are some samples, supporting the '\r' claim and demonstrating the sed solution:
$ echo -ne "\rABC\r\n" > file
$ wc -L file
3 file
$ awk '{print length}' file|sort -n|head -n1
5
$ awk '(NR==1||length<shortest){shortest=length} END {print shortest}' file
5
$ echo $((`sed 'y/\r/\n/' file|sed 's/./#/g'|sort|head -1|wc --bytes`-1))
0
$
$ echo -ne "\r\r\n" > file
$ wc -L file
0 file
$ echo $((`sed 'y/\r/\n/' file|sed 's/./#/g'|sort|head -1|wc --bytes`-1))
0
$
$ echo -ne "ABC\nD\nEF\n" > file
$ echo $((`sed 'y/\r/\n/' file|sed 's/./#/g'|sort|head -1|wc --bytes`-1))
1
$
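An awk variant with the same '\r' handling can be built by making the record separator a regular expression; a sketch, assuming an awk that accepts a regex RS (GNU awk and mawk do, but POSIX only requires a single character):

```shell
# Sketch: treat both \r and \n as record separators, so a lone \r starts
# a new "line" just as in the sed pipeline above.
printf '\rABC\r\n' > file
awk -v RS='[\r\n]' '(NR==1 || length < s) { s = length } END { print s }' file
```

On the first sample file from above this prints 0, agreeing with the sed solution rather than with the plain awk ones.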
Can anyone advise how, on Linux, to extract data that sits between tilde characters? I need to get the IP data, however it is formatted like the below.
Details:
20110906000418~118.221.246.17~DATA~DATA~DATA
One more:
echo '20110906000418~118.221.246.17~DATA~DATA~DATA' | sed -r 's/[^~]*~([^~]+)~.*/\1/'
echo "20110906000418~118.221.246.17~DATA~DATA~DATA" | cut -d'~' -f2
This uses the cut command with the delimiter set to ~. The -f2 switch then outputs just the 2nd field.
If the text you give is in a file (called filename), try:
grep "[0-9]*~" filename | cut -d'~' -f2
With cut:
echo "20110906000418~118.221.246.17~DATA~DATA~DATA" | cut -d~ -f2
With awk:
echo "20110906000418~118.221.246.17~DATA~DATA~DATA" | awk -F~ '{ print $2 }'
In awk:
echo '20110906000418~118.221.246.17~DATA~DATA~DATA' | awk -F~ '{print $2}'
Just use bash
$ string="20110906000418~118.221.246.17~DATA~DATA~DATA"
$ echo ${string#*~}
118.221.246.17~DATA~DATA~DATA
$ string=${string#*~}
$ echo ${string%%~*}
118.221.246.17
one more, using perl:
$ perl -F~ -lane 'print $F[1]' <<< '20110906000418~118.221.246.17~DATA~DATA~DATA'
118.221.246.17
bash:
#!/bin/bash
IFS='~'
while read -r -a array
do
    echo "${array[1]}"
done < ip
If the string is constant, the following parameter expansion performs substring extraction:
$ a=20110906000418~118.221.246.17~DATA~DATA~DATA
$ echo ${a:15:14}
118.221.246.17
or using regular expressions in bash:
$ echo $(expr "$a" : '[^~]*~\([^~]*\)~.*')
118.221.246.17
last one, again using pure bash methods:
$ tmp=${a#*~}
$ echo $tmp
118.221.246.17~DATA~DATA~DATA
$ echo ${tmp%%~*}
118.221.246.17
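One more pure-bash option, using the [[ =~ ]] operator and BASH_REMATCH (a sketch; requires bash 3.0 or later):

```shell
#!/bin/bash
# Sketch: capture the text between the first two tildes with bash's
# built-in =~ regex operator; the capture group lands in BASH_REMATCH[1].
string='20110906000418~118.221.246.17~DATA~DATA~DATA'
if [[ $string =~ ^[^~]*~([^~]+)~ ]]; then
    echo "${BASH_REMATCH[1]}"
fi
```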