Linux Shell Script: Writing file and string together to another file - linux

This is my code in a bash script:
for string in $LIST
do
echo "$string" >> file
grep -v "#" file1 >> file
done
"file1" contains only one value, and that value changes iteratively.
The result I get in "file" is:
a
1
b
2
c
3
However, I want this:
a 1
b 2
c 3
Thanks in advance.

for string in $LIST
do
echo "$string" $(grep -v "#" file1)
done > file
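The fix works for two reasons: the redirection is moved after `done`, so `file` is opened once, and the command substitution `$(...)` strips grep's trailing newline so both values land on one line. A minimal runnable sketch (the sample `LIST` and `file1` values are made up; in the question, `file1` changes on every iteration):

```shell
LIST="a b c"
printf '1\n' > file1              # stand-in for the single value in file1
for string in $LIST
do
    # $(...) strips the trailing newline, so the grep result
    # ends up on the same line as $string
    echo "$string $(grep -v "#" file1)"
done > file
cat file
```

Since `file1` is constant here, this prints `a 1`, `b 1`, `c 1`.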

Related

In Linux: How to repeat multiple lines in a file in the same order?

The input file contains below lines:
a
b
c
I want the output as (n times):
a
b
c
a
b
c
I've tried the command below, but it doesn't maintain the order:
while read line; do for i in {1..4}; do echo "$line"; done; done < file
but the output is
a
a
b
b
c
c
Using seq with xargs:
seq 2 | xargs -Inone cat file
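As a sanity check (sample `file` contents assumed): `seq` emits one line per repetition, and `-Inone` makes xargs run `cat file` once per input line instead of appending arguments:

```shell
printf 'a\nb\nc\n' > file      # sample three-line input
seq 2 | xargs -Inone cat file  # cat runs twice, preserving line order
```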
Another solution could be
#multicat count filename(s)
multicat() {
local count=$1
shift
for ((i = 0; i < count; i++)); do
cat "$@"   # "$@" passes all remaining arguments (the filenames) to cat
done
}
multicat 3 abc # outputs the "abc" file 3 times
printf and brace expansion can be used to repeat a string N times, which can then be passed to cat, which does its job of concatenation:
$ printf "file %.s" {1..4}
file file file file $
$ cat $(printf "file %.s" {1..4})
a
b
c
a
b
c
a
b
c
a
b
c
With perl, if the file is small enough to be slurped into memory whole:
$ perl -0777 -ne 'print $_ x 2' file
a
b
c
a
b
c
A small solution for printing file 3 times:
cat $(yes file | head -n 3)
The command substitution $() expands to cat file file file.
This works only for filenames without whitespace or special symbols like *. If needed, you can set IFS=$'\n' and set -o noglob.

Read multiple arguments per line from file and do arithmetic with shell script

I have a file called input.txt:
A 1 2
B 3 4
Each line of this file means A=1*2=2 and B=3*4=12...
So I want to output such calculation to a file output.txt:
A=2
B=12
And I want to use shell script calculate.sh to finish this task:
#!/bin/bash
while read name; do
$var1=$(echo $name | cut -f1)
$var2=$(echo $name | cut -f2)
$var3=$(echo $name | cut -f3)
echo $var1=(expr $var2 * $var3)
done
and I type:
cat input.txt | ./calculate.sh > output.txt
But my approach doesn't work. How to get this task done right?
I would use awk.
$ awk '{print $1"="$2*$3}' file
A=2
B=12
Use output redirection operator to store the output to another file.
awk '{print $1"="$2*$3}' file > outfile
In BASH you can do:
while read -r a m n; do printf "%s=%d\n" $a $((m*n)); done < input.txt > output.txt
cat output.txt
A=2
B=12
calculate.sh:
#!/bin/bash
while read a b c; do
echo "$a=$((b*c))"
done
bash calculate.sh < input.txt outputs:
A=2
B=12
In bash, doing math requires double parentheses:
echo "$((3+4))"
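A few more quick examples of `$(( ))` arithmetic expansion (the variable names here are arbitrary):

```shell
a=3 b=4
echo "$((a * b))"     # 12 -- the $ on variables is optional inside (( ))
echo "$((a + b))"     # 7
product=$((a * b))    # capture the result in a variable
echo "A=$product"     # A=12
```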

How can I pass variables to a script one line at a time in bash?

I have a list of files and titles set out as below:
Title file1.txt
Title2 file2.txt
Title3 file3.txt
How can I pass this to a script line by line, setting columns 1 and 2 as separate variables? E.g.
send Title as $1 and file1.txt as $2 to my script.
then send Title2 as $1 and file2.txt as $2 to the same script.
I don't know if there is a simpler way to do this but any help would be appreciated, thanks.
Just read the file line by line, use parameter expansion to extract the titles and file names.
while read -r title file ; do
echo Title is "$title", file is "$file".
done < input.lst
If the titles can contain spaces, it gets a bit more complicated:
while read -r line ; do
title=${line% *} # Remove the last space and everything after it.
title=${title%%+( )} # Remove trailing spaces (requires shopt -s extglob).
file=${line##* } # Remove everything up to the last space.
echo Title is "$title", file is "$file".
done < input.lst
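To see the expansions in action, here is a cut-down, POSIX-only demonstration with a hypothetical input.lst entry (the trailing-space cleanup with the +( ) pattern is omitted, since that part needs bash with extglob enabled):

```shell
printf 'My Long Title file1.txt\n' > input.lst   # hypothetical two-word title
while read -r line ; do
    title=${line% *}    # drop the last space and the filename after it
    file=${line##* }    # drop everything up to the last space
    echo "Title is $title, file is $file."
done < input.lst
```

This prints `Title is My Long Title, file is file1.txt.`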
Try (the unquoted $i word-splits each entry into two separate arguments):
for i in "Title file1.txt" "Title2 file2.txt" "Title3 file3.txt"; do ./yourscript $i; done
This is effectively like doing:
$ for i in "a b" "c d" "e f"; do echo $i; done
a b
c d
e f
You could try making another script that runs your target script:
#! /bin/bash
ls /path/where/files/stay >> try.txt
a=1
while [ $a -lt 7 ]
do
./script $(sed "$a"'q;d' try.txt) $(sed "$(($a+1))q;d" try.txt)
a=$(($a+2))
done
This script would run your script, taking the variables you like from a file.

How to replace the second string by using the first string

I am trying to replace the second string with parameter 2.
The script takes two parameters. It should check whether the first parameter exists in the file; if it exists, it should find which line it is on and replace only the second string on that line.
For example, while running I pass the two parameters 1 and 2:
./run.sh 1 2
The script should check whether parameter 1 exists; if not, it should write the parameter to the file. That part is already working.
Now if I pass the parameters 1 3 to the script,
the script should search for where parameter 1 is and replace the 2nd string, i.e. 2, with 3.
How can I do this?
Here is what I have tried:
#!/bin/sh
#
FILE_PATH=/home/user/Desktop/script
FILE_NAME=$FILE_PATH/new.txt
echo $1
echo $2
param1=`cat $FILE_NAME | grep $1`
if [ -z "$param1" ]
then
echo $1:$2 >> $FILE_NAME
else
param2=`cat $FILE_NAME | grep $1`
fi
The file I am referring to will have text like this:
+abc.3434.res:192.168.2.34:5400
+efg.3123.co3:192.168.2.24:5440
+klm.gsdg.cm5:192.168.2.64:5403
if i pass parameter 1 as abc.3434.res and parameter 2 as 156.666.554.778
the script should replace
+abc.3434.res:192.168.2.34:5400 with
+abc.3434.res:156.666.554.778:5400
This will look for all lines in the format you describe, with the first field matching the first parameter, and replace the middle part with the second parameter.
sed -i -e "s/\(+$1:\).*\(:.*\)/\1$2\2/" $FILE_NAME
You can use Awk:
awk -v f1="+$1" -v f2="$2" -F: -v OFS=: '$1==f1 { $2=f2; s=1 }
1
END { if (!s) print f1 ":" f2 }' "$FILE_NAME"
Because Awk cannot access the shell's variables directly, we pass the values in with -v. The first condition matches when the first field is found; then the second field is changed, and s is set to 1 as a signal to ourselves that a substitution has taken place. The next line is unconditional, and simply prints all lines. At end of file, we add the new data if no match was found (s is zero).
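A runnable sketch against the sample data from the question (here f1 and f2 stand in for "+$1" and "$2" from the script; note `-v OFS=:`, which keeps the colon delimiter when awk rebuilds the changed record):

```shell
FILE_NAME=new.txt
printf '+abc.3434.res:192.168.2.34:5400\n+efg.3123.co3:192.168.2.24:5440\n' > "$FILE_NAME"
# replace the second field of the matching line only
awk -v f1="+abc.3434.res" -v f2="156.666.554.778" -F: -v OFS=: '
$1==f1 { $2=f2; s=1 }
1
END { if (!s) print f1 ":" f2 }' "$FILE_NAME"
```

The first line comes out as `+abc.3434.res:156.666.554.778:5400`; the other line passes through unchanged.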
Use the following format:
sed s/oldstring/newstring/g
such as:
cat /etc/passwd | sed s/root/jeff/g
But if you want to store the result to the file immediately, you should use -i:
sed -i s/oldstring/newstring/g yourfile
As for grep and checking the result of another command:
the shell has a variable $?, which holds the exit status of the last command.
If you want to test the result of a grep command, you can use:
grep -i blahblah | egrep -v egrep
if [ $? -eq 0 ]; then
echo SUCCESS
fi

Multi thread shell script

Can anybody help me with writing a multi-threaded shell script?
Basically I have two files: one contains around 10K lines (main_file) and the other around 200 lines (sub_file). The 200 lines are repeated strings collected from the main file. I'm trying to make a separate file for each string using the command below.
I have collected the strings which are repeated into sub_file.
The strings are present randomly in main_file.
a=0
while IFS= read -r line
do
a=$(($a+1));
users[$a]=$line
egrep "${line}" $main_file >> $line
done <"$sub_file"
Running it single-threaded takes a long time, so I'm thinking of using a multi-threaded process to finish in the minimum time.
Help me out...
The tool you need for that is GNU parallel:
parallel egrep '{}' "$mainfile" '>' '{}' < "$sub_file"
You can adjust the number of jobs processed with the option -P:
parallel -P 4 egrep '{}' "$mainfile" '>' '{}' < "$sub_file"
Please see the manual for more info.
By the way, to make sure that you don't process a line twice, you can make the input unique:
awk '!a[$0]++' "$sub_file" | parallel -P 4 egrep '{}' "$mainfile" '>' '{}'
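The `awk '!a[$0]++'` filter prints a line only the first time it is seen: `a[$0]++` evaluates to 0 (false) on first sight, so the negation is true exactly once per distinct line. A quick demonstration:

```shell
# duplicates are dropped while the first-seen order is kept
printf 'x\ny\nx\nz\ny\n' | awk '!a[$0]++'   # prints x, y, z
```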
NOTE: Reposting from a previous answer of mine. It is not directly applicable, but very similar; tweak as needed.
I have a file 1.txt with the below contents.
-----cat 1.txt-----
1234
5678
1256
1234
1247
I have 3 more files in a folder
-----ls -lrt-------
A1.txt
A2.txt
A3.txt
The contents of those three files are similar format with different data values (All the three files are tab delimited)
-----cat A1.txt----
A X 1234 B 1234
A X 5678 B 1234
A X 1256 B 1256
-----cat A2.txt----
A Y 8888 B 1234
A Y 9999 B 1256
A X 1234 B 1256
-----cat A3.txt----
A Y 6798 C 1256
My objective is to search A1, A2, and A3 (only the 3rd column of each tab-delimited file) for the text in 1.txt,
and the output must be redirected to the file matches.txt as given below.
Code:
/home/A1.txt:A X 1234 B 1234
/home/A1.txt:A X 5678 B 1234
/home/A1.txt:A X 1256 B 1256
/home/A2.txt:A X 1234 B 1256
The following should work.
cat A*.txt | tr -s '\t' '|' > combined.dat
{ while read myline;do
recset=`echo $myline | cut -f3 -d '|'|tr -d '\r'`  # 3rd field of the pipe-delimited line
var=$(grep $recset 1.txt|wc -l)
if [[ $var -ne 0 ]]; then
echo $myline >> final.dat
fi
done } < combined.dat
Using AWK
awk 'NR==FNR{a[$0]=1}$3 in a{print FILENAME":"$0}' 1.txt A* > matches.txt
For pipe delimited
awk -F'|' 'NR==FNR{a[$0]=1}$3 in a{print FILENAME":"$0}' 1.txt A* > matches.txt
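A self-contained check of the tab-delimited version with trimmed-down sample data (`NR==FNR` is true only while reading the first file, so the array a holds the 1.txt keys before the A files are scanned):

```shell
printf '1234\n5678\n1256\n' > 1.txt
printf 'A\tX\t1234\tB\t1234\nA\tY\t9999\tB\t1256\n' > A1.txt
# only the line whose 3rd column (1234) appears in 1.txt is printed,
# prefixed with its filename
awk 'NR==FNR{a[$0]=1}$3 in a{print FILENAME":"$0}' 1.txt A1.txt
```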
