Parsing a conf file in bash [closed]

Here's my config file
#comment 1
--longoption1
#comment 2
--longoption2
#comment 3
-s
#comment 4
--longoption4
I want to write a bash script that will read this .conf file, skip the comments, and serialize the command-line options like so:
./binary --longoption1 --longoption2 -s --longoption4

Working off of this post on sed, you just need to pipe the output from sed to xargs:
sed -e 's/#.*$//' -e '/^$/d' inputFile | xargs ./binary
As Wiimm points out, xargs can be finicky with a large number of arguments and may split them across multiple invocations of binary. It may be better to use command substitution instead:
./binary $(sed -e 's/#.*$//' -e '/^$/d' inputFile)
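If you want to sidestep the word-splitting pitfalls of both approaches, a minimal sketch (assuming one option per line and no whitespace inside an option) is to collect the options into a bash array and invoke the binary once:
opts=()
while IFS= read -r line; do
    line="${line%%#*}"              # strip comments
    [[ -z "$line" ]] && continue    # skip blank lines
    opts+=("$line")
done < inputFile
./binary "${opts[@]}"
This guarantees a single invocation of ./binary no matter how many options the file contains.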

Related

Shell command to print the statements with N number of words present in other file [closed]

Suppose I have a file with 3 lines:
output.txt:
Maruti
Zen
Suzuki
I used the command wc -l output.txt to get the number of lines, and I got 3 as the output.
Based on the above output, I have to execute a command for each line:
echo CREATE FROM $(sed -n 1p output.txt)
echo CREATE FROM $(sed -n 2p output.txt)
echo CREATE FROM $(sed -n 3p output.txt)
:
:
echo CREATE FROM $(sed -n np output.txt)
Can you please suggest a command to replace 1 2 3 ... n in the above command based on the output I get (i.e., the number of lines in my file)?
This is just a sample illustration of my use case. Please suggest a command that executes n times.
You just need one command.
sed 's/^/CREATE FROM /' output.txt
See also Counting lines or enumerating line numbers so I can loop over them - why is this an anti-pattern?
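If you really do need to run a separate command per line, a minimal sketch (assuming you do not care about leading or trailing whitespace on each line) is a while read loop, which never needs to know n in advance:
while IFS= read -r line; do
    echo "CREATE FROM $line"
done < output.txt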

How to create a Unix script to segregate data Line by Line? [closed]

I have some data in a MyFile.CSV file like this:
id,name,country
100,tom cruise,USA
101,Johnny depp,USA
102,John,India
What shell script would take the above file as input and segregate the data into 2 different files by country?
I tried using a for loop with 2 ifs inside it, but I am unable to make it work. How can I do it using awk?
for LINE in $(cat MyFile.CSV); do
    if echo "$LINE" | grep -q "USA"; then
        echo "$LINE" >> Out_USA.csv
    else
        echo "$LINE" >> Out_India.csv
    fi
done
You can try this:
grep "USA" MyFile.CSV >> Out_USA.csv
grep "India" MyFile.CSV >> Out_India.csv
There are many ways to do it. One way:
for i in $(awk -F"," 'NR > 1 {print $3}' MyFile.CSV | sort -u); do
    echo "$i"
    grep -E "${i}|country" MyFile.CSV > "Out_${i}.csv"
done
This assumes that the country name does not clash with values in other columns.
If it does, you can fine-tune the match with an additional regex.
For example, if country is the last field, you can add $ to the grep.
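Since the question explicitly asks for awk, a minimal sketch in pure awk (assuming the third field is always the country and file names of the form Out_<country>.csv are acceptable) is:
awk -F, 'NR > 1 { print > ("Out_" $3 ".csv") }' MyFile.CSV
This makes a single pass over the file and opens one output file per distinct country, so no per-country grep is needed.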

How to remove 'www.' with awk in output file [closed]

How can I remove all the 'www.' prefixes with awk in my output file?
e.g.: my output file has multiple sites like
abc.com
www.def.com
blabla.org
www.zxc.net
I would like to remove all the www. in my output file:
abc.com
def.com
blabla.org
zxc.net
Probably better done in sed:
sed -i 's/^www\.//' outputFile
(The g flag would be redundant here: the ^ anchor can match at most once per line.)
In awk:
awk '{sub(/^www\./,"")}1' outputFile
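For completeness, a minimal pure-bash sketch (an alternative not taken from the answers above; it assumes one site per line) uses parameter expansion to strip the prefix:
while IFS= read -r site; do
    printf '%s\n' "${site#www.}"   # remove a leading "www." if present
done < outputFile
${site#www.} removes the shortest leading match of www., leaving lines without the prefix untouched.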
This is probably what you're looking for:
$ cat file
abc.com
www.def.com
blabla.org
www.zxc.net
www.org
www.acl.lanl.gov
$ sed -E 's/^www\.(([^.]+(\.|$)){2,})/\1/' file
abc.com
def.com
blabla.org
zxc.net
www.org
acl.lanl.gov
The above uses a sed that has -E for ERE support, e.g. GNU or OSX sed. Note the need for a more comprehensive input file to test if a proposed solution really works or not.

Linux Compare two text files [closed]

I have two text files like below:
File1.txt
A|234-211
B|234-244
C|234-351
D|999-876
E|456-411
F|567-211
File2.txt
234-244
999-876
567-211
And I want to compare both files and get the matching lines like below.
Desired output:
B|234-244
D|999-876
F|567-211
$ grep -F -f file2.txt file1.txt
B|234-244
D|999-876
F|567-211
The -F makes grep search for fixed strings (not patterns). Both -F and -f are POSIX options to grep.
Note that this assumes your file2.txt does not contain short strings like 11 which could lead to false positives.
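If false positives are a concern, a minimal awk sketch (assuming the | delimiter and that file2.txt holds exact values of the second field) matches the field exactly instead of searching for substrings:
awk -F'|' 'NR == FNR { want[$0]; next } $2 in want' file2.txt file1.txt
The first pass stores every line of file2.txt as a key; the second pass prints only those lines of file1.txt whose second field is an exact key.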
Try:
grep -f File2.txt File1.txt

Formatting Diff output in Shell Script [closed]

I'm currently using diff -q directory1 directory2 to output the files in each directory that are different and printing them to a table in HTML.
Current output: "Files directory1/file1 and directory2/file2 differ"
What I want: "file1 has changed"
I do not want to use comm or sort the files because other applications are pulling from the files and are sensitive to ordering. Any idea on how to get this done?
You need to grep the diff output for the files that differ, then use awk to print the file name in your new format:
diff -rq dir1 dir2 | grep "differ" | awk '{print $2 " has changed"}'
Will this work?
diff -q $file1 $file2 | awk '{print $2 " has changed"}'
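To strip the directory prefix as well, a minimal sketch (assuming diff -q output of the form "Files a/x and b/x differ") is:
diff -rq directory1 directory2 | awk '$NF == "differ" { n = split($2, p, "/"); print p[n] " has changed" }'
split breaks the path on /, so p[n] is the bare file name, giving output like "file1 has changed".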
