Get string from file - linux

In my Packages file I have multiple packages. I'm able to check whether a string is inside the file, and if so, I would like to get the version of that package.
Package: depictiontest
Version: 1.0
Filename: ./debs/com.icr8zy.depictiontest.deb
Size: 810
Description: Do not install. Testing Depiction.
Name: Depiction Test
The above is one of many similar-looking package entries. Each time I detect that the package exists, I would like to get its Version. Is there any possible way?
By the way, this is what I use to check whether the package exists:
if grep -q "$filename" /location/Packages; then
#file exists
#get file version <-- stuck here
else
#file does not exist
fi
EDIT:
Sorry, maybe I wasn't clear in explaining myself: I already have the Name of the package and would like to extract the Version of that package only. I do not need a loop to get all the Names and Versions. Hope this clears it up... :)

How do you extract the file name in the first place? Why not parse the whole file, then filter out nonexistent file names?
awk '/^Package:/{p=$2}
/^Version:/{v=$2}
/^Filename:/{f=$2}
/^$/{print p, v, f}' Packages |
while read p v f; do
test -e "$f" || continue
echo "$p $v"
done
This is not robust with e.g. file names with spaces, but Packages files don't have file names with spaces anyway. (Your example filename is nonstandard, though; let's assume it's no worse than this.)
You want to make sure there's an empty line at the end of Packages, or force it with { sed '${/^$/d;}' Packages; echo; } | awk ...
Edit: This assumes a fairly well-formed Packages file, with an empty line between records. If a record lacks one of these fields, the output will repeat the value from the previous record - that's nasty. If there are multiple adjacent empty lines, it will output the same package twice. Etc. If you want robust parsing, I'd switch to Perl or Python, or use a standard Debian tool (I'm sure there must be one).

With grep, you can pick a certain number of lines before or after a keyword.
egrep -A1 "^Package: depictiontest" /path/to/file
would yield 1 additional line after the match.
egrep -B1 "^Filename: .*depictiontest.*" /path/to/file
would yield 1 additional line before the match.
egrep "^(Package|Version): " "^Package: depictiontest" /path/to/file
would lead to package and Version lines only, so would rely on them being in the correct order, to find out easily, which version belongs to which package.
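If only the version number itself is needed, the -A1 form above can be combined with awk, roughly like this (a sketch assuming the Version: line directly follows the Package: line, as in the sample):
egrep -A1 "^Package: depictiontest$" /path/to/file | awk '/^Version:/ { print $2 }'
For the sample stanza above this would print just 1.0.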

If the order is the same throughout, then you can parse the whole file and feed the values into an array:
awk -F": " '
/^Package/{p=$2;getline;v=$2;getline;f=$2;ary[p"\n"v"\n"f"\n"]}
END{for (x in ary) print x}' file
Test:
[jaypal:~/Temp] cat file
Package: depictiontest
Version: 1.0
Filename: ./debs/com.icr8zy.depictiontest.deb
Size: 810
Description: Do not install. Testing Depiction.
Name: Depiction2fdf Test
Package: depi444ctiontest
Version: 1.05
Filename: ./debs/coffm.icr8zy.depictiontest.deb
Size: 810
Description: Do not install. Testing Depiction.
Name: Depiction Test
Package: depic33tiontest
Version: 1.01
Filename: ./d3ebs/com.icr8zy.depictiontest.deb
Size: 810
Description: Do not install. Testing Depiction.
Name: Depiction Test
[jaypal:~/Temp] awk -F": " '/^Package/{p=$2;getline;v=$2;getline;f=$2;ary[p"\n"v"\n"f"\n"]}END{for (x in ary) print x}' file
depi444ctiontest
1.05
./debs/coffm.icr8zy.depictiontest.deb
depic33tiontest
1.01
./d3ebs/com.icr8zy.depictiontest.deb
depictiontest
1.0
./debs/com.icr8zy.depictiontest.deb

If "Version: ..." line is always exactly 1 line before the "Filename: ..." line, then you may try something like this:
line_number=$(grep -n "$filename" /location/Packages | head -1 | cut -d: -f1)
if (( ${line_number:-0} > 0 )); then
#file exists
version=$(head -n $(( $line_number - 1 )) /location/Packages | tail -1 | cut -d' ' -f2)
else
#file doesn't exist
fi

The simplest awk implementation that I can think of:
$ awk -F':' -v package='depictiontest' '
$1 == "Package" {
trimmed_package_name = gensub(/^ */, "", "", $2)
found_package = (trimmed_package_name == package)
}
found_package && $1 == "Version" {
trimmed_version_number = gensub(/^ */, "", "", $2)
print trimmed_version_number
}
' Packages
1.0
This processes the file (Packages) line by line and sets the found_package flag if the line starts with 'Package' and the value after the field separator (-F, here :), with leading whitespace trimmed, equals the value passed in via the package variable (-v). Then, if the flag is set and we find a line beginning with a 'Version' field, we print the value after the field separator (again trimming leading whitespace). If another 'Package' field is found and its name is not the one we are looking for, the flag is reset and the subsequent version number will not be printed.
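To plug this back into the original if/else, a hedged sketch (this variant strips the leading whitespace via the field separator -F': *' instead of gensub, so it should also work without gawk; $packagename is assumed to hold the package name mentioned in the question's edit):
# $packagename is a hypothetical variable holding the package name
version=$(awk -F': *' -v package="$packagename" '
$1 == "Package" { found_package = ($2 == package) }
found_package && $1 == "Version" { print $2; exit }
' /location/Packages)
if [ -n "$version" ]; then
: # package exists; its version is in "$version"
else
: # package does not exist
fi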

Related

Using awk to delete multiple lines using argument passed on via function

My input.csv file is semicolon separated, with the first line being a header for attributes. The first column contains customer numbers. The function is being called through a script that I activate from the terminal.
I want to delete all lines containing the customer numbers that are entered as arguments for the script. EDIT: And then export the file as a different file, while keeping the original intact.
bash deleteCustomers.sh 1 3 5
Currently only the last argument is filtered from the csv file. I understand that this is happening because the output file gets overwritten each time the loop runs, restoring all previously deleted arguments.
How can I match all the lines to be deleted, and then delete them (or print everything BUT those lines), and then output it to one file containing ALL edits?
delete_customers () {
echo "These customers will be deleted: "$#""
for i in "$#";
do
awk -F ";" -v customerNR=$i -v input="$inputFile" '($1 != customerNR) NR > 1 { print }' "input.csv" > output.csv
done
}
delete_customers "$#"
Here's some sample input (first piece of code is the first line in the csv file). In the output CSV file I want the same formatting, with the lines for some customers completely deleted.
Klantnummer;Nationaliteit;Geslacht;Title;Voornaam;MiddleInitial;Achternaam;Adres;Stad;Provincie;Provincie-voluit;Postcode;Land;Land-voluit;email;gebruikersnaam;wachtwoord;Collectief ;label;ingangsdatum;pakket;aanvullende verzekering;status;saldo;geboortedatum
1;Dutch;female;Ms.;Josanne;S;van der Rijst;Bliek 189;Hellevoetsluis;ZH;Zuid-Holland;3225 XC;NL;Netherlands;JosannevanderRijst#dayrep.com;Sourawaspen;Lae0phaxee;Klant;CZ;11-7-2010;best;tand1;verleden;-137;30-12-1995
2;Dutch;female;Mrs.;Inci;K;du Bois;Castorweg 173;Hengelo;OV;Overijssel;7557 KL;NL;Netherlands;InciduBois#gustr.com;Hisfireeness;jee0zeiChoh;Klant;CZ;30-8-2015;goed ;geen;verleden;188;1-8-1960
3;Dutch;female;Mrs.;Lusanne;G;Hijlkema;Plutostraat 198;Den Haag;ZH;Zuid-Holland;2516 AL;NL;Netherlands;LusanneHijlkema#dayrep.com;Digum1969;eiTeThun6th;Klant;Achmea;12-2-2010;best;mix;huidig;-335;9-3-1973
4;Dutch;female;Dr.;Husna;M;Hoegee;Tiendweg 89;Ameide;ZH;Zuid-Holland;4233 VW;NL;Netherlands;HusnaHoegee#fleckens.hu;Hatimon;goe5OhS4t;Klant;VGZ;9-8-2015;goed ;gezin;huidig;144;12-8-1962
5;Dutch;male;Mr.;Sieds;D;Verspeek;Willem Albert Scholtenstraat 38;Groningen;GR;Groningen;9711 XA;NL;Netherlands;SiedsVerspeek#armyspy.com;Thade1947;Taexiet9zo;Intern;CZ;17-2-2004;beter;geen;verleden;-49;12-10-1961
6;Dutch;female;Ms.;Nazmiye;R;van Spronsen;Noorderbreedte 180;Amsterdam;NH;Noord-Holland;1034 PK;NL;Netherlands;NazmiyevanSpronsen#jourrapide.com;Whinsed;Oz9ailei;Intern;VGZ;17-6-2003;beter;mix;huidig;178;8-3-1974
7;Dutch;female;Ms.;Livia;X;Breukers;Everlaan 182;Veenendaal;UT;Utrecht;3903
Try this in a loop:
awk -v variable=$var '$1 != variable' input.csv
awk - to make decisions based on columns
-v - to pass a variable into the awk command
variable - stores the value for awk to process
$var - the specific string to search for at run time
!= - to check that the field does not match
input.csv - your input file
This is awk's behavior: when you use -v, it works with the variable at run time and produces output that doesn't contain the value you passed. This way, you get all the lines that do not match your variable. Hope this is helpful. :)
Thanks
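If you do want to keep the per-argument loop from the question, a hedged sketch of one way to stop each pass from overwriting the previous one is to filter a working copy in place (output.tmp is just a scratch file name; the original input.csv is left intact):
cp input.csv output.csv
for i in "$@"; do
# drop rows whose first field equals this customer number, but always keep the header
awk -F ';' -v customerNR="$i" 'NR == 1 || $1 != customerNR' output.csv > output.tmp &&
mv output.tmp output.csv
done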
This bash script should work:
#!/bin/bash
FILTER="!/(^"$(echo "$@" | sed -e "s/ /\|^/g")")/ {print}"
awk "$FILTER" input.csv > output.csv
The idea is to build an awk relevant FILTER and then use it.
Assuming the call parameters are: 1 2 3, the filter will be: !/(^1|^2|^3)/ {print}
!: to invert matching
^: Beginning of the line
The input data is in the input.csv file and the output will be in the output.csv file.
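Note that the ^1 pattern also matches customers 10, 11, 100 and so on. A hedged alternative (a sketch, not tested against your data) that compares the whole first field in a single awk pass and always keeps the header line:
#!/bin/bash
# usage: bash deleteCustomers.sh 1 3 5
awk -F';' -v ids="$*" '
BEGIN { n = split(ids, a, " "); for (i = 1; i <= n; i++) del[a[i]] }  # build a lookup of customer numbers to drop
NR == 1 || !($1 in del)                                               # keep the header and rows whose first field is not listed
' input.csv > output.csv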

Use sed or awk to replace line after match

I'm trying to create a little script that basically uses dig +short to find the IP of a website, and then pipe that to sed/awk/grep to replace a line. This is what the current file looks like:
#Server
123.455.1.456
246.523.56.235
So, basically, I want to search for the '#Server' line in a text file, and then replace the two lines underneath it with an IP address acquired from dig.
I understand some of the syntax of sed, but I'm really having trouble figuring out how to replace two lines underneath a match. Any help is much appreciated.
Based on the OP, it's not 100% clear exactly what needs to be replaced where, but here's a one-liner for the general case, using GNU sed and bash. Replace the two lines after "3" with standard input:
echo Hoot Gibson | sed -e '/3/{r /dev/stdin' -e ';p;N;N;d;}' <(seq 7)
Outputs:
1
2
3
Hoot Gibson
6
7
Note: sed's r command is opaquely documented (in Linux anyway). For more about r, see:
"5.9. The 'r' command isn't inserting the file into the text" in this sed FAQ.
Here's how to do it in awk:
newip=12.34.56.78
awk -v newip=$newip '{
if($1 == "#Server"){
l = NR;
print $0
}
else if(l>0 && NR == l+1){
print newip
}
else if(l==0 || NR != l+2){
print $0
}
}' file > file.tmp
mv -f file.tmp file
explanation:
pass $newip to awk
if the first field of the current line is #Server, let l = current line number.
else if the current line is one past #Server, print the new ip.
else if the current row is not two past #Server, print the line.
overwrite original file with modified version.
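For comparison, a hedged sketch of the same edit with GNU sed (assuming $newip holds a plain IP address with no sed metacharacters): on the #Server line, the first n prints it and loads the next line, which s replaces with the new IP; the second n prints that and loads the second old address, which d then deletes.
newip=12.34.56.78
sed "/^#Server/{n;s/.*/$newip/;n;d;}" file > file.tmp && mv -f file.tmp file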

Find a pattern and print lines based on finding the first pattern (sed, awk, grep)

I have a rather large file. What is common to all sections is the hostname line that starts each section, for example:
HOSTNAME:host1
data 1
data here
data 2
text here
section 1
text here
part 4
data here
comm = 2
HOSTNAME:host-2
data 1
data here
data 2
text here
section 1
text here
part 4
data here
comm = 1
As you see above, within each section there are subsections broken down by keywords or lines that have specific values.
I would like to use a one-liner to print the hostname for each section and then print whichever lines I want to extract under each hostname section.
Can you please help? I am currently using grep -C 10 HOSTNAME | grep -C pattern,
but this assumes that there are 10 lines in each section. This is not an optimal way to do it; can someone show a better way? I also need to be able to print more than one line under each pattern that I find. So if I find data 1 and there are additional lines under it, I would like to grab and print them.
So the output of the command would be like:
grep -C 10 HOSTNAME | grep data 1
grep -C 10 HOSTNAME | grep -A 2 data 1
HOSTNAME:Host1
data 1
HOSTNAME:Hoss2
data 1
Besides grep, I use this sed command to print my output:
sed -r '/HOSTNAME|shared/!d' filename
The only problem with this sed command is that it only prints the lines that contain the patterns shared and HOSTNAME. I also need to specify the number of lines to print, in my case under the line that matched the pattern shared. So I would like to print HOSTNAME and give the number of lines to print under the second search pattern, shared.
Thanks
awk to the rescue!
$ awk -v lines=2 '/HOSTNAME/{c=lines} NF&&c&&c--' file
HOSTNAME:host1
data 1
HOSTNAME:host-2
data 1
This prints the number of lines set by the lines variable, starting with the line that matches the pattern, and skips empty lines.
If you want to specify a secondary keyword instead of a number of lines:
$ awk -v key='data 1' '/HOSTNAME/{h=1; print} h&&$0~key{print; h=0}' file
HOSTNAME:host1
data 1
HOSTNAME:host-2
data 1
Here is a sed twoliner:
sed -n -r '/HOSTNAME/ { p }
/^\s+data 1/ {p }' hostnames.txt
It prints (p)
when the line contains a HOSTNAME
when the line starts with some whitespace (\s+) followed by your search criterion (data 1)
non-matching lines are not printed (due to the sed -n option)
Edit: Some remarks:
this was tested with GNU sed 4.2.2 under linux
you don't need the -r; if your sed version does not support it, replace the second pattern with /^.*data 1/
we can squash everything in one line with ;
Putting it all together, here is a revised version in one line, without the need for the extended regex ( i.e without -r):
sed -n '/HOSTNAME/ { p } ; /^.*data 1/ {p }' hostnames.txt
The OP requirements seem to be very unclear, but the following is consistent with one interpretation of what has been requested, and more importantly, the program has no special requirements, and the code can easily be modified to meet a variety of requirements. In particular, both search patterns (the HOSTNAME pattern and the "data 1" pattern) can easily be parameterized.
The main idea is to print all lines in a specified subsection, or at least a certain number up to some limit.
If there is a limit on how many lines in a subsection should be printed, specify a value for limit, otherwise set it to 0.
awk -v limit=0 '
/^HOSTNAME:/ { subheader=0; hostname=1; print; next}
/^ *data 1/ { subheader=1; print; next }
/^ *data / { subheader=0; next }
subheader && (limit==0 || (subheader++ < limit)) { print }'
Given the lines provided in the question, the output would be:
HOSTNAME:host1
data 1
HOSTNAME:host-2
data 1
(Yes, I know the variable 'hostname' in the awk program is currently unused, but I included it to make it easy to add a test to satisfy certain obvious requirements regarding the preconditions for identifying a subheader.)
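For example, the two patterns and the limit could be passed in from the shell with -v, roughly like this (a sketch; because the regexes arrive as strings they are matched with ~ instead of /.../, and file is a placeholder for the input file name):
awk -v host_re='^HOSTNAME:' -v key_re='^ *data 1' -v other_re='^ *data ' -v limit=0 '
$0 ~ host_re  { subheader = 0; print; next }   # new host section
$0 ~ key_re   { subheader = 1; print; next }   # wanted subsection
$0 ~ other_re { subheader = 0; next }          # any other data subsection ends the wanted one
subheader && (limit == 0 || subheader++ < limit) { print }
' file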
sed -n -e '/hostname/,+1p' -e '/Duplex/,+1p'
The simplest way to do it is to combine two sed commands; the addr,+N address form (GNU sed) prints each matching line plus the N lines after it, so adjust the number to taste.

In-line replacement bash (replace line with new one using variables)

I'm going through a file and reading its lines. They have a ton of unnecessary information, and I want to reformat the lines so that I can use the necessary information later.
Example line in file (file1)
Name: *name* Date: *date* Age: *age* Gender: *gender* Score: *score*
Say I want to just pull gender and age from the file and use that later
New line
*gender*, *age*
In bash:
while read line; do
<store variable for gender>
<store variable for age>
<overwrite each line in CSV - gender,age>
<use gender/age as inputs for later comparisons>
done < file1
EDIT: There is no stability in the entries. One value can be found using an echo $line | cut, and the other value is found using a [[ $line =~ "keyValue" ]] test and then setting that value.
I was thinking of storing the combination of the two variables as such:
newLine="$val1,$val2"
Then using a sed in-line replace to replace the $line with $newLine.
Is there a better way, though? It may come down to a sed formatting issue with variables.
This will produce your desired output from your posted sample input:
$ cat file
Name: *name* Date: *date* Age: *age* Gender: *gender* Score: *score*
$ awk -F'[: ]+' -v OFS=', ' '{for (i=1;i<NF;i+=2) a[$i]=$(i+1); print a["Gender"], a["Age"]}' file
*gender*, *age*
$ awk -F'[: ]+' -v OFS=', ' '{for (i=1;i<NF;i+=2) a[$i]=$(i+1); print a["Score"], a["Name"], a["Date"] }' file
*score*, *name*, *date*
and you can see above how easy it is to print whatever fields you like in whatever order you like.
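Since the question also wants to use gender/age as inputs for later comparisons, one hedged way to get the awk output back into shell variables (a sketch; OFS is a bare comma here so read can split on it cleanly):
while IFS=, read -r gender age; do
# gender and age are now ordinary shell variables, ready for later comparisons
echo "gender=$gender age=$age"
done < <(awk -F'[: ]+' -v OFS=',' '{for (i=1;i<NF;i+=2) a[$i]=$(i+1); print a["Gender"], a["Age"]}' file)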
If it's not what you want, post some more representative input.
Your example leaves room for interpretation, so I'm assuming that there may be whitespace in the field values, but that there are no colons in the field values and that each field key is followed by a colon. I also assume that the order is stable.
while IFS=: read _ _ _ age gender _; do
age="${age% Gender}" # Use parameter expansion to strip off the key for the *next* field.
gender="${gender% Score}"
printf '"%s","%s"\n' "$gender" "$age"
done < file1 > file1.csv
Update
Since your question now states that there is no stability, you have to iterate through the possible values to get your output:
while IFS=: read -a line; do
unset age key sex
for chunk in "${line[#]}"; do
val="${chunk% *}" # Everything but the key
case "$key" in
Age) age="$val";;
Gender) sex="$val";;
esac
# The key is for the *next* iteration.
key="${chunk##* }"
done
if [[ $age || $sex ]]; then
printf '"%s","%s"\n' "$sex" "$age"
fi
done < file1 > file1.csv
(Also I added quotes around the output values in the csv to be compliant with the actual csv format and in case sex or age happened to have commas in it. Maybe someone is 1,000,000 years old. ;)

Bash: How to keep lines in a file that have fields that match lines in another file?

I have two big files with a lot of text, and what I have to do is keep all lines in file A that have a field that matches a field in file B.
file A is something like:
Name (tab) # (tab) # (tab) KEYFIELD (tab) Other fields
For file B, I managed to use cut and sed and other things to basically get it down to one field per line, a list.
So the goal is to keep all lines in file A whose 4th field (it says KEYFIELD) matches one of the lines in file B. (It does NOT have to be an exact match, so if file B had Blah and file A said Blah_blah, it'd be OK.)
I tried to do:
grep -f fileBcutdown fileA > outputfile
EDIT: Ok I give up. I just force killed it.
Is there a better way to do this? File A is 13.7MB and file B after cutting it down is 32.6MB for anyone that cares.
EDIT: This is an example line in file A:
chr21 33025905 33031813 ENST00000449339.1 0 - 33031813 33031813 0 3 1835,294,104, 0,4341,5804,
example line from file B cut down:
ENST00000111111
Here's one way using GNU awk. Run like:
awk -f script.awk fileB.txt fileA.txt
Contents of script.awk:
FNR==NR {
array[$0]++
next
}
{
line = $4
sub(/\.[0-9]+$/, "", line)
if (line in array) {
print
}
}
Alternatively, here's the one-liner:
awk 'FNR==NR { array[$0]++; next } { line = $4; sub(/\.[0-9]+$/, "", line); if (line in array) print }' fileB.txt fileA.txt
GNU awk can also perform the pre-processing of fileB.txt that you described using cut and sed. If you would like me to build this into the above script, you will need to provide an example of what this line looks like.
UPDATE using files HumanGenCodeV12 and GenBasicV12:
Run like:
awk -f script.awk HumanGenCodeV12 GenBasicV12 > output.txt
Contents of script.awk:
FNR==NR {
gsub(/[^[:alnum:]]/,"",$12)
array[$12]++
next
}
{
line = $4
sub(/\.[0-9]+$/, "", line)
if (line in array) {
print
}
}
This successfully prints lines in GenBasicV12 that can be found in HumanGenCodeV12. The output file (output.txt) contains 65340 lines. The script takes less than 10 seconds to complete.
You're hitting the limit of using the basic shell tools. Assuming about 40 characters per line, File A has 400,000 lines in it and File B has about 1,200,000 lines in it. You're basically running grep for each line in File A and having grep plow through 1,200,000 lines with each execution. that's 480 BILLION lines you're parsing through. Unix tools are surprisingly quick, but even something fast done 480 billion times will add up.
You would be better off using a full programming scripting language like Perl or Python. You put all lines in File B in a hash. You take each line in File A, check to see if that fourth field matches something in the hash.
Reading in a few hundred thousand lines? Creating a 10,000,000 entry hash? Perl can parse both of those in a matter of minutes.
Something -- off the top of my head. You didn't give us much in the way of specs, so I didn't do any testing:
#! /usr/bin/env perl
use strict;
use warnings;
use autodie;
use feature qw(say);
# Create your index
open my $file_b, "<", "file_b.txt";
my %index;
while (my $line = <$file_b>) {
chomp $line;
$index{$line} = $line; #Or however you do it...
}
close $file_b;
#
# Now check against file_a.txt
#
open my $file_a, "<", "file_a.txt";
while (my $line = <$file_a>) {
chomp $line;
my @fields = split /\s+/, $line;
if (exists $index{$fields[3]}) {
say "Line: $line";
}
}
close $file_a;
The hash means you only have to read through file_b once instead of 400,000 times. Start the program, go grab a cup of coffee from the office kitchen. (Yum! non-dairy creamer!) By the time you get back to your desk, it'll be done.
grep -f seems to be very slow even for medium sized pattern files (< 1MB). I guess it tries every pattern for each line in the input stream.
A solution that was faster for me was to use a while loop. This assumes that fileA is reasonably small (it is the smaller one in your example), so iterating multiple times over the smaller file is preferable to iterating over the larger file multiple times.
while read line; do
grep -F "$line" fileA
done < fileBcutdown > outputfile
Note that this loop will output a line several times if it matches multiple patterns. To work around this limitation use sort -u, but this might be slower by quite a bit. You have to try.
while read line; do
grep -F "$line" fileA
done < fileBcutdown | sort -u > outputfile
If you depend on the order of the lines, then I don't think you have any other option than using grep -f. But basically it boils down to trying m*n pattern matches.
Use the command below:
awk 'FNR==NR{a[$0];next}($4 in a)' <your filtered fileB with single field> fileA
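Note that this requires an exact match on the whole fourth field. If, as in the example lines above, fileA's fourth field carries a version suffix (ENST00000449339.1) while the cut-down fileB has bare IDs, a hedged variant that strips the suffix before the lookup:
# strip a trailing .N from field 4 before checking it against the fileB lookup table
awk 'FNR==NR { a[$0]; next } { k = $4; sub(/\.[0-9]+$/, "", k) } k in a' fileBcutdown fileA > outputfile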
