Repeat-replace fields in single lines within text files - cygwin

I'm trying to batch replace a delimited field in every line of many text files (actually *.dlf) in the following way:
xxx|xxx|a|xxx|DD|..
with
xxx|xxx|a|xxx|a|..., etc.
Is there a syntax in Cygwin or another DOS shell that I can use to do this, or even an app which will serve? Processing indirectly with Excel and Word is tiresome.

You can use gawk to replace specific fields with new values:
sh-3.2$ echo "xxx|xxx|a|xxx|DD|.." | gawk -v FS="|" -v OFS="|" '{$5="a"; print}'
xxx|xxx|a|xxx|a|..
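To apply this across all the *.dlf files at once, a minimal sketch using a temporary file per input (field number 5 and the replacement value "a" are taken from the example above; adjust to your data, e.g. $5=$3 if the replacement should copy another field):
for f in *.dlf; do
    gawk -v FS='|' -v OFS='|' '{$5="a"; print}' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done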

Related

How to truncate rest of the text in a file after finding a specific text pattern, in unix?

I have an HTML page which I extracted in Unix using the wget command. I need to remove all of the text after the phrase "Check list", and from what remains I am trying to grep some data. I can't think of a way to remove the text after a keyword. If I do
s/Check list.*//g
it just clears the rest of that line; I want everything below it to be gone as well. How do I perform this?
The other solutions you have so far require tools that POSIX does not mandate (GNU sed, GNU awk, or perl), so YMMV with their availability, and they read the whole file into memory at once.
These will work in any awk in any shell on every Unix box and only read 1 line at a time into memory:
awk -F 'Check list' '{print $1} NF>1{exit}' file
or:
awk 'sub(/Check list.*/,""){f=1} {print} f{exit}' file
With GNU awk for multi-char RS you could do:
awk -v RS='Check list' '{print; exit}' file
but that would still read all of the text before Check list into memory at once.
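As a quick check with made-up input, the first of those commands prints everything before the keyword (note the trailing space that $1 keeps on the matching line) and then stops reading:
$ printf 'keep this\nfoo Check list bar\nnever printed\n' | awk -F 'Check list' '{print $1} NF>1{exit}'
keep this
foo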
Depending on which sed version you have, maybe
sed -z 's/Check list.*//'
The /g flag is useless here, as the substitution can only happen once anyway.
If your sed does not have the -z option (which says to use the ASCII null character as line terminator instead of newline; this hinges on your file not containing any actual nulls, but that should trivially be true for any text file), try Perl:
perl -0777 -pe 's/Check list.*//s'
Unlike sed -z, this explicitly says to slurp the entire file into memory (the argument to -0 is the octal character code of a terminator character, but 777 is not a valid terminator character at all, so it always reads the entire file as a single "line") so this works even if there are spurious nulls in your file. The final s flag says to include newline in what . matches (otherwise s/.*// would still only substitute on the matching physical line).
I assume you are aware that removing everything after the keyword will violate the integrity of the HTML file; every start tag near the beginning of the document still needs a matching closing tag (so if it starts with <html><body> you should keep </body></html> just before the end of the file, for example).
With GNU awk you could make use of the RS variable, set the field separator to a regex with word boundaries, and then print the very first field. Setting RS to ^$ (a regex that never matches in a non-empty file) slurps the whole input into a single record:
awk -v RS="^$" -v FS='\\<Check list\\>' '{print $1}' Input_file
You might use q to instruct GNU sed to quit, thus ending processing. Consider the following simple example; let the content of file.txt be
123
456
789
and say you want to jettison everything from the 5 onwards; then you could do
sed '/5/{s/5.*//;q}' file.txt
which gives output
123
4
Explanation: for the line containing 5, substitute 5 and everything after it with the empty string (i.e. delete it), then q. Observe that lowercase q is used so the altered line is printed before quitting (uppercase Q would quit without printing it).
(tested in GNU sed 4.7)

Split flat file and add delimiter in Linux

I would like to know how to improve some code that I have.
My shell script reads a flat file and splits it into two files, header and detail, based on the first char of each line. For headers the first char is 1 and for details it is 2. The split files do not include the first char.
The header is delimited by "|", and the detail is fixed-width, so I add the delimiter to it later.
What I want is to do this in one single awk, to avoid creating a tmp file.
For splitting the file I use an awk command, and for adding the delimiter another awk command.
This is what I have now:
Input=Input.txt
Header=Header.txt
DetailTmp=DetailTmp.txt
Detail=Detail.txt
#First I split in two files and remove first char
awk -v vFileHeader="$Header" -v vFileDetail="$DetailTmp" '/^1/ {f=vFileHeader} /^2/ {f=vFileDetail} {sub(/^./,""); print > f}' $Input
#Then, I add the delimiter to detail
awk '{OFS="|"};{print substr($1,1,10),substr($1,11,5),substr($1,16,2),substr($1,18,14),substr($1,32,4),substr($1,36,18),substr($1,54,1)}' $DetailTmp > $Detail
Any suggestion?
Input.txt file
120190301|0170117174|FRANK|DURAND|USA
2017011717400052082911070900000000000000000000091430200
120190301|0170117204|ERICK|SMITH|USA
2017011720400052082911070900000000000000000000056311910
Header.txt splitted
20190301|0170117174|FRANK|DURAND|USA
20190301|0170117204|ERICK|SMITH|USA
DetailTmp.txt splitted
017011717400052082911070900000000000000000000091430200
017011720400052082911070900000000000000000000056311910
017011727100052052911070900000000000000000000008250000
017011718200052082911070900000000000000000000008102500
017011726300052052911070900000000000000000000008250000
Detail.txt desired
0170117174|00052|08|29110709000000|0000|000000000009143020|0
0170117204|00052|08|29110709000000|0000|000000000005631191|0
0170117271|00052|05|29110709000000|0000|000000000000825000|0
0170117182|00052|08|29110709000000|0000|000000000000810250|0
0170117263|00052|05|29110709000000|0000|000000000000825000|0
just combine the scripts
$ awk -v OFS='|' '/^1/{print substr($0,2) > "header"}
/^2/{print substr($0,2,10),substr($0,12,5),... > "detail"}' file
However, you may be better off using FIELDWIDTHS (a GNU awk feature) for the detail records instead of spelling out all the substr offsets; see the sketch below.
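For example, a minimal sketch of that idea in a single pass, assuming GNU awk (FIELDWIDTHS is a gawk extension); the file names and field widths are taken from the question, with a leading width of 1 to swallow the record-type character:
gawk -v FIELDWIDTHS='1 10 5 2 14 4 18 1' -v OFS='|' '
    /^1/ {print substr($0,2) > "Header.txt"}
    /^2/ {print $2,$3,$4,$5,$6,$7,$8 > "Detail.txt"}
' Input.txt
Because the first one-character field absorbs the 1/2 marker, the remaining widths line up exactly with the substr calls in the original script.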

Generate record of files which have been removed by grep as a secondary function of primary command

I asked a question here to remove unwanted lines which contained strings which matched a particular pattern:
Remove lines containg string followed by x number of numbers
anubhava provided a good line of code which met my needs perfectly. This code removes any line which contains the string vol followed by a space and three or more consecutive numbers:
grep -Ev '\bvol([[:blank:]]+[[:digit:]]+){2}' file > newfile
The command will be used on a fairly large csv file and be initiated by crontab. For this reason, I would like to keep a record of the lines this command is removing, just so I can go back and check that the correct data is being removed - I guess it will be some sort of log containing the lines that did not make the final cut. How can I add this functionality?
Drop grep and use awk instead:
awk '/\<vol([[:blank:]]+[[:digit:]]+){2}/{print >> "deleted"; next} 1' file
The above uses GNU awk for word delimiters (\<) and will append every deleted line to a file named "deleted". Consider adding a timestamp too:
awk '/\<vol([[:blank:]]+[[:digit:]]+){2}/{print systime(), $0 >> "deleted"; next} 1' file
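systime() logs raw epoch seconds; if you prefer a human-readable stamp, gawk's strftime() works as well, and redirecting stdout keeps the original write-the-survivors-to-newfile behaviour. A sketch:
awk '/\<vol([[:blank:]]+[[:digit:]]+){2}/{print strftime("%Y-%m-%d %H:%M:%S"), $0 >> "deleted"; next} 1' file > newfile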

Change date format in first column using awk/sed

I have a shell script which is automatically run each morning and appends that day's results to a text file. The file should have today's date in the first column, followed by results separated by commas. I use the command date +%x to get the day in the required format (dd/mm/yy). However, on one computer date +%x returns mm/dd/yyyy (any idea why this is the case?). I then sort the data in the file in date order.
Here is a snippet of such a text file
29/11/12,9654.80,194.32,2.01,7.19,-7.89,7.65,7.57,3.98,9625.27,160.10,1.66,4.90,-4.79,6.83,4.84,3.54
03/12/12,5184.22,104.63,2.02,6.88,-6.49,7.87,6.67,4.10,5169.52,93.81,1.81,5.29,-5.45,7.87,5.37,4.10
04/12/12,5183.65,103.18,1.99,6.49,-6.80,8.40,6.66,4.38,5166.04,95.44,1.85,6.04,-6.49,8.40,6.28,4.38
11/07/2012,5183.65,102.15,1.97,6.78,-6.36,8.92,6.56,4.67,5169.48,96.67,1.87,5.56,-6.10,8.92,5.85,4.67
07/11/2012,5179.39,115.57,2.23,7.64,-6.61,8.83,7.09,4.62,5150.17,103.52,2.01,7.01,-6.08,8.16,6.51,4.26
11/26/2012,5182.66,103.30,1.99,7.07,-5.76,7.38,6.37,3.83,5162.81,95.47,1.85,6.34,-5.40,6.65,5.84,3.44
11/30/2012,5180.82,95.19,1.84,6.51,-5.40,7.91,5.92,4.12,5163.98,91.82,1.78,5.58,-5.07,7.05,5.31,3.65
Is it possible to change the date format for the latter four lines to the correct date format using awk or sed? I only wish to change the date format for those in the form mm/dd/yyyy to dd/mm/yy.
It looks like you're using two different flavors (versions) of date. To check which versions you've got: I believe GNU date accepts the --version flag, whereas other versions, like BSD/OSX, will not accept it.
Since you may be using completely different systems, it's probably safest to avoid date completely and use perl to print the current date:
perl -MPOSIX -e 'print POSIX::strftime("%d/%m/%y", localtime) . "\n"'
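Alternatively, if you'd rather keep using date, spelling the format out explicitly sidesteps the locale-dependent %x and should behave the same on both machines:
date +%d/%m/%y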
If you are sure you have GNU awk on both machines, you could use it like this:
awk 'BEGIN { print strftime("%d/%m/%y") }'
To fix the file you've got, here's my take using GNU awk:
awk '{ print gensub(/^(..\/)(..\/)..(..,)/, "\\2\\1\\3", "g") }' file
Or using sed:
sed 's/^\(..\/\)\(..\/\)..\(..,\)/\2\1\3/' file
Results:
29/11/12,9654.80,194.32,2.01,7.19,-7.89,7.65,7.57,3.98,9625.27,160.10,1.66,4.90,-4.79,6.83,4.84,3.54
03/12/12,5184.22,104.63,2.02,6.88,-6.49,7.87,6.67,4.10,5169.52,93.81,1.81,5.29,-5.45,7.87,5.37,4.10
04/12/12,5183.65,103.18,1.99,6.49,-6.80,8.40,6.66,4.38,5166.04,95.44,1.85,6.04,-6.49,8.40,6.28,4.38
07/11/12,5183.65,102.15,1.97,6.78,-6.36,8.92,6.56,4.67,5169.48,96.67,1.87,5.56,-6.10,8.92,5.85,4.67
11/07/12,5179.39,115.57,2.23,7.64,-6.61,8.83,7.09,4.62,5150.17,103.52,2.01,7.01,-6.08,8.16,6.51,4.26
26/11/12,5182.66,103.30,1.99,7.07,-5.76,7.38,6.37,3.83,5162.81,95.47,1.85,6.34,-5.40,6.65,5.84,3.44
30/11/12,5180.82,95.19,1.84,6.51,-5.40,7.91,5.92,4.12,5163.98,91.82,1.78,5.58,-5.07,7.05,5.31,3.65
This should work: sed -re 's/^([0-9][0-9])\/([0-9][0-9])\/[0-9][0-9]([0-9][0-9])(.*)$/\2\/\1\/\3\4/'
It can be made smaller, but I wrote it this way so it would be more obvious what it does (4 groups, just switching month/day and removing the first two chars of the year).
Tip: if you don't want to cat the file, you could do the changes in place with sed -i. But be careful: if you put in a faulty expression, you might end up breaking your source file.
NOTE: This assumes that IF the year is specified with 4 digits, the month/day is reversed.
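For example, GNU sed's -i accepts a backup suffix, which guards against a faulty expression clobbering the source file (reusing the expression above):
sed -i.bak -re 's/^([0-9][0-9])\/([0-9][0-9])\/[0-9][0-9]([0-9][0-9])(.*)$/\2\/\1\/\3\4/' file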
The command below will do it.
Note: no matter how many lines are present in the file, this will just change the last 4 lines. (tail -r is the BSD spelling; on GNU systems, tac does the same job.)
tail -r your_file | awk -F, -v OFS=, 'NR<5{split($1,a,"/");$1=a[2]"/"a[1]"/"substr(a[3],3)}1' | tail -r
Well, I could figure out a way using a single awk statement without pipes, and this solution doesn't need a tail command:
awk -F, -v OFS=, 'BEGIN{cmd="wc -l your_file";while (cmd|getline tmp);split(tmp,x)}x[1]-NR<4{split($1,a,"/");$1=a[2]"/"a[1]"/"substr(a[3],3)}1' your_file
Another solution:
awk -F/ 'NR<4;NR>3{a=$1;$1=$2;$2=a; print $1"/"$2"/" substr($3,3,2) substr($3,5)}' file
Using awk:
$ awk -F/ 'NR>3{x=$1;$1=$2;$2=x}1' OFS="/" file
By using the / as the delimiter, all you need to do is swap the 1st and 2nd fields, which is done here using a temporary variable. Note that this leaves the four-digit years untouched; see the sketch below if you also need them shortened.
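If the two-digit year is required as well, a sketch extending the same swap (the substr calls trim the century off the third slash-delimited field):
$ awk -F/ 'NR>3{x=$1;$1=$2;$2=x; $3=substr($3,3,2) substr($3,5)}1' OFS="/" file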

Linux scripting: Search a specific column for a keyword

I have a large text file that contains multiple columns of data. I'm trying to write a script that accepts a column number and a keyword from the command line, searches that column for the keyword, and displays the entire row of any matches.
I've been trying something along the lines of:
grep $fileName | awk '{if ($'$columnNumber' == '$searchTerm') print $0;}'
But this doesn't work at all. Am I on the right lines? Thanks for any help!
The -v option can be used to pass shell variables to an awk command.
The following may be what you're looking for:
awk -v s=$SEARCH -v c=$COLUMN '$c == s { print $0 }' file.txt
EDIT:
I am always trying to write more elegant and tighter code. So here's what Dennis means:
awk -v s="$search" -v c="$column" '$c == s { print $0 }' file.txt
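A quick sanity check with made-up data (searching column 1 for bob):
$ printf 'alice 30\nbob 25\n' | awk -v s="bob" -v c=1 '$c == s { print $0 }'
bob 25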
Looks reasonable enough. Try using set -x to see exactly what's being passed to awk. You can also use different and/or more awk features, including getting rid of the separate grep:
awk -v colnum=$columnNumber -v require="$searchTerm" \
    "/$fileName/ { if (\$colnum == require) print }"
which works by setting awk variables (colnum and require, in this case) and then using the literal string $colnum to get the desired field, and the variable require to get the required string.
Note that in all cases (with or without the grep command), any regular-expression metacharacters in $fileName will be active as metacharacters, e.g., this.that will match the file named this.that but also the file named thisXthat.
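If you want the name treated literally rather than as a regular expression, one portable sketch (variable names are illustrative) is to pass it in with -v and test it with index(), which does plain substring matching:
awk -v fname="$fileName" -v colnum="$columnNumber" -v require="$searchTerm" \
    'index($0, fname) && $colnum == require' file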
