Bash: Read in file, edit line, output to new file - linux

I am new to linux and new to scripting. I am working in a linux environment using bash. I need to do the following things:
1. read a txt file line by line
2. delete the first line
3. remove the middle part of each line after the first
4. copy the changes to a new txt file
Each line after the first has three sections: the first always ends in .pdf, the third always begins with R0, but the middle section has no consistency.
Example of 2 lines in the file:
R01234567_High Transcript_01234567.pdf High School Transcript R01234567
R01891023_Application_01891023127.pdf Application R01891023
Here is what I have so far. I'm just reading the file, printing it to screen and copying it to another file.
#! /bin/bash
cd /usr/local/bin;
#echo "list of files:";
#ls;
for index in *.txt; do
    echo "file: ${index}";
    echo "reading..."
    exec<${index}
    value=0
    while read line
    do
        #value='expr ${value} +1';
        echo ${line};
    done
    echo "read done for ${index}";
    cp ${index} /usr/local/bin/test2;
    echo "file ${index} moved to test2";
done
So my question is, how can I delete the middle bit of each line, after .pdf but before the R0...?

Using sed:
sed 's/^\(.*\.pdf\).*\(R0.*\)$/\1 \2/g' file.txt
This will remove everything between .pdf and R0 and replace it with a single space.
Result for your example:
R01234567_High Transcript_01234567.pdf R01234567
R01891023_Application_01891023127.pdf R01891023
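The question also asks to drop the first line and write the result to a new file; a hedged one-liner combining both (assuming the input is file.txt and newfile.txt is whatever output name you choose):
sed '1d; s/^\(.*\.pdf\).*\(R0.*\)$/\1 \2/' file.txt > newfile.txt
Here 1d deletes the first line and the substitution handles the rest, as above.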

The Hard, Unreliable Way
It's a bit verbose, and much less efficient than what would make sense if we knew that the fields were separated by tab literals, but the following loop does this processing in pure native bash with no external tools:
shopt -s extglob
while IFS= read -r line; do
    [[ $line = *".pdf"*R0* ]] || continue    # ignore lines that don't fit our format
    filename=${line%%.pdf*}.pdf
    id=R0${line##*R0}
    printf '%s\t%s\n' "$filename" "$id"
done <file
${line%%.pdf*} returns everything before the first .pdf in the line; ${line%%.pdf*}.pdf then appends .pdf to that content.
Similarly, ${line##*R0} expands to everything after the last R0; R0${line##*R0} thus expands to the final field starting with R0 (presuming that R0 does not occur again within that final field).
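For example, applying those expansions to the first sample line from the question:
line='R01234567_High Transcript_01234567.pdf High School Transcript R01234567'
echo "${line%%.pdf*}.pdf"    # R01234567_High Transcript_01234567.pdf
echo "R0${line##*R0}"        # R01234567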
The Easy Way (Using Tab Delimiters)
If cat -t file (on MacOS) or cat -A file (on Linux) shows ^I sequences between the fields (but not within the fields), use the following instead:
while IFS=$'\t' read -r filename title id; do
    printf '%s\t%s\n' "$filename" "$id"
done <file
This reads the three tab separated fields into variables named filename, title and id, and emits the filename and id fields.
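For the question's sample lines, assuming the three sections really are tab-delimited, this loop would emit (with a tab between the two remaining fields):
R01234567_High Transcript_01234567.pdf	R01234567
R01891023_Application_01891023127.pdf	R01891023
Append a redirection to the loop (done <file >newfile.txt) to write the result to a new file, as step 4 of the question asks.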

Updated answer assuming tab delim
Since there is a tab delimiter, this is a cinch for awk. Borrowing from my originally deleted answer and @geek1011's deleted answer:
awk -F"\t" '{print $1, $NF}' infile.txt
Here awk splits each record in your file by tab, then prints the first field $1 and the last field $NF where NF is the built in awk variable for the record's Number of Fields; by prepending a dollar sign, it says "The value of the last field in the record".
Original answer assuming space delimiter
Leaving this here in case someone has space delimited nonsense like I originally assumed.
You can use awk instead of using bash to read through the file:
awk 'NR>1{firstRec=""; for(i=1; $i!~/pdf/; ++i) firstRec=firstRec" "$i} NR>1{print firstRec,$i,$NF}' yourfile.txt
awk reads files line by line and processes each record it comes across. Fields are delimited automatically by white space. The first field is $1, the second is $2 and so on. awk has built in variables; here we use NF which is the Number of Fields contained in the record, and NR which is the record number currently being processed.
This script does the following:
If the record number is greater than 1 (not the header) then
Loop through each field (separated by white space here) until we find a field that has "pdf" in it ($i!~/pdf/). Store everything we find up until that field in a variable called firstRec, separated by spaces (firstRec=firstRec" "$i), resetting firstRec at the start of each record so fields from earlier lines don't accumulate.
print out the firstRec, then print out whatever field we stopped iterating on (the one that contains "pdf") which is $i, and finally print out the last field in the record, which is $NF (print firstRec,$i,$NF)
You can direct this to another file:
awk 'NR>1{firstRec=""; for(i=1; $i!~/pdf/; ++i) firstRec=firstRec" "$i} NR>1{print firstRec,$i,$NF}' yourfile.txt > outfile.txt
sed may be a cleaner way to go here: if your filename contains more than one consecutive space, awk's default whitespace splitting will collapse them, so you will lose the multiple spaces.

You can use sed on each line like that:
line="R01234567_High Transcript_01234567.pdf High School Transcript R01234567"
echo "$line" | sed 's/\.pdf.*R0/\.pdf R0/'
# output
R01234567_High Transcript_01234567.pdf R01234567
This replaces anything between .pdf and R0 with a single space.
It doesn't deal with some edge cases, but it's simple and clear.
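If you prefer to keep your existing while read loop, a rough sketch of applying this per line (the output name newfile.txt is just an assumption) would be:
while read -r line; do
    echo "$line" | sed 's/\.pdf.*R0/.pdf R0/'
done < file.txt > newfile.txt
though running sed once over the whole file, as in the first answer, is far more efficient.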

Related

Bash for loop not writing to file

I often work like this:
for skra in `ls *txt` ; do paste foo.csv <(cut -f 5 $skra) > foo.csv; done
for looping through a directory using 'ls'
Now I don't understand why this command does not add a column to foo.csv on every iteration.
What is happening under the hood? It seems like foo.csv is not saved in every iteration.
The output I get is field 5 from the last file, not even the original foo.csv that I get if I only run paste foo.csv bar.txt.
EDIT:
All files are tab delimited
foo.csv is just one column in the beginning
example.txt as seen in vim with set list:
(101,6352)(11174,51391)(10000,60000)^INC_044048.1^I35000^I6253^I0.038250$
(668,7819)(23384,69939)(20000,70000)^INC_044048.1^I45000^I7153^I0.034164$
(2279,8111)(32691,73588)(30000,80000)^INC_044048.1^I55000^I5834^I0.031908$
Here is a python script that does what I want:
import pandas
rammi=[]
with open('window.list') as f:
    for line in f:
        nafn=line.strip()
        df=pandas.read_csv(nafn, header=None, names=[nafn], sep='\t', usecols=[4])
        rammi.append(df)
frame = pandas.concat(rammi, axis=1)
frame.to_csv('rammi.allra', sep='\t', encoding='utf-8')
Paste column 4 from all files into one (initially I wanted to retain one original column, but it was not necessary). The question was about bash not updating foo.csv in the for loop.
As already noted in the comments, opening foo.csv for output will truncate it in most shells. (Even if that was not the case, opening the file and running cut and paste repeatedly looks quite inefficient.)
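A minimal fix for the original loop, if you want to keep the paste/cut approach, is to write each iteration's result to a temporary file and rename it over foo.csv (foo.tmp below is just an assumed scratch name):
for skra in *.txt; do
    paste foo.csv <(cut -f 5 "$skra") > foo.tmp && mv foo.tmp foo.csv
done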
If you don’t mind keeping all the data in memory at one point in time, a simple AWK or Bash script can do this type of processing without any further processes such as cut or paste.
awk -F'\t' ' { lines[FNR] = lines[FNR] "\t" $5 }
END { for (l in lines) print substr(lines[l], 2) }' \
*.txt > foo.csv
(The output should not be called .csv, but I’m sticking with the naming from the question nonetheless.)
Actually, one doesn’t really need awk for this, Bash will do:
#!/bin/bash
lines=()
for file in *.txt; do
    declare -i i=0
    while IFS=$'\t' read -ra line; do
        lines[i++]+=$'\t'"${line[4]}"
    done < "$file"
done
printf '%s\n' "${lines[@]/#?}" > foo.csv
(As a side note, "${lines[@]:1}" would remove the first line, not the first (\t) character of each line. (This particular expansion syntax works differently for strings (scalars) and arrays in Bash.) Hence "${lines[@]/#?}" (another way to express the removal of the first character), which does get applied to each array element.)

How to get first word of every line and pipe it into dmenu script

I have a text file like this:
first state
second state
third state
Getting the first word from every line isn't difficult, but the problem comes when adding the extra \n required to separate every word (selection) in dmenu, per its syntax:
echo -e "first\nsecond\nthird" | dmenu
I haven't been able to figure out how to add the separating \n. I've tried this:
state=$(awk '{for(i=1;i<=NF;i+=2)print $(i)'\n'}' text.txt)
But it doesn't work. I also tried this:
lol=$(grep -o "^\S*" states.txt | perl -ne 'print "$_"')
But same deal. Not sure what I'm doing wrong.
Your problem is in the AWK script. You need to identify each input line as a record. This way, you can control how each record in the output is separated via the ORS variable (output record separator). By default this separator is the newline, which should be good enough for your purpose.
Now to print the first word of every input record (each line in the input stream in this case), you just need to print the first field:
awk '{print $1}' textfile | dmenu
If you need the output to include the explicit \n string (not the control character), then you can just overwrite the ORS variable to fit your needs:
awk 'BEGIN{ORS="\\n"}{print $1}' textfile | dmenu
This could be done more easily in a while loop; could you please try the following. It is simple: while reads the file and, as it does so, creates 2 variables, the first named first and the other named rest; first contains the first field, which we then pass to dmenu inside the loop.
while read first rest
do
    dmenu "$first"
done < "Input_file"
Based on the text file example, the following should achieve what you require:
awk '{ printf "%s\\n",$1 }' textfile | dmenu
Print the first space separated field of each line along with \n (\n needs to be escaped to stop it being interpreted by awk)
In your code
state=$(awk '{for(i=1;i<=NF;i+=2)print $(i)'\n'}' text.txt)
you attempted to use ' inside your awk code; however, the code is whatever lies between the opening ' and the first following ', so the code awk actually receives is {for(i=1;i<=NF;i+=2)print $(i), and this does not work. You should use " for strings inside awk code.
If you merely want to get the nth column, cut will be enough in most cases. Let the states.txt content be
first state
second state
third state
then you can do:
cut -d ' ' -f 1 states.txt | dmenu
Explanation: treat space as delimiter (-d ' ') and get 1st column (-f 1)
(tested in cut (GNU coreutils) 8.30)

Generate record of files which have been removed by grep as a secondary function of primary command

I asked a question here to remove unwanted lines which contained strings which matched a particular pattern:
Remove lines containg string followed by x number of numbers
anubhava provided a good line of code which met my needs perfectly. This code removes any line which contains the string vol followed by a space and three or more consecutive numbers:
grep -Ev '\bvol([[:blank:]]+[[:digit:]]+){2}' file > newfile
The command will be used on a fairly large csv file and be initiated by crontab. For this reason, I would like to keep a record of the lines this command is removing, just so I can go back and check the data that is being removed. I guess it will be some sort of log containing the lines that did not make the final cut. How can I add this functionality?
Drop grep and use awk instead:
awk '/\<vol([[:blank:]]+[[:digit:]]+){2}/{print >> "deleted"; next} 1' file
The above uses GNU awk for word delimiters (\<) and will append every deleted line to a file named "deleted". Consider adding a timestamp too:
awk '/\<vol([[:blank:]]+[[:digit:]]+){2}/{print systime(), $0 >> "deleted"; next} 1' file
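If you would rather keep the original grep command, a two-pass sketch works as well: run the same pattern once without -v to capture what will be removed, and once with -v to produce the cleaned file (deleted.log is just an assumed name):
grep -E '\bvol([[:blank:]]+[[:digit:]]+){2}' file > deleted.log
grep -Ev '\bvol([[:blank:]]+[[:digit:]]+){2}' file > newfile
This reads the file twice, so the single awk pass above is cheaper for large files.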

How can I remove the last character of a file in unix?

Say I have some arbitrary multi-line text file:
sometext
moretext
lastline
How can I remove only the last character (the e, not the newline or null) of the file without making the text file invalid?
A simpler approach (outputs to stdout, doesn't update the input file):
sed '$ s/.$//' somefile
$ is a Sed address that matches the last input line only, thus causing the following function call (s/.$//) to be executed on the last line only.
s/.$// replaces the last character on the (in this case last) line with an empty string; i.e., effectively removes the last char. (before the newline) on the line.
. matches any character on the line, and following it with $ anchors the match to the end of the line; note how the use of $ in this regular expression is conceptually related, but technically distinct from the previous use of $ as a Sed address.
Example with stdin input (assumes Bash, Ksh, or Zsh):
$ sed '$ s/.$//' <<< $'line one\nline two'
line one
line tw
To update the input file too (do not use if the input file is a symlink):
sed -i '$ s/.$//' somefile
Note:
On macOS, you'd have to use -i '' instead of just -i; for an overview of the pitfalls associated with -i, see the bottom half of this answer.
If you need to process very large input files and/or performance / disk usage are a concern and you're using GNU utilities (Linux), see ImHere's helpful answer.
truncate
truncate -s-1 file
Removes one (-1) character from the end of the same file, exactly as >> would append to the same file.
The problem with this approach is that it doesn't retain a trailing newline if it existed.
The solution is:
if [ -n "$(tail -c1 file)" ] # if the file has not a trailing new line.
then
truncate -s-1 file # remove one char as the question request.
else
truncate -s-2 file # remove the last two characters
echo "" >> file # add the trailing new line back
fi
This works because tail takes the last byte (not char).
It takes almost no time even with big files.
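To see why the test works: command substitution strips trailing newlines, so "$(tail -c1 file)" is empty exactly when the file's last byte is a newline. A quick check, using a throwaway file f:
printf 'abc\n' > f
[ -n "$(tail -c1 f)" ] && echo "no trailing newline" || echo "has trailing newline"   # prints: has trailing newline
printf 'abc' > f
[ -n "$(tail -c1 f)" ] && echo "no trailing newline" || echo "has trailing newline"   # prints: no trailing newline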
Why not sed
The problem with a sed solution like sed '$ s/.$//' file is that it reads the whole file first (taking a long time with large files), then you need a temporary file (of the same size as the original):
sed '$ s/.$//' file > tempfile
rm file; mv tempfile file
And then move the tempfile to replace the file.
Here's another using ex, which I find not as cryptic as the sed solution:
printf '%s\n' '$' 's/.$//' wq | ex somefile
The $ goes to the last line, the s deletes the last character, and wq is the well known (to vi users) write+quit.
After a whole bunch of playing around with different strategies (and avoiding sed -i or perl), the best way I found to do this was with:
sed '$! { P; D; }; s/.$//' somefile
The $! { P; D; } part prints every line except the last and starts the next cycle, so the s/.$// substitution only ever runs on the last line.
If the goal is to remove the last character in the last line, this awk should do:
awk '{a[NR]=$0} END {for (i=1;i<NR;i++) print a[i];sub(/.$/,"",a[NR]);print a[NR]}' file
sometext
moretext
lastlin
It stores all the data in an array, then prints it back out, removing the last character of the last line.
Just a remark: sed will temporarily remove the file.
So if you are tailing the file, you'll get a "No such file or directory" warning until you reissue the tail command.
EDITED ANSWER
I created a script and put your text into a file on my Desktop. This test file is saved as "old_file.txt"
sometext
moretext
lastline
Afterwards I wrote a small script to take the old file and eliminate the last character in the last line
#!/bin/bash
no_of_new_line_characters=`wc '/root/Desktop/old_file.txt'|cut -d ' ' -f2`
let "no_of_lines=no_of_new_line_characters+1"
sed -n 1,"$no_of_new_line_characters"p '/root/Desktop/old_file.txt' > '/root/Desktop/my_new_file'
sed -n "$no_of_lines","$no_of_lines"p '/root/Desktop/old_file.txt'|sed 's/.$//g' >> '/root/Desktop/my_new_file'
opening the new_file I created, showed the output as follows:
sometext
moretext
lastlin
I apologize for my previous answer (wasn't reading carefully)
sed '$ s/.$//' filename | tee newFilename
This should do your job.
A couple perl solutions, for comparison/reference:
(echo 1a; echo 2b) | perl -e '$_=join("",<>); s/.$//; print'
(echo 1a; echo 2b) | perl -e 'while(<>){ if(eof) {s/.$//}; print }'
I find the first read-whole-file-into-memory approach can be generally quite useful (less so for this particular problem). You can now do regexes which span multiple lines, for example to combine every 3 lines of a certain format into 1 summary line.
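As a small illustration of that multi-line idea (just a sketch, not part of the original question): with the whole file in $_, each (.*) matches a single line, so groups of three lines collapse into one:
printf '%s\n' a b c d e f | perl -e '$_=join("",<>); s/(.*)\n(.*)\n(.*)\n/$1 $2 $3\n/g; print'
# output:
#   a b c
#   d e f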
For this problem, truncate would be faster and the sed version is shorter to type. Note that truncate requires a file to operate on, not a stream. Normally I find sed to lack the power of perl and I much prefer the extended-regex / perl-regex syntax. But this problem has a nice sed solution.

Unix command to remove everything after first column

I have a text file in which I have something like this-
10.2.57.44 56538154 3028
120.149.20.197 28909678 3166
10.90.158.161 869126135 6025
In that text file, I have around 1,000,000 rows exactly as above. I am working in SunOS environment. I needed a way to remove everything from that text file leaving only IP Address (first column in the above text file is IP Address). So after running some unix command, file should look like something below.
10.2.57.44
120.149.20.197
10.90.158.161
Can anyone please help me out with some Unix command that can remove everything else, leaving only the IP address (the first column), and save it back to some file again.
So output should be something like this in some file-
10.2.57.44
120.149.20.197
10.90.158.161
If the delimiter is a space character, use
cut -d " " -f 1 filename
If the delimiter is a tab character, there is no need for the -d option, as tab is the default delimiter for the cut command:
cut -f 1 filename
-d
Delimiter; the character immediately following the -d option is the field delimiter.
-f
Specifies a field list, separated by a delimiter
nawk '{print $1}' file > newFile && mv newFile file
OR
cut -f1 file > newFile && mv newFile file
As you're using SunOS, you'll want to get familiar with nawk (not awk, which is the old and cranky version of awk, while nawk = new awk ;-).
In either case, you're printing the first field in the file to newFile.
(n)awk is a complete programming language designed for the easy manipulation of text files. The $1 means the first field on each line, $9 would mean the ninth field, etc., while $0 means the whole line. You can tell (n)awk what to use to separate the fields by; it might be a tab char, a '|' char, or multiple spaces. By default, all versions of awk use white space, i.e. multiple spaces or 1 tab, to delimit the columns/fields per line in a file.
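For instance, if the columns were separated by a '|' char instead of white space, you could tell nawk so with -F and still print the first field:
nawk -F'|' '{print $1}' file > newFile && mv newFile file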
For a very good intro to awk, see Grymoire's Awk page
The && means: execute the next command only if the previous command finished without a problem. This way you don't accidentally erase your good data file because of some error.
IHTH
If you have vim, open the file with it. Then in command mode, run the substitution (where <delimiter> is a tab, a space, or whatever the delimiter is): %s:<delimiter>.*$::g. Now save the file with :wq.
Using sed, the equivalent command is sed -e 's/<delimiter>.*$//' file > newfile.txt
How about a perl script ;)
#!/usr/bin/perl -w
use strict;

my $file = shift;
die "Missing file or can't read it" unless $file and -r $file;

sub edit_in_place
{
    my $file = shift;
    my $code = shift;

    {
        local @ARGV = ($file);
        local $^I = '';
        while (<>) {
            &$code;
        }
    }
}

edit_in_place $file, sub {
    my @columns = split /\s+/;
    print "$columns[0]\n";
};
This will edit the file in place since you say it is a large one. You can also create a backup by modifying local $^I = ''; to local $^I = '.bak';
Try this
awk '{$1=$1; print $1}' temp.txt
Output
10.2.57.44
120.149.20.197
10.90.158.161
awk '{ print $1 }' file_name.txt > tmp_file_name.txt
mv tmp_file_name.txt file_name.txt
'> tmp_file_name.txt' means redirecting STDOUT of awk '{ print $1 }' file_name.txt to a file named tmp_file_name.txt
FYI :
$1 means first column based on delimiter. The default delimiter is whitespace
$2 means second column based on delimiter. The default delimiter is whitespace
..
..
$NF means last column based on delimiter. The default delimiter is whitespace
If you want to change delimiter, use awk with -F
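For example, if the file were comma-separated (a hypothetical file_name.csv), you would pass the delimiter with -F:
awk -F',' '{ print $1 }' file_name.csv > tmp_file_name.txt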
