I have a simple text file that has been edited so that it looks like this:
1,47:26:23N,121:15:10W,1641M,T,3 Queens Mtn,
2,48:01:19N,119:56:12W,367M,T,Alta Lake,
4,48:40:19N,121:35:35W,1705M,T,Anderson Butte,
5,48:36:52N,122:15:58W,736M,T,Anderson Mtn,
6,48:55:13N,120:13:41W,2518M,T,Andrew Peak,
8,47:58:06N,119:55:15W,907M,T,Arbuckle Mtn,
11,48:39:49N,121:31:14W,2138M,T,Bacon Peak,
12,48:46:38N,121:48:48W,3176M,T,Baker Mtn,
13,48:57:12N,120:15:34W,2419M,T,Bald Mtn,
I would like to re-edit this file so that it reads:
1,47:26:23N,121:15:10W,1641M,T,3 Queens Mtn,
2,48:01:19N,119:56:12W,367M,T,Alta Lake,
3,48:40:19N,121:35:35W,1705M,T,Anderson Butte,
4,48:36:52N,122:15:58W,736M,T,Anderson Mtn,
5,48:55:13N,120:13:41W,2518M,T,Andrew Peak,
6,47:58:06N,119:55:15W,907M,T,Arbuckle Mtn,
7,48:39:49N,121:31:14W,2138M,T,Bacon Peak,
8,48:46:38N,121:48:48W,3176M,T,Baker Mtn,
9,48:57:12N,120:15:34W,2419M,T,Bald Mtn,
Any help would be much appreciated (and sorry if this is a really obvious question, but after several attempts I'm not making much progress).
Thanks
Chris
# solution 1
paste -d, <(seq $(wc -l <input.txt)) <(cut -d, -f 2- input.txt)
# solution 2
awk -F, -vOFS=, '{$1=NR}1' input.txt
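A third option, if `nl` is available (a sketch: `-w1` and `-s,` keep the number unpadded and comma-separated):

```shell
# solution 3: cut drops the old number, nl prepends a fresh one
cut -d, -f2- input.txt | nl -w1 -s,
```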
# result
1,47:26:23N,121:15:10W,1641M,T,3 Queens Mtn,
2,48:01:19N,119:56:12W,367M,T,Alta Lake,
3,48:40:19N,121:35:35W,1705M,T,Anderson Butte,
4,48:36:52N,122:15:58W,736M,T,Anderson Mtn,
5,48:55:13N,120:13:41W,2518M,T,Andrew Peak,
6,47:58:06N,119:55:15W,907M,T,Arbuckle Mtn,
7,48:39:49N,121:31:14W,2138M,T,Bacon Peak,
8,48:46:38N,121:48:48W,3176M,T,Baker Mtn,
9,48:57:12N,120:15:34W,2419M,T,Bald Mtn,
Related
I am in the course of preparing a presentation and I want a file to be printed on the screen s l o w l y, while I am commenting on it. Typing cat file.txt | less seems to be an obvious solution, but is there another one, more elegant and pleasing to the eye?
perl -ne '$|=1; for (split //) { print; select(undef,undef,undef, 0.15) }' file.txt
$|=1: disable output buffering, so each character appears on screen as soon as it is printed.
for (split //) { print; ... }: loop over the line one character at a time, printing each one.
select(undef,undef,undef, 0.15): sleep 0.15 seconds between characters (you can change this value according to your taste and needs).
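The same idea can be sketched in pure bash (note: read -n1 is a bashism, and the 0.15 s delay is the same placeholder value as in the perl version):

```shell
#!/bin/bash
# stand-in for the file being presented
printf 'hi\nok\n' > file.txt

# print one character at a time; read -n1 grabs a single character,
# and returns an empty $ch at end of line, so we restore the newline
while IFS= read -r -n1 ch; do
    if [ -z "$ch" ]; then printf '\n'; else printf '%s' "$ch"; fi
    sleep 0.15
done < file.txt
```

If pv happens to be installed, `pv -qL 20 file.txt` gives a similar slow-scroll effect by rate-limiting output to 20 bytes per second.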
I have a file and I want to print its data to another file, except for the first line.
Data in the list.txt is
Vik
Ram
Raj
Pooja
OFA
JAL
The output should go into a new file, fd.txt, containing everything below except the first line 'Vik':
Ram
Raj
Pooja
OFA
JAL
Code not working
find $_filepath -type d > list.txt
for i in 2 3 4 5 .. N
do
echo $i
done<list.txt >>fd.txt
tail -n +2 outputs every line of the file starting from the second one.
from https://superuser.com/questions/1071448/tail-head-all-line-except-x-last-first-lines
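Applied to the question above, that one-liner is all that's needed (a sketch; the sample data stands in for the asker's list.txt):

```shell
# sample data standing in for the asker's list.txt
printf '%s\n' Vik Ram Raj Pooja OFA JAL > list.txt

# everything from line 2 onward goes to fd.txt
tail -n +2 list.txt > fd.txt
cat fd.txt
```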
I have some \n ended text:
She walks, in beauty, like the night
Of cloudless climes, and starry skies
And all that's best, of dark and bright
Meet in her aspect, and her eyes
And I want to find which line has the max number of commas, and print that line.
For example, the text above should result as
She walks, in beauty, like the night
Since it has 2 commas (the most of any line).
I have tried:
cat p.txt | grep ','
but do not know where to go now.
You could use awk:
awk -F, -vmax=0 ' NF > max { max_line = $0; max = NF; } END { print max_line; }' < poem.txt
Note that if the max is not unique this picks the first one with the max count.
try this
awk -F, '{if (NF > maxFlds) {maxFlds=NF; maxRec=$0}} END {print maxRec}' poem
Output
She walks, in beauty, like the night
Awk works with 'fields'; the -F says to use ',' to separate the fields. (The default field separator is runs of whitespace, i.e. spaces and tabs.)
NF means Number of Fields (in the current record). So we're using logic to find the record with the maximum number of Fields, capturing the value of the line '$0', and at the END, we print out the line with the most fields.
It is left undefined what will happen if 2 lines have the same maximum # of commas ;-)
I hope this helps.
FatalError's FS-based solution is nice. Another way I can think of is to remove non-comma characters from the line, then count its length:
[ghoti@pc ~]$ awk '{t=$0; gsub(/[^,]/,""); print length($0), t;}' poem
2 She walks, in beauty, like the night
1 Of cloudless climes, and starry skies
1 And all that's best, of dark and bright
1 Meet in her aspect, and her eyes
[ghoti@pc ~]$
Now we just need to keep track of it:
[ghoti@pc ~]$ awk '{t=$0;gsub(/[^,]/,"");} length($0)>max{max=length($0);line=t} END{print line;}' poem
She walks, in beauty, like the night
[ghoti@pc ~]$
Pure Bash:
declare ln=0 # actual line number
declare maxcomma=0 # max number of commas seen
declare maxline='' # corresponding line
while read -r line ; do
  commas="${line//[^,]/}"      # remove all non-commas
  if [ "${#commas}" -gt "$maxcomma" ] ; then
    maxcomma=${#commas}
    maxline="$line"
  fi
  ((ln++))                     # count lines read (not otherwise used here)
done < "poem.txt"
echo "${maxline}"
My program should be able to work this way.
Below is the content of the text file named BookDB.txt
The individual fields are separated by colons (:), and every line in the text file serves as one set of information, with fields in the order stated below.
Title:Author:Price:QtyAvailable:QtySold
Harry Potter - The Half Blood Prince:J.K Rowling:40.30:10:50
The little Red Riding Hood:Dan Lin:40.80:20:10
Harry Potter - The Phoniex:J.K Rowling:50.00:30:20
Harry Potter - The Deathly Hollow:Dan Lin:55.00:33:790
Little Prince:The Prince:15.00:188:9
Lord of The Ring:Johnny Dept:56.80:100:38
I actually intend to
1) Read the file line by line and store it in an array
2) Display it
However, I have no idea how to even start the first one.
From doing research online, below is the code I have written up till now.
#!/bin/bash
function fnReadFile()
{
while read inputline
do
bTitle="$(echo $inputline | cut -d: -f1)"
bAuthor="$(echo $inputline | cut -d: -f2)"
bPrice="$(echo $inputline | cut -d: -f3)"
bQtyAvail="$(echo $inputline | cut -d: -f4)"
bQtySold="$(echo $inputline | cut -d: -f5)"
bookArray[Count]=('$bTitle', '$bAuthor', '$bPrice', '$bQtyAvail', '$bQtySold')
Count = Count + 1
done
}
function fnInventorySummaryReport()
{
fnReadFile
echo "Title Author Price Qty Avail. Qty Sold Total Sales"
for t in "${bookArray[@]}"
do
echo $t
done
echo "Done!"
}
if ! [ -f BookDB.txt ] ; then # check existence of the BookDB file; create it if it does not exist, else continue
touch BookDB.txt
fi
"HERE IT WILL THEN BE THE MENU AND CALLING OF THE FUNCTION"
Thanks to those in advance who helped!
Why would you want to read the entire thing into an array? Query the file when you need information:
#!/bin/sh
# untested code:
# print the values of any line that match the pattern given in $1
grep "$1" BookDB.txt |
while IFS=: read Title Author Price QtyAvailable QtySold; do
echo title = $Title
echo author = $Author
done
Unless your text file is very large, it is unlikely that you will need the data in an array. If it is large enough that you need that for performance reasons, you really should not be coding this in sh.
Since your goal here seems to be clear, how about using awk as an alternative to using bash arrays? Often using the right tool for the job makes things a lot easier!
The following awk script should get you something like what you want:
# This will print your headers, formatted the way you had above, but without
# the need for explicit spaces.
BEGIN {
printf "%-22s %-16s %-14s %-15s %-13s %s\n", "Title", "Author", "Price",
"Qty Avail.", "Qty Sold", "Total Sales"
}
# This is described below, and runs for every record (line) of input
{
printf "%-22s %-16s %-14.2f %-15d %-13d %0.2f\n",
substr($1, 1, 22), substr($2, 1, 16), $3, $4, $5, ($3 * $5)
}
The second section of code (between curly braces) runs for every line of input. printf is for formatted output, and uses the given format string to print out each field, denoted by $1, $2, etc. In awk, these variables are used to access the fields of your record (line, in this case). substr() is used to truncate the output, as shown below, but can easily be removed if you don't mind the fields not lining up. I assumed "Total Sales" was supposed to be Price multiplied by Qty Sold, but you can update that easily as well.
Then, save this script as books.awk and invoke it like so:
$ awk -F: -f books.awk books
Title Author Price Qty Avail. Qty Sold Total Sales
Harry Potter - The Hal J.K Rowling 40.30 10 50 2015.00
The little Red Riding Dan Lin 40.80 20 10 408.00
Harry Potter - The Pho J.K Rowling 50.00 30 20 1000.00
Harry Potter - The Dea Dan Lin 55.00 33 790 43450.00
Little Prince The Prince 15.00 188 9 135.00
Lord of The Ring Johnny Dept 56.80 100 38 2158.40
The -F: tells awk that the fields are separated by colon (:), and -f books.awk tells awk what script to run. Your data is held in books.
Not exactly what you were asking for, but just pointing you toward a (IMO) better tool for this kind of job! awk can be intimidating at first, but it's amazing for jobs that work on records like this!
Is there a standard Linux command I can use to read a file chunk by chunk?
For example, I have a file whose size is 6kB. I want to read/print the first 1kB, and then the 2nd 1kB ...
It seems cat/head/tail won't work in this case.
Thanks very much.
You could do this with read -n in a loop:
while IFS= read -r -d '' -n 1024 BYTES; do
echo "$BYTES"
echo "---"
done < file.dat
dd will do it
dd if=your_file of=output_tmp_file bs=1024 count=1 skip=0
And then skip=1 for the second chunk, and so on.
You then just need to read the output_tmp_file to get the chunk.
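The skip= idea can be put in a loop to walk the whole file; a sketch, with a sample file standing in for your_file:

```shell
#!/bin/bash
# sample 2.5 kB file standing in for your_file
head -c 2500 /dev/zero > your_file

# loop the dd call: chunk_0, chunk_1, ... each holds the next 1 kB
i=0
while :; do
    dd if=your_file of="chunk_$i" bs=1024 count=1 skip="$i" 2>/dev/null
    [ -s "chunk_$i" ] || { rm -f "chunk_$i"; break; }   # empty chunk means EOF
    i=$((i + 1))
done
```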
split can split a file into pieces by a given byte count.
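For example (the 1 kB size and the piece_ prefix are illustrative):

```shell
# sample 6 kB file standing in for the data
head -c 6144 /dev/zero > file.dat

# cut it into 1 kB pieces named piece_aa, piece_ab, ...
split -b 1024 file.dat piece_
```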
Are you trying to actually read a text file? Like with your eyes? Try less or more
You can use fmt, e.g. with a width of 10 bytes:
$ cat file
a quick brown fox jumps over the lazy dog
good lord , oh my gosh
$ tr '\n' ' '<file | fmt -w10
a quick
brown fox
jumps
over
the lazy
dog good
lord , oh
my gosh
Each line is at most 10 characters wide. If you want to read the 2nd chunk, pass it to tools like awk, e.g.:
$ tr '\n' ' '<file | fmt -w10 | awk 'NR==2' # print 2nd chunk
brown fox
To save each chunk to a file (or you can use split with -b):
$ tr '\n' ' '<file | fmt -w10 | awk '{print $0 > "file_"NR}'