How to read a file backwards on Linux? [closed]

I know that I can use cat to print the entire contents of a file from beginning to end on Linux.
Is there a way to do that backwards (last line first)?

Yes, you can use the tac command.
From man tac:
Usage: tac [OPTION]... [FILE]...
Write each FILE to standard output, last line first.
With no FILE, or when FILE is -, read standard input.
Mandatory arguments to long options are mandatory for short options too.
-b, --before attach the separator before instead of after
-r, --regex interpret the separator as a regular expression
-s, --separator=STRING use STRING as the separator instead of newline
--help display this help and exit
--version output version information and exit
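For example, a minimal sanity check (the three input lines are just hypothetical sample data, fed to tac on standard input):
$ printf 'one\ntwo\nthree\n' | tac
three
two
one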

sed '1!G;h;$!d' file
sed -n '1!G;h;$p' file
perl -e 'print reverse <>' file
awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' file
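These are drop-in alternatives for systems without tac, and each of them prints the last line first. The same quick check as above with, say, the perl variant:
$ printf 'one\ntwo\nthree\n' | perl -e 'print reverse <>'
three
two
one
Note that the sed/perl/awk versions hold the whole file in memory, so they are best suited to files of modest size.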

tac is one way, but it is not available by default on every Linux system.
awk can do it like this:
awk '{a[NR]=$0}END{for(i=NR;i>=1;i--)print a[i]}' file
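For illustration, with the same hypothetical three-line input:
$ printf 'one\ntwo\nthree\n' | awk '{a[NR]=$0}END{for(i=NR;i>=1;i--)print a[i]}'
three
two
one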

Related

Replace path in specific line number of file [closed]

I have a file which contains:
Source defaults file; edit that file to configure this script.
AUTOSTART="all"
STATUSREFRESH=10
OMIT_SENDSIGS=0
if test -e /etc/default/openvpn ; then
. /etc/default/openvpn
fi
I want to change the path /etc/default/openvpn in line 5 to /mnt/data/default/openvpn, and the same thing in line 6.
I couldn't do it using sed -i '5s/etc/default...',
and with awk I can't write the result back to the file.
Does anyone have an idea, please?
Thank you.
Commands tried:
var1='/etc/default/openvpn'
var2='/mnt/data/default/openvpn'
sed -i '5s/'$var'/'$var2'/' files.txt
sed -i '5s/etc/default/openvpn/mnt/data/default/openvpn/' files.txt
sed -i '5s/'/etc/default/openvpn'/'/mnt/data/default/openvpn'/g' files.txt
awk 'NR==5 { sub("/etc/default/openvpn", "/etc/default/openvpn", $0); print }' files.txt
With awk, I can't save the changes to the file.
The issue here is the delimiter: the / characters in your paths conflict with sed's default / delimiter.
To resolve this, you can change the delimiter to any other character that does not appear in your data, or escape each slash in the paths as \/.
Using sed
$ sed -i.bak 's|/etc/default/openvpn|/mnt/data/default/openvpn|' input_file
$ cat input_file
Source defaults file; edit that file to configure this script.
AUTOSTART="all"
STATUSREFRESH=10
OMIT_SENDSIGS=0
if test -e /mnt/data/default/openvpn ; then
. /mnt/data/default/openvpn
fi
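If you only want to touch lines 5 and 6, as asked in the question, and keep the paths in shell variables, the same alternative-delimiter idea still applies. A sketch, reusing var1 and var2 from the question and assuming neither path contains the | delimiter or other characters special to sed:
var1='/etc/default/openvpn'
var2='/mnt/data/default/openvpn'
sed -i.bak "5,6s|$var1|$var2|" files.txt
The double quotes around the sed expression are what let the shell expand $var1 and $var2 before sed sees them.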

How to remove 'www.' with awk in output file [closed]

How can I remove all the 'www.' prefixes with awk in my output file?
For example, my output file has multiple sites like:
abc.com
www.def.com
blabla.org
www.zxc.net
I would like to remove all the www. in my output file:
abc.com
def.com
blabla.org
zxc.net
Probably better done in sed:
sed -i 's/^www\.//g' outputFile
In awk:
awk '{gsub(/^www\./,"",$0)}1' outputFile
This is probably what you're looking for:
$ cat file
abc.com
www.def.com
blabla.org
www.zxc.net
www.org
www.acl.lanl.gov
$ sed -E 's/^www\.(([^.]+(\.|$)){2,})/\1/' file
abc.com
def.com
blabla.org
zxc.net
www.org
acl.lanl.gov
The above uses a sed that has -E for ERE support, e.g. GNU or OSX sed. Note the need for a more comprehensive input file to test if a proposed solution really works or not.
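If awk is preferred, a rough counterpart of the same idea (only strip the leading www. when at least two dot-separated components remain) could look like this; it is a sketch and not guaranteed to match the ERE above in every edge case:
awk '/^www\.([^.]+\.)+[^.]+$/ { sub(/^www\./, "") } 1' file
For the sample file above it prints the same six output lines as the sed command.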

Grep the most recent value of a particular column from a CSV file [closed]

"cola","colb","colc","cold","cole","colf"
"a","b","c","d","e","f"
"a1","b1","c1","d1","e1","f1"
"a2","b2","c2","d2","e2","f2"
Assuming this is the CSV file, I want to grep the value "e" from the column "cole", store it in a shell variable, and then use that variable as part of a wget command.
How would I do this?
set -f # disable globbing
variable="$(awk -F'","' 'NR==2 {print $5}' file)"  # -F'","' keeps the surrounding quotes out of the value
set +f
Awk is well suited to this. Since every field in the sample file is quoted and comma-separated, set the field separator to "," so the quotes stay out of the extracted value. If you know the column number, you can simply do:
$ awk -F'","' 'NR==2{print $5}' file.csv
e
This prints the fifth field on the second line. If you want to select the column by name instead:
$ awk -F'","' 'NR==1{for(i=1;i<=NF;i++)c[$i]=i}NR==2{print $c[col]}' col="cole" file.csv
e
Just set col="<name of column to use>".
You can use command substitution to store the value in a shell variable:
$ val="$(awk -F'","' 'NR==2{print $5}' file.csv)"
$ wget --what-ever-option "$val"
Or just use it in place:
$ wget --what-ever-option "$(awk -F'","' 'NR==2{print $5}' file.csv)"
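If "most recent" in the title means the value from the last data row rather than the second line, the same header-mapping approach can remember the value on every row and print it at the end. A sketch, still assuming the column name cole and the file layout shown in the question:
$ awk -F'","' 'NR==1{for(i=1;i<=NF;i++)c[$i]=i; next} {v=$(c[col])} END{print v}' col="cole" file.csv
e2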

Change the path address in a text file by shell scripting [closed]

In my Bash script, I have to change a placeholder name into a path (the new address) in a text file:
(MYADDRESS) should change to ( /home/run1/c1 ), and the result should be saved as a new file.
I did it like this: I defined a new variable holding the new address and tried to substitute it for the previous address in the text file.
I used sed, but it has a problem.
My script was:
#!/bin/bash
# To debug
set -x
x=`pwd`
echo $x
sed "s/MYADDRESS/$x/g" < sample1.txt > new.txt
exit
The output of pwd is likely to contain / characters, making your sed expression look something like s/MYADDRESS//home/user/somewhere/. This makes it impossible for sed to sort out what should be replaced with what. There are two solutions:
Use a different delimiter for sed:
sed "s,MYADDRESS,$x,g" < sample1.txt > new.txt
...although this will have the same problem if the current path contains a comma character or something else that is a special character for sed, so the more robust approach is to use awk instead:
awk -v curdir="$(pwd)" '{ gsub("MYADDRESS", curdir); print }' < sample1.txt > new.txt
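A further option, if staying in plain bash is acceptable, is parameter expansion, which sidesteps the delimiter problem because the replacement text never goes through sed at all. A sketch, assuming sample1.txt is an ordinary text file small enough to read into memory and that the current path contains nothing special to bash's pattern substitution (such as &):
x=$(pwd)
content=$(<sample1.txt)   # read the whole file into a variable
printf '%s\n' "${content//MYADDRESS/$x}" > new.txt
Note that $(<file) drops trailing blank lines, which is usually harmless for a config-style file.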

How to use Linux to read a file line by line and replace all the spaces into ','? [closed]

I am a beginner. I'd like to use the Linux shell to turn the following file:
1 2 2
2 3 4
4 5 2
4 2 1
....
into
1,2,2
2,3,4
4,5,2
4,2,1
Thank you very much!
Are you looking for something like this:
sed -e "s/ /,/g" < a.txt
or maybe easier, like this:
tr ' ' ',' <input >output
or in Vim you can use the Regex:
s/ /,/g
The question asks "line by line". In bash:
while IFS= read -r line; do echo "$line" | sed 's/ /,/g'; done < file
It reads the file line by line into line, prints (echo) each line, and pipes (|) it to sed, which changes the spaces into commas. (IFS= read -r and the quotes around "$line" keep each line's spacing and backslashes intact.) You can add > newfile at the end (but > file itself won't work) if you need to store the result in a file.
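If you want to stay line by line but avoid starting a sed process for every single line, bash's own parameter expansion can do the substitution; a sketch, with newfile as a hypothetical output name:
while IFS= read -r line; do printf '%s\n' "${line// /,}"; done < file > newfile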
But if you don't need anything other than changing characters in the file, processing the whole file at once is easier and probably quicker:
sed -i 's/ /,/g' file
(The -i option modifies the file in place instead of printing the modified text to stdout.)
Read more about sed to understand its syntax; you'll need it eventually.
