Linux append to single line in file

I have run the below command, which outputs the current IP address of eth0 and sends the output to a text file named ip.txt:
ifconfig | grep -A 1 'eth0' | tail -1 | cut -d ':' -f 2 | cut -d ' ' -f 1 > ip.txt
I have a second config file. I want to append the text in the newly created file ip.txt onto the end of line 2 of this file. File 2 has the following data:
[root#******]# cat File2.txt
Device=
IPADDR=
NETWORK=
NETMASK=
.......
I need it to look like this:
[root#******]# cat File2.txt
Device=
IPADDR=someip
NETWORK=
NETMASK=
.......
This is probably possible using awk or sed, but I can't seem to get it to work correctly. Can you help?

sed -i "s/IPADDR=/IPADDR=`cat ip.txt`/g" File2.txt
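A runnable sketch of the same idea, using the modern $(...) form instead of backticks and anchoring the match so only the IPADDR= line is touched (the IP and file contents below are made up for the demo):

```shell
# Sample inputs (hypothetical contents)
printf '10.0.0.5\n' > ip.txt
printf 'Device=\nIPADDR=\nNETWORK=\nNETMASK=\n' > File2.txt

# $(...) is the modern equivalent of backticks; the ^ anchor makes sure
# only lines that start with IPADDR= are rewritten in place
sed -i "s/^IPADDR=/IPADDR=$(cat ip.txt)/" File2.txt
cat File2.txt
```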

You could try the below awk command:
$ awk -v var="$(cat ip.txt)" '/IPADDR=/{$2=var; print $1 $2} !/IPADDR=/{print}' File2.txt
Device=
IPADDR=someip
NETWORK=
NETMASK=
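A self-contained variant of the awk approach that rebuilds the whole line instead of juggling fields (sample file contents are made up):

```shell
printf '10.0.0.5\n' > ip.txt
printf 'Device=\nIPADDR=\nNETWORK=\nNETMASK=\n' > File2.txt

# Pass the IP in with -v (quoted, so whitespace cannot break the command)
# and replace the matching line; the trailing 1 prints every line.
awk -v ip="$(cat ip.txt)" '/^IPADDR=/ { $0 = "IPADDR=" ip } 1' File2.txt
```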

Related

Bash issue with floating point numbers in specific format

(Needed in bash on Linux.) I have a file with numbers like this:
1.415949602
91.09582241
91.12042924
91.40270349
91.45625033
91.70150341
91.70174342
91.70660043
91.70966213
91.72597066
91.7287678315
91.7398645966
91.7542977976
91.7678146465
91.77196659
91.77299733
abcdefghij
91.7827827
91.78288651
91.7838959
91.7855
91.79080605
91.80103075
91.8050505
sed 's/^91\.//' file (working)
Is there any way I can do these 3 steps?
First I tried this:
cat input | tr -d 91. > 1.txt (didn't work)
cat input | tr -d "91." > 1.txt (didn't work)
cat input | tr -d '91.' > 1.txt (didn't work)
then
grep -x '.\{10\}' (working)
then
grep "^[6-9]" (working)
Final one-line solution:
cat input.txt | sed 's/\91.//g' | grep -x '.\{10\}' | grep "^[6-9]" > output.txt
Your "final" solution:
cat input.txt |
sed 's/\91.//g' |
grep -x '.\{10\}' |
grep "^[6-9]" > output.txt
should avoid the useless cat and move the backslash in the sed script to the correct place (I also added a ^ anchor and removed the g flag, since you don't expect more than one match on a line anyway):
sed 's/^91\.//' input.txt |
grep -x '.\{10\}' |
grep "^[6-9]" > output.txt
You might also be able to get rid of at least one useless grep but at this point, I would switch to Awk:
awk '{ sub(/^91\./, "") } /^[6-9].{9}$/' input.txt >output.txt
The sub() does what your sed replacement did; the final condition says to print lines which match the regex.
The same can conveniently, but less readably, be written in sed:
sed -n 's/^91\.\([6-9][0-9]\{9\}\)$/\1/p' input.txt >output.txt
assuming your sed dialect supports BRE regex with repetitions like [0-9]\{9\}.
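As a sanity check, here is the corrected sed/grep pipeline run on a small subset of the sample data above:

```shell
# A subset of the sample input
cat > input.txt <<'EOF'
1.415949602
91.70150341
91.7287678315
abcdefghij
91.7542977976
EOF

# Strip the leading 91., keep only 10-character lines starting with 6-9
sed 's/^91\.//' input.txt | grep -x '.\{10\}' | grep '^[6-9]' > output.txt
cat output.txt
# output.txt now contains 7287678315 and 7542977976
```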

How can I extract all the services from /etc/services?

I want to extract the services from the file /etc/services. The problem is that when extracting them, I get the following output when entering head file.txt:
acr-nema
afbackup
afbackup
afmbackup
afmbackup
afpovertcp
afpovertcp
afs3-bos 7007
But the desired output should be as follows:
acr-nema 104/udp dicom
afbackup 2988/tcp #
afbackup 2988/udp
afmbackup 2989/tcp #
afmbackup 2989/udp
afpovertcp 548/tcp #
afpovertcp 548/udp
afs3-bos 7007/tcp #
The command that I am entering is the following:
cat /etc/services | sed '/^#/ d' | cut -d ' ' -f 1 | sort | awk '!a[$0]++' > file.txt
give this a try:
awk '$0 && /^[^#]/ && !a[$0]++' /etc/services | sort
By the way, don't do cat aFile | awk '...'; instead, do awk '...' aFile.
cut -f1 /etc/services | grep '^[^#].*s$' | sort --unique
can be used to get the unique services, but if you want to store them in a different file you can append > file.txt
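If the goal is to keep the whole line (port and protocol included) rather than just the name, the same dedupe idea works on full lines; shown here on a made-up subset in /etc/services format:

```shell
# Hypothetical subset in /etc/services format
cat > services.sample <<'EOF'
# Network services, Internet style
afbackup        2988/tcp
afbackup        2988/udp
afpovertcp      548/tcp
afpovertcp      548/tcp
EOF

# Skip comments and blank lines, print each distinct line once
awk '!/^#/ && NF && !seen[$0]++' services.sample
```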

How to save the output of this awk command to file?

I want to save the output of this command to another text file:
awk '{print $2}'
It extracts the second column from a text file. Now I want to save the output to another text file.
Thanks
awk '{ print $2 }' text.txt > outputfile.txt
> => This will redirect STDOUT to a file. If the file does not exist, it will be created. If the file exists, its content will (in effect) be cleared out and the new data written to it.
>> => This means the same as above, but if the file exists, the new data will be appended to it.
Eg:
$ cat /etc/passwd | awk -F: '{ print $1 }' | tail -10 > output.txt
$ cat output.txt
_warmd
_dovenull
_netstatistics
_avbdeviced
_krb_krbtgt
_krb_kadmin
_krb_changepw
_krb_kerberos
_krb_anonymous
_assetcache
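The difference between the two redirections is easy to demonstrate with a couple of throwaway lines:

```shell
printf 'a 1\n' | awk '{ print $2 }' >  out.txt  # creates/truncates out.txt
printf 'b 2\n' | awk '{ print $2 }' >> out.txt  # appends to out.txt
cat out.txt
# out.txt now holds two lines: 1 and 2
```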
Alternatively, you can use the command tee for redirection. The command tee will redirect STDOUT to a specified file as well as to the terminal screen.
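A quick sketch of tee in this role:

```shell
# tee writes its input both to the given file and to stdout
printf 'x 10\ny 20\n' | awk '{ print $2 }' | tee output.txt
```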
For more about shell redirection, see the following link:
http://www.techtrunch.com/scripting/redirections-and-file-descriptors
There is a way to do this from within awk itself (docs)
➜ cat text.txt
line 1
line 2
line three
line 4 4 4
➜ awk '{print $2}' text.txt
1
2
three
4
➜ awk '{print $2 >"text.out"}' text.txt
➜ cat text.out
1
2
three
4
Try this command:
awk '{print $2}' ORGFILENAME.TXT > SAVENAME.TXT
Thanks.

Get the Nth line from unzip -l

I have a jar file, and I need to execute the files in it on Linux.
So I need to get the result of the unzip -l command line by line.
I have managed to extract the file names with this command:
unzip -l package.jar | awk '{print $NF}' | grep com/tests/[A-Za-z] | cut -d "/" -f3
But I can't figure out how to obtain the file names one after another in order to execute them.
How can I do it, please?
Thanks a lot.
If all you need is the first row in a column, add a pipe and get the first line using head -1.
So your one-liner will look like:
unzip -l package.jar | awk '{print $NF}' | grep com/tests/[A-Za-z] | cut -d "/" -f3 | head -1
That will give you the first line.
Now, combine head and tail to get the second line:
unzip -l package.jar | awk '{print $NF}' | grep com/tests/[A-Za-z] | cut -d "/" -f3 | head -2 | tail -1
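The head -N | tail -1 pair can also be collapsed into a single sed or awk call; with N=2, for example:

```shell
printf 'first\nsecond\nthird\n' | sed -n '2p'    # prints the 2nd line
printf 'first\nsecond\nthird\n' | awk 'NR == 2'  # same, in awk
```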
But from a scripting point of view this is not a good approach. What you need is a loop, as below:
for class in `unzip -l el-api.jar | awk '{print $NF}' | grep javax/el/[A-Za-z] | cut -d "/" -f3`; do echo "$class"; done
You can replace echo "$class" with whatever command you wish, and use $class to get the current class name.
HTH
Here is my attempt, which also takes into account Daddou's request to remove the .class extension:
unzip -l package.jar | \
awk -F'/' '/com\/tests\/[A-Za-z]/ {sub(/\.class/, "", $NF); print $NF}' | \
while read baseName
do
echo " $baseName"
done
Notes:
The awk command also handles the tasks of grep and cut
The awk command also handles the removal of the .class extension
The result of the awk command is piped into the while read... command
baseName represents the name of the class file, with the .class extension removed
Now, you can do something with that $baseName
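Since no jar is at hand here, the awk stage can be exercised against a simulated unzip -l listing (the jar contents below are made up); inside the loop you could then run each class with something like java -cp package.jar "com.tests.$baseName", assuming that is the actual package path:

```shell
# Simulated `unzip -l` output (hypothetical jar contents)
cat > listing.txt <<'EOF'
  Length      Date    Time    Name
---------  ---------- -----   ----
     1234  2020-01-01 00:00   com/tests/Foo.class
     5678  2020-01-01 00:00   com/tests/Bar.class
---------  ---------- -----   ----
     6912                     2 files
EOF

# Split on /, keep only the com/tests lines, strip .class from the last field
awk -F'/' '/com\/tests\/[A-Za-z]/ { sub(/\.class$/, "", $NF); print $NF }' listing.txt
```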

shell script cut and sed help

I have two files:
$cat file1.txt
field1=value1
field2=value2
field3=value3
::
::
$cat file2.txt
something.field1.some
otherthing.field2.anything
anything.field3.something
I need to read file1.txt, look up each fieldN in file2.txt, and replace it with the corresponding valueN, so that the result will be:
something.value1.some
otherthing.value2.anything
anything.value3.something
Provided there are no special sed-type characters in your fields and values, you can use a meta-sed approach:
pax> sed -e 's/^/s\/\\./' -e 's/=/\\.\/./' -e 's/$/.\/g/' file1.txt >x.sed
pax> sed -f x.sed file2.txt
something.value1.some
otherthing.value2.anything
anything.value3.something
If you look at the x.sed file, you'll see that the first sed just makes a list of sed commands to be executed on your second file.
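To make the meta-sed step concrete, here it is run on a two-line file1.txt; the generated x.sed then holds one substitution command per field:

```shell
printf 'field1=value1\nfield2=value2\n' > file1.txt

# Each input line fieldN=valueN becomes a sed command s/\.fieldN\./.valueN./g
sed -e 's/^/s\/\\./' -e 's/=/\\.\/./' -e 's/$/.\/g/' file1.txt > x.sed
cat x.sed
# s/\.field1\./.value1./g
# s/\.field2\./.value2./g
```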
Use awk:
$ awk -F"[=.]" 'FNR==NR{a[$1]=$2;next}{$2=a[$2]}1' OFS="." file1 file2
something.value1.some
otherthing.value2.anything
anything.value3.something
This unfortunately requires the files to be sorted:
tr = . < file1.txt | join -t . -1 1 -2 2 -o 2.1 1.2 2.3 - file2.txt
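A self-contained run of the join variant on the sample data (both files here happen to be sorted on the join field already):

```shell
printf 'field1=value1\nfield2=value2\n' > file1.txt
printf 'something.field1.some\notherthing.field2.anything\n' > file2.txt

# tr turns field1=value1 into field1.value1, which join matches against
# field 2 of file2.txt; -o picks the surrounding pieces plus the value
tr = . < file1.txt | join -t . -1 1 -2 2 -o 2.1 1.2 2.3 - file2.txt
```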
