Extract number from first line of a file in Linux

I have a file which has contents like the below
SPEC.2.ATTRID=REVISION&
SPEC.2.VALUE=5&
SPEC.3.ATTRID=NUM&
SPEC.3.VALUE=VS&
I am using the command below to extract only the numbers from the first line. Is this efficient, or can you think of an alternate way?
cat ticketspecdata | tr -d " " | tr -s "[:alpha:]" "~" | tr -d "[=.=]" | cut -d "~" -f2

Using grep:
$ grep -om1 '[0-9]\+' file
2

Or
head -n1 file | tr -cd '[:digit:]'
You may also want to read about UUOC:
http://porkmail.org/era/unix/award.html
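Applied to the command above, the cat can be dropped by redirecting the file straight into the first tr (a minimal sketch of the same pipeline, assuming the same ticketspecdata file):
tr -d " " < ticketspecdata | tr -s "[:alpha:]" "~" | tr -d "[=.=]" | cut -d "~" -f2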

Related

du -s | cut -d' ' -f2 -- cut having no effect

I'm trying to get the size of a specified file only. Normally I have zero issues doing this, but no matter what I try here the filename does not go away.
[root@dockertest Shipper]# du -s c_parser.py | cut -d ' ' -f 2
8 c_parser.py
Nothing changes based on what I put after the pipe. Changing '2' to '1' doesn't do anything. Using:
[root@dockertest Shipper]# du -s c_parser.py | awk -F="c_parser.py" '{ print $1 }'
Does nothing at all either. Any thoughts?
The output of du is tab-separated, so you need to use a tab delimiter. Though tab is the default delimiter in cut, you can also specify it explicitly:
du -s file | cut -d $'\t' -f2
or just
du -s file | cut -f2
In such cases, a hexdump of the output will help you see what is going on:
du -s file | hexdump -c
0000000 8 \t f i l e \n
0000007
Using awk on the tab delimiter too:
du -s file | awk 'BEGIN{FS=OFS="\t"}{print $2}'
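Note that with the tab delimiter, field 1 is the size and field 2 is the filename, so if the goal is the size by itself, a minimal sketch:
du -s file | cut -f1
or captured into a shell variable:
size=$(du -s file | cut -f1)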

Looping a linux command with input and multiple pipes

This command works, but I want to run it on every document (input.txt) in every subdirectory.
tr -d '\n' < input.txt | awk '{gsub(/\. /,".\n");print}' | grep "\[" >> SingleOutput.txt
The code takes the input file and divides it into sentences, one per line. Then it finds all the sentences that contain a "[" and appends them to a single output file.
I tried several looping techniques with find and for loops, but couldn't get it to run in this example. I tried
for dir in ./*; do
(cd "$dir" && tr -d '\n' < $dir | awk '{gsub(/\. /,".\n");print}' | grep “\[" >> /home/dan/SingleOutput.txt);
done;
and also
find ./ -execdir tr -d '\n' < . | awk '{gsub(/\. /,".\n");print}' | grep "\[" >> /home/dan/SingleOutput.txt;
but they didn't execute, just giving me > marks. Any ideas?
Try this:
cd $dir
find ./ | grep "input.txt$" | while read file; do tr -d '\n' < $file | awk '{gsub(/\. /,".\n");print}' | grep "\[" >> SingleOutput.txt; done
This will find all files called input.txt under $dir, then run the pipeline you say is already working on each of them, sending the output to $dir/SingleOutput.txt.
Why not just something like this?
tr -d '\n' < */input.txt | awk '{gsub(/\. /,".\n");print}' | grep "\[" >> SingleOutput.txt
Or are you interested in keeping the output for each input.txt separate?
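If separate output per directory is what you want, a minimal sketch (assuming each subdirectory directly under the current directory contains an input.txt, and that writing a SingleOutput.txt next to each one is acceptable):
# assumption: every subdirectory has its own input.txt
for f in */input.txt; do
    tr -d '\n' < "$f" | awk '{gsub(/\. /,".\n");print}' | grep "\[" > "$(dirname "$f")/SingleOutput.txt"
done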

Display certain line of a file and then translate to uppercase in linux

I'm trying to display the 2nd line of a file and then translate it to uppercase.
I tried head 2 file | tr [a-z] [A-Z].
sed -n '2{p;q;}' file.txt | tr '[:lower:]' '[:upper:]'
or
awk 'NR==2{print toupper($0);exit}' file.txt
or
head -n2 file.txt | tail -n1 | tr '[:lower:]' '[:upper:]'
The [:lower:]/[:upper:] form (POSIX character classes) is the recommended way.
Try:
head -2 file | tail -1 | tr "[a-z]" "[A-Z]"

Shell - recording the output of the 'od' command to a variable

I read on a forum that to generate a random string, you should use the following syntax:
od -a -A n /dev/urandom | head -30 | tr -d ' ' | tr -d '\n' | awk '{print substr($0,1,256)}'
How could I put this output into the variable 'var' instead of displaying it on the screen?
Use backticks or $(), i.e.
var=`command`
or
var=$(command)
Capture it with backticks: ``
VAR=`od -a -A n /dev/urandom | head -30 | tr -d ' ' | tr -d '\n' | awk '{print substr($0,1,256)}'`
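For example, the same pipeline captured with the $() form recommended above:
var=$(od -a -A n /dev/urandom | head -30 | tr -d ' ' | tr -d '\n' | awk '{print substr($0,1,256)}')
echo "$var"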

Get Nth line from unzip -l

I have a jar file, and I need to execute the files in it on Linux.
So I need to get the result of the unzip -l command line by line.
I have managed to extract the file names with this command:
unzip -l package.jar | awk '{print $NF}' | grep com/tests/[A-Za-Z] | cut -d "/" -f3 ;
But I can't figure out how to obtain the file names one after another in order to execute them.
How can I do it, please?
Thanks a lot.
If all you need is the first row of a column, add a pipe and get the first line using head -1.
So your one-liner will look like:
unzip -l package.jar | awk '{print $NF}' | grep com/tests/[A-Za-Z] | cut -d "/" -f3 |head -1;
That will give you the first line.
Now combine head and tail to get the second line:
unzip -l package.jar | awk '{print $NF}' | grep com/tests/[A-Za-Z] | cut -d "/" -f3 |head -2 | tail -1;
That gives you the second line.
But from a scripting point of view this is not a good approach. What you need is a loop, as below:
for class in `unzip -l el-api.jar | awk '{print $NF}' | grep javax/el/[A-Za-Z] | cut -d "/" -f3`; do echo $class; done;
You can replace echo $class with whatever command you wish, and use $class to get the current class name.
HTH
Here is my attempt, which also takes into account Daddou's request to remove the .class extension:
unzip -l package.jar | \
awk -F'/' '/com\/tests\/[A-Za-z]/ {sub(/\.class/, "", $NF); print $NF}' | \
while read baseName
do
echo " $baseName"
done
Notes:
The awk command also handles the tasks of grep and cut
The awk command also handles the removal of the .class extension
The result of the awk command is piped into the while read... command
baseName represents the name of the class file, with the .class extension removed
Now, you can do something with that $baseName
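For example, to run each class rather than just echo it, something like the following sketch could work (the java -cp invocation, the com.tests package prefix, and using the jar itself as the classpath are assumptions based on the paths in the question, not part of the original answer):
unzip -l package.jar | \
awk -F'/' '/com\/tests\/[A-Za-z]/ {sub(/\.class/, "", $NF); print $NF}' | \
while read baseName
do
    # assumption: the classes live in the com.tests package inside package.jar
    java -cp package.jar "com.tests.$baseName"
done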
