Execute bash command in Jenkinsfile Groovy - Linux

Help please
Here I have a part of a Jenkinsfile like this:
@Library('groovy_shared_libraries')_
stage("Ensure droplet don`t exist and Create DO Droplet") {
    // Ensure droplet don`t exist
    ExistDroplet = sh(
        script: "doctl compute droplet list | awk '{gsub(/\./, "", $2)} 1' | grep -w $(echo $FullDomainName | sed "s/\.//g") | awk '{ print $2 }' | wc -l",
        returnStdout: true
    ).trim()
How can I execute this bash command in Jenkinsfile Groovy?
doctl compute droplet list | awk '{gsub(/\./, "", $2)} 1' | grep -w $(echo $FullDomainName | sed "s/\.//g") | awk '{ print $2 }' | wc -l
With the current implementation, it returns an error
WorkflowScript: 26: unexpected char: '\' # line 26, column 63.
te droplet list | awk '{gsub(/\./, "", $
If I add an additional \ to the command, I get this error:
WorkflowScript: 26: illegal string body character after dollar sign;
solution: either escape a literal dollar sign "\$5" or bracket the value expression "${5}" # line 26, column 21.
script: "doctl compute droplet list | awk '{gsub(/\\./, "", $2)} 1' | grep -w $(echo $FullDomainName | sed "s/\\.//g") | awk '{ print $2 }' | wc -l",
After escaping $2, Jenkins shows this error:
WorkflowScript: 26: illegal string body character after dollar sign;
solution: either escape a literal dollar sign "\$5" or bracket the value expression "${5}" # line 26, column 21.
script: "doctl compute droplet list | awk '{gsub(/\\./, "", \$2)} 1' | grep -w $(echo $FullDomainName | sed "s/\\.//g") | awk '{ print \$2 }' | wc -l",
This all works with this command: doctl compute droplet list | grep -w \"$FullDomainName\" | awk '{ print \$2 }' | wc -l
but I need to add
awk '{gsub(/\\./, "", $2)} 1' | grep -w $(echo $FullDomainName | sed "s/\\.//g")

\ is the escape character for both the shell and Groovy. If you want to send a literal backslash to the shell you need to escape it with another one for Jenkins (notice the \\). Inside a Groovy double-quoted string you also have to escape every $ that the shell or awk should see (\$2, \$(...)) and every embedded double quote (\"), otherwise Groovy tries to interpret them itself:
script: "doctl compute droplet list | awk '{gsub(/\\./, \"\", \$2)} 1' | grep -w \$(echo $FullDomainName | sed \"s/\\.//g\") | awk '{ print \$2 }' | wc -l"
$FullDomainName is left unescaped so that Groovy interpolates it, the same way it does in the command that already works for you.
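For reference, once Groovy resolves those escapes the shell should receive your original pipeline unchanged. A quick way to sanity-check it outside Jenkins (the domain value below is just an assumed example):
FullDomainName='my.example.com'   # assumed example value
doctl compute droplet list \
  | awk '{gsub(/\./, "", $2)} 1' \
  | grep -w "$(echo "$FullDomainName" | sed 's/\.//g')" \
  | awk '{ print $2 }' \
  | wc -l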

Related

Strip a part of a string in Linux

Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2 is my string, and the result I want is vm-1.0.3.
What is the best way to do this?
Below is what I tried:
$ echo Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2 | awk -F _ {'print $2'} | awk -F - {'print $1,$2'}
vm 1.0.3
I also tried
$ echo Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2 | awk -F _ {'print $2'} | awk -F - {'print $1"-",$2'}
vm- 1.0.3
Here I do not need the space in between.
I tried using cut and got the expected result:
$ echo Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2 | awk -F _ {'print $2'} | cut -c 1-8
vm-1.0.3
What is the best way to do the same?
Making assumptions from the one example you provided about the general form of your input, so it can be handled robustly, using any sed:
$ echo 'Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2' |
sed 's/^[^-]*-[^-]*-[^_]*_\(.*\)-[^-]*$/\1/'
vm-1.0.3
or any awk:
$ echo 'Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2' |
awk 'sub(/^[^-]+-[^-]+-[^_]+_/,"") && sub(/-[^-]+$/,"")'
vm-1.0.3
You don't need 2 calls to awk, but if you keep them, put the single quotes outside the curly braces and print the hyphen explicitly:
echo Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2 |
awk -F_ '{print $2}' | awk -F- '{print $1 "-" $2}'
If your string has the same format, let the field separator be either - or _
echo Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2 | awk -F"[-_]" '{print $4 "-" $5}'
Or split the second field on - and print the first 2 parts
echo Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2 | awk -F_ '{
split($2,a,"-")
print a[1] "-" a[2]
}'
Or, with GNU awk, a more specific match with a capture group:
echo Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2 |
awk 'match($0, /^Apps-[^_]*_(vm-[0-9]+\.[0-9]+\.[0-9]+)/, a) {print a[1]}'
Output
vm-1.0.3
This is the easiest I can think of:
echo "Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2" | cut -c 25-32
Obviously you need to be sure about the position of your characters. On top of that, you seem to have two separators, '_' and '-', while both characters are also part of the name of your entry.
echo 'Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2' | sed -E 's/^.*_vm-([0-9]+).([0-9]+).([0-9]+)-.*/vm-\1.\2.\3/'
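If the input always has the same shape, plain shell parameter expansion also works with no external tools. A minimal sketch, assuming the part you want always sits between the first _ and the last -:
f='Apps-10.00.00R000-B1111_vm-1.0.3-x86_64.qcow2'
tmp=${f#*_}        # drop everything up to and including the first underscore -> vm-1.0.3-x86_64.qcow2
echo "${tmp%-*}"   # drop the last hyphen-delimited field -> vm-1.0.3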

grep for contents after pattern for word character and comma

echo "this is a test:foo,bar,baz']" | grep -o -E "test:.*" | awk -F: '{ print $2 }'
foo,bar,baz']
I get '] printed at the end. How do I print only the word characters and commas, nothing else? In this case I need to extract only foo,bar,baz.
You can use a single awk for this:
echo "this is a test:foo,bar,baz']" | awk -F 'test:' '{sub(/[^,[:alnum:]].*/, "", $2); print $2}'
foo,bar,baz
Or, you can use a single sed:
echo "this is a test:foo,bar,baz']" | sed 's/.*test://; s/[^,[:alnum:]].*//'
foo,bar,baz
echo "this is a test:foo,bar,baz']"| awk -F: '{sub(/baz../,"baz"); print $2}'
outputs
foo,bar,baz
Using GNU grep Perl regex:
$ echo "this is a test:foo,bar,baz']" | grep -oP "(?<=test:)(\w,*)+"
foo,bar,baz
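A slightly shorter PCRE variant along the same lines (a sketch; \K discards the test: prefix from what grep prints):
$ echo "this is a test:foo,bar,baz']" | grep -oP 'test:\K[\w,]+'
foo,bar,baz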

Linux bash -c command in shell script

I am using ssh in a shell script in order to go to multiple Linux servers and get disk information on a particular disk. I am running the following but I am not able to figure out the quote sequencing... In this example I am just capturing the header for my report...
ssh dbadmin@myserver bash -c '"df -kh | grep File | awk '{ print \$1 " | " \$2 " | " \$3 " | " \$4 " | " \$5 }' | tail -n -1"'
and I get the following error...
bash: -c: line 0: syntax error near unexpected token `|'
bash: -c: line 0: `df -kh | grep File | awk { print | | | | } | tail -n -1'
Any help or suggestions would be great...
Thanks
Better to use a quoted here-doc and avoid escaping:
ssh -t -t dbadmin@myserver<<'EOF'
df -kh | awk -v OFS=" | " '/File/{ print $1, $2, $3, $4, $5 }' | tail -n -1
EOF
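Alternatively, wrapping the whole remote command in single quotes keeps the local shell from touching it, so only the awk field references need a backslash for the remote shell. A sketch of the same report line (server name taken from the question):
ssh dbadmin@myserver 'df -kh | awk -v OFS=" | " "/File/ { print \$1, \$2, \$3, \$4, \$5 }" | tail -n -1'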

Use result of pipeline as argument for another command

I'm trying to make part of the output of the first command as another command's argument.
Output of first command is like this, and 3000 is what I want:
XXXXXXXXXXXXX
abcdefg 1020 10:30
[1000] 3000
I extract the pattern by ./command1 | grep '^\[' | awk 'print $2', so it will print out 3000, the value I want.
I'd like to pass 3000 as an argument to command2, i.e. ./command2 3000. How do I make this work?
command2 $( command1 | awk '/\[/{ print $2 }' )
You can use xargs to pass the input to a new command. In your example you need to include curly braces in your awk argument as well.
./command1 | grep '^\[' | awk '{ print $2 } ' | xargs ./command2
Or more concisely
./command1 | awk '/^\[/ { print $2 }' | xargs ./command2
Example:
echo "[1000] 3000" | awk '/^\[/ { print $2 }' | xargs echo
Output:
3000
There's also sed:
./command1 | sed -n 'n;n;p' | awk '{print $2}'
All together now:
./command2 $(./command1 | sed -n 'n;n;p' | awk '{print $2}') # ./command2 3000
sed will skip 2 lines and print the third.
I would personally try backticks first:
./command2 `./command1 | grep '^\[' | awk '{print $2}'`
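If you want to see the extracted value before handing it to command2, or reuse it, capturing it in a variable first is a small variation on the same idea (a sketch, using command1 and command2 from the question):
value=$( ./command1 | awk '/^\[/{ print $2 }' )
echo "extracted: $value"   # should print 3000 for the sample output above
./command2 "$value"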

Linux Grep Probably Simple Answer

I am working with the zone.tab file under /usr/share/zoneinfo/zone.tab and I am having trouble displaying the data in a certain format.
The command I run:
cat zone.tab | awk '!/#/ {print $3}' | sort
This returns a list of regions and countries:
America/Washington
Pacific/Enderbury
What I need is for the above to return everything after the last / on each line.
There are some cases such as Pacific/Somewhere/A. I have a regex ([^/]+$) that should work but it doesn't. Any ideas?
You can also do it all in a single awk command:
awk '!/^#/ { sub(".*/", "", $3); print $3 }' /usr/share/zoneinfo/zone.tab
where:
  !/^#/                for non-comment lines
  sub(".*/", "", $3)   modify the 3rd column, leaving only the text after the last slash
  print $3             then print the modified 3rd column
Pipe the output to sed -e 's;^.*/;;'. For example,
echo -e "America/Washington\nPacific/Enderbury" | sed 's;^.*/;;'
sed 's:.*/::' /usr/share/zoneinfo/zone.tab
awk '!/^#/ { print $3;} ' < /usr/share/zoneinfo/zone.tab | awk -F/ ' { print $NF; }'
This regex might work:
# echo -e "a\na/b\na/b/c\na/b/c/d\n" | sed 's#^\(\([^/]*/\)*\)\(.*\)#\3#'
a
b
c
d
Perhaps sed -r 's#^(([^/]*/)*)(.*)#\3#' which removes the tangle of backslashes is clearer.
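A single awk that both skips the comment lines and splits the 3rd column on / is another option (a sketch against the same zone.tab):
awk '!/^#/ { n = split($3, a, "/"); print a[n] }' /usr/share/zoneinfo/zone.tab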
