I am trying to add SQL databases created from the Azure portal to an Azure failover group. The Terraform block calls a bash script:
data "external" "database_names" {
program = ["sh", "${path.module}/scripts/fetch_db_id.sh"]
query = {
db_rg = azurerm_resource_group.mssql.name
server_name = azurerm_mssql_server.mssqlserver.name
}
}
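Worth noting for anyone debugging this: Terraform serializes the query map as a single JSON object on the script's stdin, which is what the jq call in the script below unpacks. You can exercise the script by hand the same way (the values here are placeholders):
echo '{"db_rg":"sql_rg","server_name":"sql_server_name"}' | ./scripts/fetch_db_id.sh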
fetch_db_id.sh bash script:
#!/usr/bin/env bash
# This script gets the database names at runtime.
eval "$(jq -r '@sh "export DB_RG=\(.db_rg) SERVER_NAME=\(.server_name)"')"
if [[ -z $DB_RG || -z $SERVER_NAME ]]; then
  echo "Required variables DB_RG & SERVER_NAME not set" 1>&2
  exit 1
fi
db_id=$(az sql db list --resource-group $DB_RG --server $SERVER_NAME --query [*].id | grep -v master 2>/dev/null)
jq -n --arg db_id "$db_id" '{"db_id":$db_id}'
unset DB_RG SERVER_NAME NODE_RG db_id
exit 0
Bash script output (run locally on a Linux VM, without Terraform):
"db_id": "[\n "/subscriptions/my_subscription_id/resourceGroups/sql_rg/providers/Microsoft.Sql/servers/sql_server_name/databases/databaseprd-db1",\n "/subscriptions/my_subscription_id/resourceGroups/sql_rg/providers/Microsoft.Sql/servers/sql_server_name/databases/databaseprd-db2",\n "/subscriptions/my_subscription_id/resourceGroups/sql_rg/providers/Microsoft.Sql/servers/sql_server_name/databases/databaseprd-db3"\n]"
}
Terraform resource block to add the databases to the failover group:
resource "azurerm_sql_failover_group" "mssql_failover" {
count = (var.enable_read_replica && var.environment == "prd") ? 0 : 1
name = var.mssql_failover_group
resource_group_name = azurerm_resource_group.mssql.name
server_name = azurerm_mssql_server.mssqlserver.name
databases = toset(jsondecode(data.external.database_names.result["db_id"]))
partner_servers {
id = azurerm_mssql_server.replica[0].id
}
read_write_endpoint_failover_policy {
mode = "Automatic"
grace_minutes = 60
}
depends_on = [
azurerm_mssql_server.replica
]
}
Terraform error, when executed via the Jenkins pipeline:
data.external.database_names.result["db_id"] is "[\n "/subscriptions/my_subscription_id/resourceGroups/sql_rg/providers/Microsoft.Sql/servers/sql_server_name/databases/databaseprd-db1",\n "/subscriptions/my_subscription_id/resourceGroups/sql_rg/providers/Microsoft.Sql/servers/sql_server_name/databases/databaseprd-db2",\n "/subscriptions/my_subscription_id/resourceGroups/sql_rg/providers/Microsoft.Sql/servers/sql_server_name/databases/databaseprd-db3",\n]"
Call to function "jsondecode" failed: invalid character ']' looking for beginning of value.
Note: an extra "," is being introduced when we run the script through the Terraform Jenkins pipeline, which would explain the JSON error.
I don't think there's enough information here to be certain, but I can give a partial, speculative answer.
az sql db list --resource-group $DB_RG --server $SERVER_NAME --query [*].id | grep -v master 2>/dev/null
This looks suspicious: the az command outputs JSON, but you're filtering it with grep. What does the output from az look like here? Do you expect the result to be valid JSON?
You say that your output has a comma that you don't expect. This is what you'd expect to see if the az command spat out something like:
[
"/blah/blah/databaseprd-db1",
"/blah/blah/databaseprd-db2",
"/blah/blah/databaseprd-db3",
"/blah/blah/master"
]
The grep -v master would remove the line containing the term "master", leaving you with invalid JSON:
[
"/blah/blah/databaseprd-db1",
"/blah/blah/databaseprd-db2",
"/blah/blah/databaseprd-db3",
]
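You can reproduce that breakage locally without az at all: grep happily leaves the dangling comma behind, and any JSON parser (jq, or Terraform's jsondecode) will then reject the result:
printf '[\n  "/blah/blah/databaseprd-db1",\n  "/blah/blah/master"\n]\n' | grep -v master
# [
#   "/blah/blah/databaseprd-db1",
# ]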
If you want to use jq, you could replace the grep with something like
jq 'map(select(index("master")|not))'
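Putting it together, here is a sketch of a corrected fetch step, still assuming the az output shape guessed at above:
db_id=$(az sql db list --resource-group "$DB_RG" --server "$SERVER_NAME" --query '[*].id' | jq -c 'map(select(index("master") | not))')
jq -n --arg db_id "$db_id" '{"db_id":$db_id}'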
Related
I have the following Jenkinsfile, executing in a Linux container under Kubernetes. My Jenkins server is version 2.263.4, running on Windows 2012 R2. Any variable I define in my environment section shows up in the sh step with a newline at the end:
pipeline {
  agent {
    kubernetes {
      label UUID.randomUUID().toString()
      yaml """
# ..snip...
"""
    }
  }
  environment {
    VAR1 = 'VALUE 1'
    VAR2 = 'VALUE 2'
  }
  stages {
    stage('One') {
      steps {
        container('docker') {
          sh 'echo -n "$PATH"'
          sh 'echo -n "$VAR1"'
          sh 'echo -n "$VAR2"'
        }
      }
    }
  }
}
Which results in this output:
[Pipeline] sh
+ echo -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[Pipeline] sh
+ echo -n 'VALUE 1
'
VALUE 1
[Pipeline] sh
+ echo -n 'VALUE 2
'
VALUE 2
As you can see, the PATH environment variable has no newline in the shell command, but the two variables from the Jenkinsfile have newlines at the end of their values and are surrounded by single quotes, even though they are surrounded by double quotes in my sh command.
The problem happens when I use these values as parameters to other commands. For example
sh 'git clone -b $BRANCH $REMOTE source'
Ends up running this command:
+ git clone -b 'BranchValue
' 'RemoteValue
+ ' source
How do I get my environment variables to not have newlines at the end of their values?
I had the same issue, even without Docker, with a Jenkinsfile as simple as this:
pipeline {
  agent any
  stages {
    stage('Demo') {
      environment {
        POM_VERSION = sh(script: 'echo "2.5-SNAPSHOT"', returnStdout: true)
      }
      steps {
        echo "POM_VERSION '${POM_VERSION}'"
      }
    }
  }
}
The sh step indeed adds a newline at the end of the shell script's output. The Jenkins console log looks like this (note the closing quote ' at the beginning of the next line):
[Pipeline] echo
POM_VERSION '2.5-SNAPSHOT
'
During troubleshooting I came across the article How to strip some form of new line character at end of parsed Jenkinsfile variable.
It turned out that it was enough to call trim() on sh's returned value, like this:
POM_VERSION = sh(script: 'echo "2.5-SNAPSHOT"', returnStdout: true).trim()
Now the value of the variable is the expected one:
[Pipeline] echo
POM_VERSION '2.5-SNAPSHOT'
Jenkins version: 2.303.2
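A side note on why this is easy to miss when testing locally: POSIX command substitution strips trailing newlines, while Jenkins' returnStdout hands back the raw stream. A quick demonstration in plain bash (od just makes the bytes visible):
echo "2.5-SNAPSHOT" | od -c    # raw stdout ends in \n
v=$(echo "2.5-SNAPSHOT")
printf '%s' "$v" | od -c       # the $() substitution has stripped it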
I have a long-running job that cannot finish within Lambda's 15-minute limit, so I decided to use an EC2 worker instance to run it. The job needs to be kicked off from a Lambda function. I am using the following Python code to send the command to the EC2 instance:
ssm.send_command(
    InstanceIds=['*****'],
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': [f'/home/ssm-user/get_cert_attributes.sh --doc_id={doc_id}']})
The shell script is getting called; however, I am unable to parse the --doc_id argument. I am using the code block below to parse it, but doc_id comes out blank. Any help in this regard would be highly appreciated.
#!/bin/bash
while [ "${1:-}" != "" ]; do
  case "$1" in
    "-d" | "--doc_id")
      shift
      doc_id=$1
      ;;
  esac
  shift
done
echo $doc_id
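One thing worth checking against the script above: the case statement only matches the space-separated form of the flag, while the send_command call passes --doc_id={doc_id} with an equals sign, which matches no branch at all and would leave doc_id blank. A quick local test (the value is hypothetical) shows the difference:
./get_cert_attributes.sh --doc_id 123    # prints: 123
./get_cert_attributes.sh --doc_id=123    # prints an empty line; no case branch matches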
I resolved the issue by creating my own SSM document:
schemaVersion: "2.2"
description: "Get document attributes"
parameters:
docid:
type: "String"
description: "Document id to be processed"
mainSteps:
- action: "aws:runShellScript"
name: "GetDocAttr"
inputs:
runCommand:
- "/home/ssm-user/get_cert_attributes.sh --doc_id {{docid}}"
On the shell script side, I had to export doc_id so that the variable is available to child processes started later in the session.
#!/bin/bash
while [ "${1:-}" != "" ]; do
  case "$1" in
    "-d" | "--doc_id")
      shift
      doc_id=$1
      ;;
  esac
  shift
done
echo $doc_id
export doc_id
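To be precise about what export buys you here: it makes doc_id visible to child processes started by this script, not to unrelated later shells. A minimal illustration:
doc_id=123
export doc_id
bash -c 'echo "child sees: $doc_id"'    # prints: child sees: 123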
I'm able to run a shell script with arguments like this. You don't need all the messaging and other information, but it's there just in case.
import boto3
commands = ['sudo -u ec2-user ./p1_consolidate.sh 0 1 0']
instanceid = 'i-0780a999exxxdxxx'
ssmc = boto3.client('ssm')
response_send = ssmc.send_command(
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': commands,
                'workingDirectory': ['/home/ec2-user'],
                'executionTimeout': ['14400']},
    OutputS3BucketName='xxxxx-data-files-for-functions',
    OutputS3KeyPrefix='ssm-outputfiles-automation/',
    InstanceIds=[instanceid],
    ServiceRoleArn='arn:aws:iam::xxxxx6657583:role/SNS-Publish-SSM-Statuses',
    NotificationConfig={
        'NotificationArn': 'arn:aws:sns:us-east-1:xxxxx6657583:your-sns',
        'NotificationEvents': ['All'],
        'NotificationType': 'Command'}
)
p1_consolidate.sh is stored in the /home/ec2-user/ directory.
It just takes the three arguments sent via commands above; then the Python file runs with those arguments.
#!/bin/bash
s=$1
e=$2
q=$3
nohup python /home/ec2-user/code/mypythonfile.py -s "$s" -e "$e" -q "$q" &
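One caveat with the nohup line above: without a redirect, nohup appends the Python output to nohup.out in the working directory. If you want the log somewhere predictable, a variant like this works (the log path is just an assumption):
nohup python /home/ec2-user/code/mypythonfile.py -s "$s" -e "$e" -q "$q" > /home/ec2-user/consolidate.log 2>&1 &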
Question: how do I add a string before and after a block, when a specified string is matched between patterns, using sed in bash?
In the configuration below, I want to add /* one line above object Host "kali" { and add */ on the line after the next occurrence of } (not after the last occurrence of }).
This is my input:
object Host "linux" {
import "windows"
address = "linux"
groups = ["linux"]
}
object Host "kali" {
import "linux"
address = "linux"
groups = [linux ]
}
object Host "windows" {
import "linux"
address = "linux"
groups = ["windows" ]
}
This is the expected output:
object Host "linux" {
import "windows"
address = "linux"
groups = ["linux"]
}
/*
object Host "kali" {
import "linux"
address = "linux"
groups = [linux ]
}
*/
object Host "windows" {
import "linux"
address = "linux"
groups = ["windows" ]
}
This is what I tried:
#! /bin/bash
NONE='\033[00m'
RED='\033[01;31m'
GREEN='\033[0;32m'
clear
echo -e "Enter the names to comment in config file"
cat > comment-file.txt
clear
echo -e "#################################################################################"
echo "Please wait. The names will be commented shortly............"
echo -e "#################################################################################"
echo "Don't press any button, please hold on...."
while read -r names
do
  loc=$(grep -il "object.*Host.*\"$names.*\"" /home/jo/folders/test-sc/*.txt)
  if [ -z $loc ]
  then
    echo -e " $names$RED No Object definition found $NONE "
  else
    sed -i '/object Host \"$names.*\" {/ { s,^,/*\n,
      : loop
      /}/ {
        s,$,\n*/,
        p
        d
      }
      N
      b loop
    }' "$loc"
    echo -e " $names - $loc - $GREEN Object host definition commented $NONE "
  fi
done < comment-file.txt
echo -e "#################################################################################"
echo -e "\t\t\t\t Script completed \t\t\t\t"
echo -e "#################################################################################"
rm -rf comment-file.txt
Error:
No changes are made to the target files, i.e. /home/jo/folders/test-sc/*.txt.
This might work for you (GNU sed):
sed -e '/object Host "kali"/{i\/*' -e ':a;n;/}/!ba;a\*/' -e '}' file
Look for a line containing object Host "kali", insert a line before it containing /*, read/print further lines until one containing }, and append the line */.
After finding object Host "kali" {:
Prepend /*\n to the pattern space
If } is seen, append \n*/ to the pattern space, print and delete
Append Next line to the pattern space and loop back to step 2.
sed -e '/object Host "kali" {/ {
  s,^,/*\n,
  : loop
  /}/ {
    s,$,\n*/,
    p
    d
  }
  N
  b loop
}'
Addendum: to properly pass "$names" into the sed script, we need to follow the sh quoting rules. The idea is to embed "$names" in the sed script using double quotes, so the sed invocation looks like the following:
sed -i -e "/object Host \"$names\" {/ {
s,^,/*\n,
: loop
/}/ {
s,$,\n*/,
p
d
}
N
b loop
}" "$loc"
I'm trying to pass a token value from one .tf file into another .tf file.
I have tried to follow this link and also this article.
data.tf
data "external" "get_token" {
program = ["/bin/sh", "${path.module}/get-token.sh"]
}
get-token.sh
#!/bin/bash
token=$(kubectl -n kube-system exec [POD_NAME] cat /var/lib/kube-proxy/kubeconfig 2>/dev/null | grep token | awk '{print $2}')
proxy.tf
...
metadata_startup_script = <<-EOT
  - name: kube-proxy
    user:
      token: ${lookup(data.external.get_token.result, "token")}
      certificate-authority-data: ${google_container_cluster.new_container_cluster.master_auth.0.cluster_ca_certificate}
  ...
EOT
My expectation is that token gets a value, the same way certificate-authority-data does.
certificate-authority-data has exactly the value I expect, but token is nil or blank.
I have run get-token.sh manually and it works. But when Terraform tries to read it, the value is not parsed successfully. I have added ' before and after the variable ${lookup(data.external.get_token.result, "token")}; that does not seem to work either.
https://www.terraform.io/docs/providers/external/data_source.html
The program must then produce a valid JSON object on stdout, which will be used to populate the result attribute exported to the rest of the Terraform configuration. This JSON object must again have all of its values as strings. On successful completion it must exit with status zero.
So the script should return a JSON object.
#!/bin/bash
...
# add the line below to produce a JSON result
jq -n --arg token "$token" '{"token":$token}'
Or, if jq is not available:
#!/bin/bash
...
# add the line below
echo -n "{\"token\":\"${token}\"}"
I have something like this in a Jenkinsfile (Groovy), and I want to record the stdout and the exit code in a variable in order to use the information later.
sh "ls -l"
How can I do this, especially as it seems that you cannot really run any kind of groovy code inside the Jenkinsfile?
The latest version of the pipeline sh step allows you to do the following:
// Git committer email
GIT_COMMIT_EMAIL = sh(
    script: 'git --no-pager show -s --format=\'%ae\'',
    returnStdout: true
).trim()
echo "Git committer email: ${GIT_COMMIT_EMAIL}"
Another feature is the returnStatus option.
// Test commit message for flags
BUILD_FULL = sh(
    script: "git log -1 --pretty=%B | grep '\\[jenkins-full]'",
    returnStatus: true
) == 0
echo "Build full flag: ${BUILD_FULL}"
These options were added based on this issue.
See official documentation for the sh command.
For declarative pipelines (see comments), you need to wrap the code in a script step:
script {
    GIT_COMMIT_EMAIL = sh(
        script: 'git --no-pager show -s --format=\'%ae\'',
        returnStdout: true
    ).trim()
    echo "Git committer email: ${GIT_COMMIT_EMAIL}"
}
Current Pipeline version natively supports returnStdout and returnStatus, which make it possible to get output or status from sh/bat steps.
An example:
def ret = sh(script: 'uname', returnStdout: true)
println ret
Official documentation.
The quick answer is this:
sh "ls -l > commandResult"
result = readFile('commandResult').trim()
I think there exists a feature request to be able to get the result of the sh step, but as far as I know there is currently no other option.
EDIT: JENKINS-26133
EDIT2: Not quite sure since which version, but the sh/bat steps can now return the std output; simply:
def output = sh returnStdout: true, script: 'ls -l'
If you want to get the stdout AND know whether the command succeeded or not, just use returnStdout and wrap it in an exception handler:
scripted pipeline
try {
    // Fails with non-zero exit if dir1 does not exist
    def dir1 = sh(script: 'ls -la dir1', returnStdout: true).trim()
} catch (Exception ex) {
    println("Unable to read dir1: ${ex}")
}
output:
[Pipeline] sh
[Test-Pipeline] Running shell script
+ ls -la dir1
ls: cannot access dir1: No such file or directory
[Pipeline] echo
unable to read dir1: hudson.AbortException: script returned exit code 2
Unfortunately hudson.AbortException is missing any useful method to obtain that exit status, so if the actual value is required you'd need to parse it out of the message (ugh!)
Contrary to the Javadoc https://javadoc.jenkins-ci.org/hudson/AbortException.html the build is not failed when this exception is caught. It fails when it's not caught!
Update:
If you also want the STDERR output from the shell command, Jenkins unfortunately fails to properly support that common use-case. A 2017 ticket JENKINS-44930 is stuck in a state of opinionated ping-pong whilst making no progress towards a solution - please consider adding your upvote to it.
As to a solution now, there could be a couple of possible approaches:
a) Redirect STDERR to STDOUT with 2>&1 - but it's then up to you to parse it out of the main output, and you won't get the output at all if the command failed, because you're in the exception handler.
b) Redirect STDERR to a temporary file (the name of which you prepare earlier) with 2>filename (but remember to clean up the file afterwards), i.e. the main code becomes:
def stderrfile = 'stderr.out'
try {
    def dir1 = sh(script: "ls -la dir1 2>${stderrfile}", returnStdout: true).trim()
} catch (Exception ex) {
    def errmsg = readFile(stderrfile)
    println("Unable to read dir1: ${ex} - ${errmsg}")
}
c) Go the other way: set returnStatus=true instead, dispense with the exception handler, and always capture output to a file, i.e.:
def outfile = 'stdout.out'
def status = sh(script: "ls -la dir1 >${outfile} 2>&1", returnStatus: true)
def output = readFile(outfile).trim()
if (status == 0) {
    // output is directory listing from stdout
} else {
    // output is error message from stderr
}
Caveat: the above code is Unix/Linux-specific - Windows requires completely different shell commands.
Here is a sample case, which I believe will make sense!
node('master') {
    stage('stage1') {
        def commit = sh(returnStdout: true, script: '''echo hi
echo bye | grep -o "e"
date
echo lol''').split()
        echo "${commit[-1]} "
    }
}
For those who need to use the output in subsequent shell commands, rather than groovy, something like this example could be done:
stage('Show Files') {
    environment {
        MY_FILES = sh(script: 'cd mydir && ls -l', returnStdout: true)
    }
    steps {
        sh '''
            echo "$MY_FILES"
        '''
    }
}
I found the examples on code maven to be quite useful.
All of the above methods will work, but to use the variable as an environment variable inside your code you need to export it first.
script {
    sh " 'shell command here' > command"
    command_var = readFile('command').trim()
    sh "export command_var=$command_var"
}
Replace the shell command with the command of your choice. Now, if you are running Python code, you can just call os.getenv("command_var"), which will return the output of the shell command executed previously.
How to read a shell variable in Groovy / how to assign a shell return value to a Groovy variable.
Requirement: open a text file, read the lines using shell, store the values in Groovy, and get the parameters from each line.
Here , is the delimiter.
Ex: releaseModule.txt
./APP_TSBASE/app/team/i-home/deployments/ip-cc.war/cs_workflowReport.jar,configurable-wf-report,94,23crb1,artifact
./APP_TSBASE/app/team/i-home/deployments/ip.war/cs_workflowReport.jar,configurable-temppweb-report,394,rvu3crb1,artifact
========================
Here we want to get the module name, the 2nd parameter (configurable-wf-report); the build number, the 3rd parameter (94); and the commit id, the 4th parameter (23crb1).
def module = sh(script: """awk -F',' '{ print \$2 "," \$3 "," \$4 }' releaseModules.txt | sort -u """, returnStdout: true).trim()
echo module
List lines = module.split('\n').findAll { !it.startsWith(',') }
def buildid
def Modname
lines.each {
    List det1 = it.split(',')
    buildid = det1[1].trim()
    Modname = det1[0].trim()
    tag = det1[2].trim()
    echo Modname
    echo buildid
    echo tag
}
If you don't have a single sh command but a block of sh commands, returnStdout won't work.
I had a similar issue, where I applied something that is not a clean way of doing this, but it eventually worked and served the purpose.
Solution:
In the shell block, echo the value and write it to some file.
Outside the shell block and inside the script block, read this file, trim it, and assign it to any local/params/environment variable.
Example:
steps {
    script {
        sh '''
            # Using '>' to create a new file every time, so it holds the newest value of PATH
            echo $PATH > path.txt
        '''
        path = readFile(file: 'path.txt')
        path = path.trim() // local groovy variable assignment
        // One can assign these values to env and params as below -
        env.PATH = path    // if you want to assign it to an env var
        params.PATH = path // if you want to assign it to a params var
    }
}
The easiest way is this:
my_var=`echo 2`
echo $my_var
Output:
2
Note that this is not a simple single quote; it is a backquote ( ` ).
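Equivalently, the POSIX $( ) form does the same thing and nests more cleanly:
my_var=$(echo 2)
echo "$my_var"    # prints: 2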