I have a node server that uses environment variables.
I manually add them to my .bashrc file, and when the node server runs it reads them as process.env.something.
Now we have to deploy this service from a Jenkins job.
What is the proper way of setting this up?
If you are talking about setting environment variables for the run of the Pipeline, you can use the Declarative Pipeline environment directive for that:
environment {
    PROJECT_NAME = 'Jenkins-Job'
    DISABLE_AUTH = 'true'
}
Environment variables can be set globally, like the example below, or per stage.
pipeline {
    agent {
        label 'linux'
    }
    environment {
        DISABLE_AUTH = 'true'
    }
    stages {
        stage('Build') {
            environment {
                DB_ENGINE = 'sqlite'
            }
            steps {
                echo "Database engine is ${DB_ENGINE}"
                echo "DISABLE_AUTH is ${DISABLE_AUTH}"
                sh 'printenv'
            }
        }
    }
}
For more info, please read the Jenkins Pipeline documentation on environment variables: https://www.jenkins.io/doc/pipeline/tour/environment/
I'm looking to automate a specific part of a fairly complicated Terraform script that I have.
To make it a bit clearer: I have created a TF template that deploys the entire infrastructure into Azure, with App Services, a storage account, security groups, Windows-based VMs, and Linux-based VMs split between MongoDB and RabbitMQ. Inside my script I was able to automate the deployment to use the name of the application and create a Synthetic Test, and, based on the environment, to use a specific Datadog key via a local variable:
keyTouse = lower(var.environment) != "production" ? var.DatadogNPD : var.DatadogPRD
Right now the point that bothers me is the following.
Since we do not need Synthetic Tests in non-production environments, I would like to use some sort of logic and not deploy the Synthetic Tests if var.environment is not "production".
To make this part more interesting, I also have the ability to deploy multiple Synthetic Tests using "count" and "length", as shown below.
Inside main.tf:
module "Datadog" {
source = "./Datadog"
webapp_name = [ azurerm_linux_web_app.service1.name, azurerm_linux_web_app.service2.name ]
}
and for the Datadog Synthetic Test:
resource "datadog_synthetics_test" "app_service_monitoring" {
count = length(var.webapp_name)
type = "api"
subtype = "http"
request_definition {
method = "GET"
url = "https://${element(var.webapp_name, count.index)}.azurewebsites.net/health"
}
Could you help me and suggest how I can enable or disable module deployment using a variable based on the environment?
Based on my understanding of the question, the change would have to be two-fold:
Add an environment variable to the module code
Use that variable to decide whether the Synthetics test resource should be created or not
The above translates to creating another variable in the module and, later on, giving that variable a value when calling the module. The last part is deciding, based on that value, whether the resource gets created.
# module level variable
variable "environment" {
  type        = string
  description = "Environment in which to deploy resources."
}
Then, in the resource, you would add the following:
resource "datadog_synthetics_test" "app_service_monitoring" {
count = var.environment == "production" ? length(var.webapp_name) : 0
type = "api"
subtype = "http"
request_definition {
method = "GET"
url = "https://${element(var.webapp_name, count.index)}.azurewebsites.net/health"
}
}
And finally, in the root module:
module "Datadog" {
source = "./Datadog"
webapp_name = [ azurerm_linux_web_app.service1.name, azurerm_linux_web_app.service2.name ]
environment = var.environment
}
The environment = var.environment line will work if you have also defined the variable environment in the root module; a sketch of that declaration is shown after the next snippet. If not, you can always set it to the value you want:
module "Datadog" {
source = "./Datadog"
webapp_name = [ azurerm_linux_web_app.service1.name, azurerm_linux_web_app.service2.name ]
environment = "dev" # <--- or "production" or any other environment you have
}
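If you do pass environment = var.environment through, the root module needs its own environment variable declaration. A minimal sketch of what that could look like (the default value and the validation block are illustrative assumptions, not part of the original setup):
# root module variable (sketch; default and validation are assumptions)
variable "environment" {
  type        = string
  description = "Environment in which to deploy resources."
  default     = "dev"

  validation {
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "environment must be one of: dev, staging, production."
  }
}
With this in place, count = var.environment == "production" ? length(var.webapp_name) : 0 evaluates to 0 in every non-production run, so the Synthetics tests are simply not created there.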
Terraform v1.2.8
I have a generic null_resource that executes the passed-in shell script on my remote AWS EC2 instance, which I have also created in Terraform.
resource "null_resource" "generic_script" {
connection {
type = "ssh"
user = "ubuntu"
private_key = file(var.ssh_key_file)
host = var.ec2_pub_ip
}
provisioner "file" {
source = "../modules/k8s_installer/${var.shell_script}"
destination = "/tmp/${var.shell_script}"
}
provisioner "remote-exec" {
inline = [
"sudo chmod u+x /tmp/${var.shell_script}",
"sudo /tmp/${var.shell_script}"
]
}
}
Now I want to be able to modify it so it runs on
all nodes
this node but not that node
that node but not this node
So I created these variables in the variables.tf file:
variable "run_on_THIS_node" {
type = boolean
description = "Run script on THIS node"
default = false
}
variable "run_on_THAT_node" {
type = boolean
description = "Run script on THAT node"
default = false
}
How can I put a condition to achieve what I want to do?
resource "null_resource" "generic_script" {
count = ???
...
}
You could use the ternary operator for this. For example, based on the defined variables, the condition would look like:
resource "null_resource" "generic_script" {
count = (var.run_on_THIS_node || var.run_on_THAT_node) ? 1 : length(var.all_nodes) # or var.number_of_nodes
...
}
The piece of the puzzle that is missing is the variable (or a plain number) that tells the resource how many nodes there are, so that the script can run on all of them. It does not have to use the length function; you could define it as a number only. However, this is only a part of the code you would have to add or edit, as there would also have to be a way to control the host based on the index. That means you would probably have to modify var.ec2_pub_ip so that it is a list, as sketched below.
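A rough sketch of that direction, assuming var.ec2_pub_ip becomes a list(string) and that THIS node is index 0 and THAT node is index 1 (both of these are assumptions, not taken from the original code):
variable "ec2_pub_ip" {
  type        = list(string)
  description = "Public IPs of all nodes the script can run on."
}

resource "null_resource" "generic_script" {
  # one instance per targeted node: a single node if either flag is set, otherwise all nodes
  count = (var.run_on_THIS_node || var.run_on_THAT_node) ? 1 : length(var.ec2_pub_ip)

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file(var.ssh_key_file)
    # pick THIS node, THAT node, or the node matching this instance's index
    host        = var.run_on_THIS_node ? var.ec2_pub_ip[0] : (var.run_on_THAT_node ? var.ec2_pub_ip[1] : var.ec2_pub_ip[count.index])
  }

  provisioner "file" {
    source      = "../modules/k8s_installer/${var.shell_script}"
    destination = "/tmp/${var.shell_script}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo chmod u+x /tmp/${var.shell_script}",
      "sudo /tmp/${var.shell_script}"
    ]
  }
}
Note that in this sketch, if both flags are set, only THIS node is targeted; separate per-node indices or a map of node names to IPs would make the selection more explicit.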
I have main.tf:
terraform {
  backend "remote" {
    organization = "myorg"

    workspaces {
      name = "some-workspace-like-so"
    }
  }
}
Running terraform init succeeds. However, if I then run terraform workspace list to see the other workspaces in my organization, I get the error "workspaces not supported". Is this an org setting, a configuration issue, me misunderstanding how the command is supposed to work, or something else?
Try using the workspaces block's prefix argument rather than its name argument to handle multiple TFC workspaces:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "org"

    workspaces {
      prefix = "my-infra"
    }
  }
}
Reference: https://github.com/hashicorp/terraform/issues/22802 by saitotqr
I tried to use Terraform without any cloud instance, only to install a cloudflared tunnel locally, using this construction:
resource "null_resource" "tunell_install" {
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "/home/uzer/script/tunnel.sh"
}
}
instead of something like:
provider "google" {
project = var.gcp_project_id
}
but after running
$ terraform apply -auto-approve
it successfully created /etc/cloudflared/cert.json with this content:
{
  "AccountTag" : "${account}",
  "TunnelID" : "${tunnel_id}",
  "TunnelName" : "${tunnel_name}",
  "TunnelSecret" : "${secret}"
}
But as I understood it, there must be actual values here instead of the variable placeholders? It seems that metadata_startup_script from instance.tf is only applied to the Google instance. How is it possible to change this so that Terraform installs the CF tunnel locally and runs it? Maybe I also need to use templatefile, but in another .tf file? The current metadata_startup_script code block:
// This is where we configure the server (aka instance). Variables like web_zone take a terraform variable and provide it to the server so that it can use them as a local variable
metadata_startup_script = templatefile("./server.tpl",
  {
    web_zone    = var.cloudflare_zone,
    account     = var.cloudflare_account_id,
    tunnel_id   = cloudflare_argo_tunnel.auto_tunnel.id,
    tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
    secret      = random_id.tunnel_secret.b64_std
  })
Content of server.tpl file:
# Script to install Cloudflare Tunnel
# cloudflared configuration
cd
# The package for this OS is retrieved
wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb
sudo dpkg -i cloudflared-stable-linux-amd64.deb
# A local user directory is first created before we can install the tunnel as a system service
mkdir ~/.cloudflared
touch ~/.cloudflared/cert.json
touch ~/.cloudflared/config.yml
# Another heredoc is used to dynamically populate the JSON credentials file
cat > ~/.cloudflared/cert.json << "EOF"
{
  "AccountTag" : "${account}",
  "TunnelID" : "${tunnel_id}",
  "TunnelName" : "${tunnel_name}",
  "TunnelSecret" : "${secret}"
}
EOF
# Same concept with the Ingress Rules the tunnel will use
cat > ~/.cloudflared/config.yml << "EOF"
tunnel: ${tunnel_id}
credentials-file: /etc/cloudflared/cert.json
logfile: /var/log/cloudflared.log
loglevel: info

ingress:
  - hostname: ssh.${web_zone}
    service: ssh://localhost:22
  - hostname: "*"
    service: hello-world
EOF
# Now we install the tunnel as a systemd service
sudo cloudflared service install
# The credentials file does not get copied over so we'll do that manually
sudo cp -via ~/.cloudflared/cert.json /etc/cloudflared/
# Now we can start the tunnel
sudo service cloudflared start
In argo.tf this code exists:
data "template_file" "init" {
template = file("server.tpl")
vars = {
web_zone = var.cloudflare_zone,
account = var.cloudflare_account_id,
tunnel_id = cloudflare_argo_tunnel.auto_tunnel.id,
tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
secret = random_id.tunnel_secret.b64_std
}
}
If you are asking about how to create the file locally and populate the values, here is an example:
resource "local_file" "cloudflare_tunnel_script" {
content = templatefile("${path.module}/server.tpl",
{
web_zone = "webzone"
account = "account"
tunnel_id = "id"
tunnel_name = "name"
secret = "secret"
}
)
filename = "${path.module}/server.sh"
}
For this to work, you would have to assign the real values for all the template variables listed above. From what I can see, there are already examples of how to use variables for those values. In other words, instead of hardcoding the values of the template variables, you could use the standard variables:
resource "local_file" "cloudflare_tunnel_script" {
content = templatefile("${path.module}/server.tpl",
{
web_zone = var.cloudflare_zone
account = var.cloudflare_account_id
tunnel_id = cloudflare_argo_tunnel.auto_tunnel.id
tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name
secret = random_id.tunnel_secret.b64_std
}
)
filename = "${path.module}/server.sh"
}
This code will populate all the values and create a server.sh script in the same directory you are running the Terraform code from.
You could complement this code with the null_resource you wanted:
resource "null_resource" "tunnel_install" {
depends_on = [
local_file.cloudflare_tunnel_script,
]
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "${path.module}/server.sh"
}
}
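One small caveat, as an assumption rather than something observed in the original setup: depending on your local provider version and umask, the rendered server.sh might not end up executable. If you hit that, one option is to set file_permission on the local_file resource (the hashicorp/local provider supports this argument):
resource "local_file" "cloudflare_tunnel_script" {
  # ... content and filename exactly as above ...

  # make sure the rendered script is executable
  file_permission = "0755"
}
Alternatively, changing the local-exec command to "bash ${path.module}/server.sh" avoids relying on the execute bit at all.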
I'm trying to deploy Java microservices into Azure Kubernetes using Helm charts. My application has a few secrets, like the DB username and password. I stored my secrets in Azure Key Vault, and I'm fetching them using the Azure Key Vault plugin and a service principal. The test connection was successful in the plugin and I can print my secrets as shown below, but while passing the secrets into the helm command I get the following exception:
Error: failed parsing --set data: key "****" has no value
If I hardcode the secrets, it works.
My Jenkinsfile looks like below:
*** Pipeline Code ***
pipeline {
    agent any
    environment {
        DB-USERNAME = credentials('db-username')
        DB-PASSWORD = credentials('db-password')
    }
    stages {
        stage('Foo') {
            steps {
                echo DB-USERNAME
                echo DB-USERNAME.substring(0, DB-USERNAME.size() - 1) // shows the right secret was loaded
                sh 'helm upgrade --install $SERVICE $CHART_NAME --set $DB-USERNAME --set $DB-PASSWORD'
            }
        }
    }
}
Can anyone please advise me on this?
Reference:
https://linuxhelp4u.blogspot.com/2020/04/integrate-jenkins-with-azure-key-vault.html
https://plugins.jenkins.io/azure-keyvault/
Use double quotes for the sh step here:
If you are using "double quotes", $var in sh "... $var ..." will be interpreted as a Jenkins (Groovy) variable.
If you are using 'single quotes', $var in sh '... $var ...' will be interpreted as a shell variable.
Example:
pipeline {
    agent any
    environment {
        DB-USERNAME = credentials('db-username')
        DB-PASSWORD = credentials('db-password')
    }
    stages {
        stage('Foo') {
            steps {
                echo DB-USERNAME
                echo DB-USERNAME.substring(0, DB-USERNAME.size() - 1) // shows the right secret was loaded
                sh "helm upgrade --install $SERVICE $CHART_NAME --set $DB-USERNAME --set $DB-PASSWORD"
            }
        }
    }
}