Update application using Azure Automation DSC

How do I update an application using Azure Automation DSC?
When I change the configuration, then upload and compile it, the status of the VM node goes from Compliant to Pending.
Then I have to wait 30 minutes for the node to pick up the new configuration, which then updates the application. I changed the package version too. Is there a way to force-trigger the update?
Following is my code:
Configuration Deploy
{
    Import-DscResource -ModuleName cWebPackageDeploy
    Import-DscResource -ModuleName PowerShellModule

    node "localhost"
    {
        cWebPackageDeploy depwebpackage
        {
            Name           = "website.zip"
            StorageAccount = "testdeploy"
            StorageKey     = "xxxxxxxxxxxxxxxxxxxxxxx"
            Ensure         = "Present"
            PackageVersion = "1.0"
            DeployPath     = "C:\Temp\Testdeploy"
            DependsOn      = "[PSModuleResource]Azure.Storage"
        }

        PSModuleResource Azure.Storage
        {
            Ensure      = 'Present'
            Module_Name = 'Azure.Storage'
        }
    }
}
Deploy

There is no way to do that natively in Azure Automation.
That said, you can always work around it by telling the VM to pull its configuration immediately with Update-DscConfiguration.
You can create a script that uploads the configuration, compiles it, and forces the VM to pull from the pull server.
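A minimal sketch of such a script, assuming the Az.Automation module and placeholder resource-group, automation-account, and file names:

```powershell
# Assumptions: Az.Automation is installed and you are signed in with
# Connect-AzAccount; all names below are placeholders.
$rg      = "my-rg"
$account = "my-automation-account"

# Re-import and compile the updated configuration in Azure Automation
Import-AzAutomationDscConfiguration -ResourceGroupName $rg `
    -AutomationAccountName $account -SourcePath .\Deploy.ps1 -Published -Force
$job = Start-AzAutomationDscCompilationJob -ResourceGroupName $rg `
    -AutomationAccountName $account -ConfigurationName "Deploy"

# Wait for the compilation job to finish
while ((Get-AzAutomationDscCompilationJob -ResourceGroupName $rg `
        -AutomationAccountName $account -Id $job.Id).Status -notin "Completed","Failed") {
    Start-Sleep -Seconds 10
}

# Finally, on the VM itself (e.g. via Invoke-AzVMRunCommand), force an
# immediate pull instead of waiting for the ~30-minute refresh interval:
Update-DscConfiguration -Wait -Verbose
```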

Related

ArgoCD bootstrapping with terraform in Azure Pipeline

I am trying to deploy ArgoCD and applications located in subfolders through Terraform in an AKS cluster.
This is my Folder structure tree:
I'm using the app-of-apps approach, so first I will deploy ArgoCD (which will manage itself as well), and later ArgoCD will let me sync the cluster-addons and applications manually once installed.
apps
  cluster-addons
    AKV2K8S
    Cert-Manager
    Ingress-nginx
  application
    application-A
argocd
  override-values.yaml
  Chart
When I run "helm install ..." manually in the AKS cluster, everything installs fine.
ArgoCD is installed, and when I later access ArgoCD I see that the rest of the applications are missing, so I can sync them manually.
However, if I install it through Terraform, only ArgoCD is installed, and it looks like it does not "detect" the override_values.yaml file:
I mean, ArgoCD and the ArgoCD ApplicationSet controller are installed in the cluster, but ArgoCD does not "detect" the values.yaml files that are customized for my AKS cluster. If I run "helm install" manually on the cluster everything works, but not through Terraform.
resource "helm_release" "argocd_applicationset" {
  name       = "argocd-applicationset"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argocd-applicationset"
  namespace  = "argocd"
  version    = "1.11.0"
}
resource "helm_release" "argocd" {
  name       = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  namespace  = "argocd"
  version    = "3.33.6"
  values = [
    "${file("values.yaml")}"
  ]
}
The values.yaml file is located in the folder with the TF code that installs argocd and the argocd applicationset.
I tried changing the file name from "values.yaml" to "override_values.yaml", but I hit the same issue.
I have many things changed in the override_values.yaml file, so I cannot use "set" inside the TF code.
Also, I tried adding:
values = [
"${yamlencode(file("values.yaml"))}"
]
but I get this error in "apply" step in the pipeline:
error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {} "argo-cd:\r\n ## ArgoCD configuration\r\n ## Ref: https://github.com/argoproj/argo-cd\r\n
Probably because it is not a JSON file? Does it make sense to convert this file into a JSON one?
Any idea if I can pass this override values yaml file through terraform?
If not, please may you post a clear/full example with mock variables on how to do that using Azure pipeline?
Thanks in advance!
The issue was with the indentation of values in the TF code.
It was resolved once I fixed that:
resource "helm_release" "argocd_applicationset" {
  name       = "argocd-applicationset"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argocd-applicationset"
  namespace  = "argocd"
  version    = "1.11.0"
}
resource "helm_release" "argocd" {
  name       = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  namespace  = "argocd"
  version    = "3.33.6"
  values     = [file("values.yaml")]
}
It also works fine with the quoted form, "${file("values.yaml")}".
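This also explains the yamlencode error above: the helm_release `values` argument expects a list of raw YAML strings, and file() already returns one, so wrapping it in yamlencode() re-encodes the whole file as a single YAML string scalar that the provider cannot unmarshal into a map. A minimal sketch of the difference:

```hcl
# Correct: file() returns the raw YAML text, which is exactly what
# the helm_release `values` argument expects.
values = [file("values.yaml")]

# Incorrect: yamlencode() turns the file content into one quoted YAML
# string scalar, producing "cannot unmarshal string into ... map".
# values = [yamlencode(file("values.yaml"))]
```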

Terraform Vcloud provider is crashing when using terraform plan

I am trying to automate the deployment of VMs in vCloud using Terraform.
The server that I am using doesn't have an internet connection, so I had to install Terraform and the VCD provider offline.
terraform init worked, but terraform plan crashes.
Terraform version: 1.0.11
VCD provider version: 3.2.0 (I am using this version because we have vCloud 9.7).
This is a test script to see if Terraform works:
terraform {
  required_providers {
    vcd = {
      source  = "vmware/vcd"
      version = "3.2.0"
    }
  }
}
provider "vcd" {
  user                 = "test"
  password             = "test"
  url                  = "https://test/api"
  auth_type            = "integrated"
  vdc                  = "Org1VDC"
  org                  = "System"
  max_retry_timeout    = "60"
  allow_unverified_ssl = "true"
}
resource "vcd_org_user" "my-org-admin" {
  org         = "my-org"
  name        = "my-org-admin"
  description = "a new org admin"
  role        = "Organization Administrator"
  password    = "change-me"
}
When I run terraform plan I get the following error:
Error: Plugin did not respond
...
The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ConfigureProvider call. The plugin logs may contain more details
Stack trace from the terraform-provider-vcd_v3.2.0 plugin:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0xaf3b75]
...
Error: The terraform-provider-vcd_v3.2.0 plugin crashed!
In the logs I can see a lot of DEBUG messages where the provider is trying to connect to GitHub:
provider.terraform-provider-vcd_v3.2.0: github.com/vmware/go-vcloud-director/v2/govcd.(*VCDClient).Authenticate(...)
And for ERROR messages I only saw 2:
plugin.(*GRPCProvider).ConfigureProvider: error="rpc error: code = Unavailable desc = transport is closing"
Failed to read plugin lock file .terraform/plugins/linux_amd64/lock.json: open .terraform/plugins/linux_amd64/lock.json: no such file or directory
This is the first time I am configuring Terraform offline and using the VCD provider.
Did I miss something?
I have found the issue.
In the URL I was using the IP address of the vCloud API, and for some reason Terraform didn't like that, which caused the crash. After changing it to the FQDN, Terraform started working again.
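For reference, a sketch of the working provider block, with a hypothetical FQDN standing in for the real vCloud host:

```hcl
provider "vcd" {
  user      = "test"
  password  = "test"
  # Use the FQDN instead of the raw IP address; with the IP, the
  # provider crashed with a nil pointer dereference during configure.
  url       = "https://vcloud.example.com/api" # hypothetical FQDN
  auth_type = "integrated"
  vdc       = "Org1VDC"
  org       = "System"

  max_retry_timeout    = "60"
  allow_unverified_ssl = "true"
}
```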

How to configure and install nano server using DSC powershell on Windows server 2019

I have Windows Server 2019, where I want to set up a Nano Server installation and Docker using DSC PowerShell scripts.
This requirement is for an Azure VM using State Configuration from Azure Automation.
The script:
configuration Myconfig
{
    Import-DscResource -ModuleName DockerMsftProvider
    {
        Ensure      = 'present'
        Module_Name = 'DockerMsftProvider'
        Repository  = 'PSGallery'
    }
}
I know I am missing a few parameters here; please help me complete this script.
Similarly, I need to set up Nano Server if possible.
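No answer was posted for this one, but a sketch of a completed configuration, assuming the intent is to install the DockerMsftProvider module with the PSModuleResource resource from the PowerShellModule DSC module (as used in the first question above):

```powershell
configuration Myconfig
{
    # Assumption: PSModuleResource comes from the PowerShellModule DSC
    # module and installs a PowerShell module from the given repository.
    Import-DscResource -ModuleName PowerShellModule

    node "localhost"
    {
        PSModuleResource DockerMsftProvider
        {
            Ensure      = 'Present'
            Module_Name = 'DockerMsftProvider'
            Repository  = 'PSGallery'
        }
    }
}
```

Installing the Docker package itself (and any Nano Server bits) would still need additional resources such as Package or Script; this only covers installing the module.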

Providing Terraform with credentials in terraform files instead of env variable

I have set up a Terraform project with a remote back-end on GCP. Now when I want to deploy the infrastructure, I run into issues with credentials. I have a credentials file in
/home/mike/.config/gcloud/credentials.json
In my terraform project I have the following data referring to the remote state:
data "terraform_remote_state" "project_id" {
  backend   = "gcs"
  workspace = "${terraform.workspace}"
  config {
    bucket = "${var.bucket_name}"
    prefix = "${var.prefix_project}"
  }
}
and I specify the cloud provider with the details of my credentials file.
provider "google" {
  version     = "~> 1.16"
  project     = "${data.terraform_remote_state.project_id.project_id}"
  region      = "${var.region}"
  credentials = "${file(var.credentials)}"
}
However, this runs into
data.terraform_remote_state.project_id: data.terraform_remote_state.project_id:
error initializing backend:
storage.NewClient() failed: dialing: google: could not find default
credentials.
If I add
export GOOGLE_APPLICATION_CREDENTIALS=/home/mike/.config/gcloud/credentials.json
I do get it to run as desired. My issue is that I would like to specify the credentials in the Terraform files, because I run the Terraform commands in an automated way from a Python script where I cannot set environment variables. How can I let Terraform know where the credentials are without setting the env variable?
I was facing the same error when trying to run terraform (version 1.1.5) commands in spite of having successfully authenticated via gcloud auth login.
Error message in my case:
Error: storage.NewClient() failed: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
It turned out that I had to also authenticate via gcloud auth application-default login and was able to run terraform commands thereafter.
I figured this out in the end.
The data block also needs the credentials.
E.g.
data "terraform_remote_state" "project_id" {
  backend   = "gcs"
  workspace = "${terraform.workspace}"
  config = {
    bucket      = "${var.bucket_name}"
    prefix      = "${var.prefix_project}"
    credentials = "${var.credentials}" # <- added
  }
}

terraform-provider-vsphere winrm config reset upon clone customization

Environment
vSphere 6
VM OS = Win Server 2016
terraform version = 0.11.7
terraform-provider-vsphere version = 1.4.1
Issue / Question
I've noticed that using the customization block resets the winrm config I had preconfigured on the template.
I've attempted to work around this by configuring winrm on the fly with run_once_command_list, but that seems to operate as fire-and-forget: the provisioner is triggered before the command list finishes executing.
Any ideas?
Specific details can be found here ->
terraform-provider-vsphere github issue
For Windows 10 you can install the built-in OpenSSH server to transfer a file or use SSH.
provisioner "file" {
  source      = "BuildAgent1/buildAgent.properties"
  destination = "f:\\BuildAgent\\conf\\buildAgent.properties"
  connection {
    type     = "ssh"
    user     = "user"
    password = "password"
    timeout  = "30m"
  }
}
