Azure DSC: upload a file from storage to a VM?

I'm using the code below in a DSC configuration, but it never compiles. I always get the error: "Stop: The term 'xRemoteFile' is not recognized as the name of a cmdlet, function, script file, or operable program."
I need to copy a file from an Azure storage account to a VM via Azure DSC.
configuration A_FILE
{
    Node localhost
    {
        File SetupDir {
            Type            = 'Directory'
            DestinationPath = 'C:\files\'
            Ensure          = "Present"
        }
        xRemoteFile Afile {
            Uri             = "https://storageaccountname.file.core.windows.net/files/file1.txt"
            DestinationPath = "C:\files\"
            DependsOn       = "[File]SetupDir"
            MatchSource     = $false
        }
    }
}

OK, I worked it out. The configuration was missing the Import-DscResource line for the module that provides xRemoteFile:
configuration getfilefromazurestorage
{
    Import-DscResource -ModuleName xPSDesiredStateConfiguration
    Node localhost
    {
        File SetupDir {
            Type            = 'Directory'
            DestinationPath = 'C:\localfiles'
            Ensure          = "Present"
        }
        xRemoteFile remotefile {
            # For the Uri, generate a SAS token and paste the full SAS URL below
            Uri             = "https://storageaccountname.blob.core.windows.net/files/sastokendetails"
            DestinationPath = "C:\localfiles\AzureFile.txt"
            DependsOn       = "[File]SetupDir"
            MatchSource     = $false
        }
    }
}
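For reference, one way to generate that SAS URL is with the Az.Storage cmdlets. A minimal sketch, where the account key, container, and blob names are placeholders:
$storageKey = '<storage-account-key>'   # placeholder
$ctx = New-AzStorageContext -StorageAccountName 'storageaccountname' -StorageAccountKey $storageKey
$sasUrl = New-AzStorageBlobSASToken -Container 'files' -Blob 'AzureFile.txt' `
    -Permission r -ExpiryTime (Get-Date).AddHours(4) -Context $ctx -FullUri
# $sasUrl is the full blob URL with the SAS token appended; use it as the Uri above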

Related

Custom Script extension not Executing on VMSS

I am creating a VMSS using Terraform to use for an Azure DevOps agent pool. I'm able to create the VMSS successfully, but when I try to run the script to enroll it in the agent pool, I'm hitting a wall. Nothing seems to work. Here is my TF code:
data "local_file" "template" {
filename = "./agent_install_script.ps1"
}
data "template_file" "script" {
template = data.local_file.template.content
vars = {
agent_name = var.agent_name
pool_name = var.agent_pool_name
token = var.pat_token
user_name = var.vmss_admin_username
logon_password = random_password.vm_password.result
}
}
module "vmss_windows2022g2" {
source = "../modules/vmss_windows"
environment = var.environment
resource_group_name = var.resource_group
vmss_sku = "Standard_DS2_v2"
vmss_nic_subnet_id = module.vnet_mgt.subnet_windows_vmss_id
vmss_nsg_id = module.nsg.vmss_nsg_id
vmss_computer_name = "win2022g2"
vmss_admin_username = var.vmss_admin_username
vmss_admin_password = random_password.vm_password.result
windows_image_id = data.azurerm_image.windows_server2022_gen2.id
vmss_storage_uri = data.azurerm_storage_account.vm_storage.primary_blob_endpoint
overprovision = false
#this will be stored at %SYSTEMDRIVE%\AzureData\CustomData.bin
customData = data.template_file.script.rendered
tags = local.env_tags_map
}
resource "azurerm_virtual_machine_scale_set_extension" "ext" {
name = "InstallDevOpsAgent"
virtual_machine_scale_set_id = module.vmss_windows2022g2.id
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = jsonencode({
"commandToExecute" = "dir C:\\ > C:\\temp\\test.txt"
#"cd C:\\AzureData; mv .\\CustomData.bin .\\install_agent.ps1; powershell -ExecutionPolicy Unrestricted -File .\\install_agent.ps1; del .\\install_agent.ps1;"
})
#protected_settings = var.protected_settings
failure_suppression_enabled = false
auto_upgrade_minor_version = false
automatic_upgrade_enabled = false
provision_after_extensions = []
timeouts {
create = "1h"
}
}
As you can see, I'm passing the PowerShell script via custom_data, and that part works fine with all the variables substituted properly. I have tried executing the simple command dir C:\\ > C:\\temp\\test.txt to see if anything works at all, but I am not getting any output.
TF version 1.12, azurerm provider version 3.32.0
Azure DevOps should install an extension on the scale set (and, in turn, the VMs) which will automatically enroll the agent without the need for a script.
More details here:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops#lifecycle-of-a-scale-set-agent
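As a side note (not part of the answer above): Microsoft.Azure.Extensions/CustomScript is the Linux custom script extension, which would explain the extension never executing on a Windows scale set. If the script route is still needed, a sketch of the Windows variant, reusing the names from the question:
resource "azurerm_virtual_machine_scale_set_extension" "ext" {
  name                         = "InstallDevOpsAgent"
  virtual_machine_scale_set_id = module.vmss_windows2022g2.id
  publisher                    = "Microsoft.Compute"     # Windows publisher
  type                         = "CustomScriptExtension" # Windows type, not "CustomScript"
  type_handler_version         = "1.10"
  settings = jsonencode({
    "commandToExecute" = "powershell -ExecutionPolicy Unrestricted -Command \"dir C:\\ > C:\\temp\\test.txt\""
  })
}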

Using Terraform to load a single file from git repo

We want to load a file from a git repo and place it into a parameter store. The file contains configuration data that is custom to each of several organizational accounts, which are being constructed with Terraform and are otherwise identical. The data will be stored in the AWS SSM Parameter Store.
For example the Terraform code to store a string as a parameter is:
resource "aws_ssm_parameter" "parameter_config" {
name = "config_object"
type = "String"
value = "long json string with configuration data"
}
I know there is a file() function (reference) in Terraform, and I know that TF can load files from remote git repos, but I'm not sure if I can bring all this together.
There are a few ways that you can do this.
The first would be to use the github provider with the github_repository_file data source:
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "5.12.0"
    }
  }
}

provider "github" {
  token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  owner = "org-name"
}

data "github_repository_file" "config" {
  repository = "my-repo"
  branch     = "master"
  file       = "config-file"
}

resource "aws_ssm_parameter" "parameter_config" {
  name  = "config_object"
  type  = "String"
  value = data.github_repository_file.config.content
}
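Rather than hardcoding the token, the github provider can also read it from the GITHUB_TOKEN environment variable, so the provider block can stay credential-free:
provider "github" {
  # token omitted: picked up from the GITHUB_TOKEN environment variable
  owner = "org-name"
}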
You could also do this with the http provider:
data "http" "config" {
url = "https://raw.githubusercontent.com/my-org/my-repo/master/config-file"
}
resource "aws_ssm_parameter" "parameter_config" {
name = "config_object"
type = "String"
value = data.http.config.response_body
}
Keep in mind that you may get multiline string delimiters when using the http data source method (e.g. <<EOT ... EOT).
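If a trailing newline is what triggers that, a small sketch that strips it with trimspace() before storing the value:
resource "aws_ssm_parameter" "parameter_config" {
  name  = "config_object"
  type  = "String"
  value = trimspace(data.http.config.response_body)
}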

How to solve the issue of "Error archiving file: Access is denied" while deploying a Lambda function using Terraform?

I want to deploy an AWS Lambda function using Terraform. For that I have created my project directory, named lambda terraform, which has the following files and folders inside it:
The file hello.js contains the following code:
exports.handler = async (event) => {
    console.log("Hello World");
};
Then, inside the IAM folder, I created two files named lambda_assume_role_policy.json and lambda_policy.json by following the Terraform documentation.
Then I created another file named iam-lambda.tf with the following code:
resource "aws_iam_role_policy" "lambda_policy" {
name = "lambda_policy"
role = aws_iam_role.lambda_role.id
policy = file("IAM/lambda_policy.json")
}
resource "aws_iam_role" "lambda_role" {
name = "lambda_role"
assume_role_policy = file("IAM/lambda_assume_role_policy.json")
}
After that I created lambda.tf, whose code is as follows:
data "archive_file" "hello" {
type = "zip"
source_file = "hello.js"
output_path = "hello.zip"
}
resource "aws_lambda_function" "test_lambda" {
filename = "hello.zip"
function_name = "hello"
role = aws_iam_role.lambda_role.arn
handler = "hello.handler"
runtime = "nodejs12.x"
}
Finally, I created the file provider.tf:
provider "aws" {
region = "ap-south-1"
}
I opened the Git Bash terminal, navigated to the project directory, and ran terraform init, which downloaded all the plugins. Then I ran terraform apply -auto-approve and got the following error:
Please help me out of this situation.
I was running into the same issue in our CI/CD. I think the reason this happens is that the created zip does not have the correct permissions: Lambda needs 755 on the zip, but the zip created by the data block does not have this.
Creating the zip file in the /tmp folder worked for me, as the /tmp folder has the higher 777 permissions, so the zip created by data "archive_file" ends up readable there.
Hope this helps anyone looking.
locals {
  zip_file = "/tmp/some_zip.zip"
}

data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "${path.module}/lambda.py"
  output_path = local.zip_file
}

resource "aws_lambda_function" "function" {
  function_name    = var.function_name
  filename         = data.archive_file.lambda_zip.output_path
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
}
In the aws_lambda_function resource, you have to specify the output of the archive_file as the filename (and the attribute for the hash is source_code_hash):
resource "aws_lambda_function" "test_lambda" {
  filename         = data.archive_file.hello.output_path
  source_code_hash = data.archive_file.hello.output_base64sha256
  # ...keep the function_name, role, handler, and runtime arguments as before
}

Terraform - multi-line JSON to single line?

I've created a JSON string via template interpolation.
I need to pass that to local-exec, which in turn uses a PowerShell template to make a CLI call.
Originally I tried just referencing the JSON template in the PowerShell command itself:
--cli-input-json file://lfsetup.tpl
...however, the template does not get interpolated.
Next, I tried setting the JSON to a local. However, that is multi-line, and the CLI does not like it. Maybe I could convert it to a single line?
Any suggestions or guidance welcome!
Thanks
JSON (.tpl or variable)
{
  "CatalogId": "${account_id}",
  "DataLakeSettings": {
    "DataLakeAdmins": [
      {
        "DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role1"
      },
      {
        "DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role2"
      }
    ],
    "CreateDatabaseDefaultPermissions": [],
    "CreateTableDefaultPermissions": []
  }
}
.tf
locals {
  assume_role_arn  = "arn:aws:iam::${local.account_id}:role/role_to_assume"
  lf_json_settings = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })
  cli_region       = "region"
}

resource "null_resource" "settings" {
  provisioner "local-exec" {
    command     = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.lf_json_settings, region = local.cli_region })
    interpreter = ["pwsh", "-Command"]
  }
}
.ps
$ErrorActionPreference = "Stop"
$json = aws sts assume-role --role-arn ${role_arn} --role-session-name sessionname
$accessTokens = ConvertFrom-Json (-join $json)
$env:AWS_ACCESS_KEY_ID = $accessTokens.Credentials.AccessKeyId
$env:AWS_SECRET_ACCESS_KEY = $accessTokens.Credentials.SecretAccessKey
$env:AWS_SESSION_TOKEN = $accessTokens.Credentials.SessionToken
aws lakeformation put-data-lake-settings --cli-input-json file://lfsetup.tpl --region ${region}
$env:AWS_ACCESS_KEY_ID = ""
$env:AWS_SECRET_ACCESS_KEY = ""
$env:AWS_SESSION_TOKEN = ""
Output:
For these, I put the template output into a local and passed the local to PowerShell, then tried variations with and without jsonencode and replacing '\n'. I got strange results in some cases:
Use a file provisioner to create the .json file from the rendered .tpl file:
locals {
  ...
  settings_json_file = "/tmp/lfsetup.json"
}

resource "null_resource" "settings" {
  provisioner "file" {
    content     = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })
    destination = local.settings_json_file
  }
  provisioner "local-exec" {
    command     = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.settings_json_file, region = local.cli_region })
    interpreter = ["pwsh", "-Command"]
  }
}
Update your .ps file: replace file://lfsetup.tpl with file://${json_settings}:
aws lakeformation put-data-lake-settings --cli-input-json file://${json_settings} --region ${region}
You may also use the jsonencode function.
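For example, a sketch of that approach, assuming the template renders valid JSON: round-tripping the rendered template through jsondecode and jsonencode collapses it to a single-line string, since jsonencode always emits minified JSON.
locals {
  # decode then re-encode: the result is single-line, minified JSON
  lf_json_settings = jsonencode(jsondecode(templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })))
}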

Azure: How to deploy a custom file referenced by DSC

I'm building an Azure RM template that will install DSC on a target VM. The DSC configuration must use a .bacpac file. How can I upload that file to the target VM? How can I make the target VM download it from GitHub and place it in a specific folder?
The DSC configuration looks like this:
https://github.com/PowerShell/xDatabase/blob/dev/Examples/Sample_xDatabase.ps1
Something like this:
configuration DeployBacPacConfig
{
    param (
        [string]$nodeName,
        [string]$SqlServerVersion,
        [string]$DatabaseName,
        [PSCredential]$credential
    )

    Import-DscResource -ModuleName PSDesiredStateConfiguration,xPSDesiredStateConfiguration,xDatabase

    Node $nodeName
    {
        LocalConfigurationManager
        {
            RebootNodeIfNeeded = $true
        }
        xRemoteFile BacPacPackage
        {
            Uri             = "https://github.com/url_to_your_bacpac"
            DestinationPath = "c:\where_to_put_it"
            MatchSource     = $false
        }
        xDatabase DeployBacPac
        {
            Ensure           = "Present"
            SqlServer        = $nodeName
            SqlServerVersion = $SqlServerVersion
            DatabaseName     = $DatabaseName
            Credentials      = $credential   # credentials to access SQL
            BacPacPath       = "c:\path_from_previous_command"
            DependsOn        = "[xRemoteFile]BacPacPackage"
        }
    }
}
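To produce and apply the MOF locally, compilation would look roughly like this (a sketch; the parameter values are placeholders):
# Compile the configuration to MOF files, then push them to the local node.
# Note: passing a PSCredential typically requires ConfigurationData with
# PsDscAllowPlainTextPassword = $true, or an encryption certificate.
DeployBacPacConfig -nodeName 'localhost' -SqlServerVersion '2016' -DatabaseName 'MyDb' -credential $credential -OutputPath .\mof
Set-DscLocalConfigurationManager -Path .\mof -Verbose
Start-DscConfiguration -Path .\mof -Wait -Verbose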
