I'm writing a custom Checkov policy in YAML format. For demo purposes I created a policy that checks the name of a storage account and fails if it contains non-alphanumeric characters. My policy file is:
metadata:
  name: "Ensure that storage account has no special characters"
  category: "convention"
  id: "SCV_VARIABLE_01"
definition:
  resource_types:
    - "azurerm_storage_account"
  attribute: "name"
  operator: regex_match
  value: "^[a-z0-9]{3,24}$"
My variable.tf:
variable "storage_account_name" {
  type    = string
  default = "test-12324-$"
}
My main.tf looks like:
resource "azurerm_storage_account" "storage_account" {
  name                = var.storage_account_name
  resource_group_name = var.resource_group_name
  location            = var.location
}
I created a policy folder containing only my policy-file.yaml. The policy folder is in $PWD, and all the .tf files are also in $PWD.
When I execute the Checkov Docker command:
docker run -t -v $PWD:/tf bridgecrew/checkov -d /tf --external-checks-dir /tf/policy
the built-in Checkov policy "CKV_AZURE_43" catches that my variable value contains special characters and reports it as Failed, but my custom policy is reported as Passed.
If I hard-code the storage account name in main.tf, my custom policy fails the check as expected.
Could you tell me what to specify in my custom policy file so that it fails when I pass an invalid variable value?
thanks,
Santosh
Ignore my question. Checkov does evaluate variable values, just as CKV_AZURE_43 does. I had misconfigured the custom policy YAML earlier: "cond_type:" is mandatory in a custom YAML policy definition for it to give correct output.
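For reference, a corrected version of the policy above with the missing cond_type added (a sketch following the Checkov custom YAML policy format; the id and category are kept from the original):
metadata:
  name: "Ensure that storage account has no special characters"
  category: "convention"
  id: "SCV_VARIABLE_01"
definition:
  cond_type: "attribute"
  resource_types:
    - "azurerm_storage_account"
  attribute: "name"
  operator: regex_match
  value: "^[a-z0-9]{3,24}$"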
Related
I want to develop a single Terraform module to deploy my resources, with the resources being stored in separate YAML files. For example:
# resource_group_a.yml
name: "ResourceGroupA"
location: "westus"
# resource_group_b.yml
name: "ResourceGroupB"
location: "norwayeast"
And the following Terraform module:
# deploy/main.tf
variable "source_file" {
  type = string # Path to a YAML file
}

locals {
  rg = yamldecode(file(var.source_file))
}

resource "azurerm_resource_group" "rg" {
  name     = local.rg.name
  location = local.rg.location
}
I can deploy the resource groups with:
terraform apply -var="source_file=resource_group_a.yml"
terraform apply -var="source_file=resource_group_b.yml"
But then I run into 2 problems, due to the state that Terraform keeps about my infrastructure:
If I deploy Resource Group A, it deletes Resource Group B and vice-versa.
If I manually remove the .tfstate file prior to running apply, and the resource group already exists, I get an error:
A resource with the ID "/..." already exists - to be managed via Terraform
this resource needs to be imported into the State.
with azurerm_resource_group.rg,
on main.tf line 8 in resource "azurerm_resource_group" "rg"
I can import the resource into my state with
terraform import azurerm_resource_group.rg "/..."
But it's a long file and there may be multiple resources that I need to import.
So my questions are:
How to keep the state separate between the two resource groups?
How to automatically import existing resources when I run terraform apply?
How to keep the state separate between the two resource groups?
I recommend using Terraform Workspaces for this, which will give you separate state files, each with an associated workspace name.
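For example (a sketch; the workspace names are illustrative), each resource group gets its own workspace and therefore its own state:
terraform workspace new rg-a
terraform apply -var="source_file=resource_group_a.yml"
terraform workspace new rg-b
terraform apply -var="source_file=resource_group_b.yml"
You can switch between them later with terraform workspace select rg-a.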
How to automatically import existing resources when I run terraform
apply?
That's not currently possible. There are some third-party tools out there like Terraformer for accomplishing automated imports, but in my experience they don't work very well, or they never support all the resource types you need. Even then they wouldn't import resources automatically every time you run terraform apply.
Below is my Terraform resource. How can I get the project number from a variable or data source into the GCP IAM binding resource? If I run the same Terraform for another account, I currently have to change the project number manually.
resource "google_project_iam_binding" "project" {
project = var.projectid
role = "roles/container.admin"
members = [
"serviceAccount:service-1016545346555#gcp-sa-cloudbuild.iam.gserviceaccount.com",
]
}
The project number is found in the google_project data-source.
So when this one is added:
data "google_project" "project" {}
it should be accessible using:
data.google_project.project.number
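Putting that together with your resource (a sketch; it assumes the member you want is the Cloud Build service agent of the same project referenced by var.projectid):
data "google_project" "project" {
  # look up the project by the same ID the binding uses
  project_id = var.projectid
}

resource "google_project_iam_binding" "project" {
  project = var.projectid
  role    = "roles/container.admin"

  members = [
    "serviceAccount:service-${data.google_project.project.number}@gcp-sa-cloudbuild.iam.gserviceaccount.com",
  ]
}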
You can use the google_client_config data-source to access the configuration of the provider.
First, add the following data-source block to main.tf:
data "google_client_config" "current" {}
Then, you would be able to access the project_id as below:
output "project_id" {
value = data.google_client_config.current.project
}
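If what you need is the project number rather than the project ID, one option (a sketch, not part of the original answer) is to feed this value into the google_project data-source:
data "google_client_config" "current" {}

data "google_project" "current" {
  project_id = data.google_client_config.current.project
}

output "project_number" {
  value = data.google_project.current.number
}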
For more information, please refer to:
https://www.terraform.io/docs/providers/google/d/client_config.html
I'm trying to create Azure resources from the Azure website CLI, but strangely it throws the error below after running "terraform apply". It didn't do this before.
Error: A resource with the ID "/subscriptions/certdab-as441-4a670-bsda-2456437/resourceGroups/commapi" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
on terra.tf line 14, in resource "azurerm_resource_group" "rg1":
14: resource "azurerm_resource_group" "rg1" {
My Terraform code is below:
provider "azurerm" {
version = "=2.20.0"
features {}
subscription_id = "c413asdasdasdadsasda1c77"
tenant_id = "da956asdasdadsadsasd25a535"
}
resource "azurerm_resource_group" "rg" {
name = "commapi"
location = "West Europe"
}
resource "azurerm_resource_group" "rg1" {
name = "commapi"
location = "West Europe"
}
According to their web site, if a resource group is already created, we should import it. But I don't understand how.
They said I should import it as below. Should I add this line to my Terraform code?
Or should I run this import command for every Azure resource?
terraform import azurerm_resource_group.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1
So, for example, if I already created 10 Azure resources before and I add an 11th resource to my Terraform code, should I run this "import" command for each of the 10 resources that were already created? That seems so strange.
How can I create these resources?
Edit:
If I try again, it throws the error below:
Error: A resource with the ID "/subscriptions/asdd-asde1-4asd-bsda-asasd/resourceGroups/commerceapi/providers
/Microsoft.ApiManagement/service/commapi-apim" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_api_management" for more information.
on terra.tf line 65, in resource "azurerm_api_management" "apm":
65: resource "azurerm_api_management" "apm" {
Thanks!
Sample 2:
For example, when I create the API, it throws the error below:
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
azurerm_api_management.apm: Creating...
Error: A resource with the ID "/subscriptions/c4112313-123-123-123-1c77/resourceGroups/testapi/providers/Microsoft.ApiManagement/service/api-apim" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_api_management" for more information.
on commerceApi.tf line 67, in resource "azurerm_api_management" "apm":
67: resource "azurerm_api_management" "apm" {
After that, when I try the import command, this time it throws the error below:
PS C:\Users\set\Desktop\Terraform\Test_Commerceapi> terraform import azurerm_api_management.apm /subscriptions/c4asdb-234e1-23-234a-23424337/resourceGroups/testapi
azurerm_api_management.apm: Importing from ID "/subscriptions/c4234324-234-234-23-4347/resourceGroups/testapi"...
azurerm_api_management.apm: Import prepared!
Prepared azurerm_api_management for import
azurerm_api_management.apm: Refreshing state... [id=/subscriptions/c4234324-23-4234-234324-77/resourceGroups/testapi]
Error: making Read request on API Management Service "" (Resource Group "testapi"): apimanagement.ServiceClient#Get: Invalid input: autorest/validation: validation failed: parameter=serviceName constraint=MinLength value="" details: value length must be greater than or equal to 1
If you want to bring existing infrastructure under Terraform management, you can use Import.
For example, to import an existing resource group, declare the resource in the .tf file like this:
resource "azurerm_resource_group" "rg" {
}
Then run terraform import azurerm_resource_group.rg /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1 to import the existing resource group.
Then you can edit the Terraform file and add the name and location of the existing resource group to the resource block. After this, you are ready to manage the resource. Read how to Import an Existing Azure Resource in Terraform for more details.
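Note that the import ID must be the full Azure resource ID. The import in your Sample 2 failed because the ID stopped at the resource group, so the API Management service name was parsed as empty; the ID has to include the .../providers/Microsoft.ApiManagement/service/<name> segment. For example (service and resource group names taken from your error message, with a placeholder subscription ID):
terraform import azurerm_api_management.apm /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testapi/providers/Microsoft.ApiManagement/service/api-apim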
Please note that Terraform import can only import resources into the state. It does not generate configuration. If you just intend to create new resources relying on existing resources, you could use data sources.
Data sources allow data to be fetched or computed for use elsewhere in
Terraform configuration. Use of data sources allows a Terraform
configuration to make use of information defined outside of Terraform,
or defined by another separate Terraform configuration.
For example, use Data Source: azurerm_resource_group to access information about an existing resource group. You can then create new resources within that existing resource group.
data "azurerm_resource_group" "example" {
name = "existing"
}
output "id" {
value = data.azurerm_resource_group.example.id
}
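As a sketch, a new resource placed inside that existing resource group (the storage account name and settings are purely illustrative):
resource "azurerm_storage_account" "example" {
  # the existing resource group's name and location come from the data source above
  name                     = "examplestorageacct1"
  resource_group_name      = data.azurerm_resource_group.example.name
  location                 = data.azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}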
I cannot seem to retrieve the public IP address output of Terraform for the next step in a build pipeline in Azure DevOps.
terraform state pull works and outputs to a JSON file, but I cannot grep the output.
terraform state show [options] ADDRESS does not support the Azure backend, so I cannot use it or grep or filter its output.
I also tried to store the value as a file and read it in.
resource "local_file" "foo" {
content = "foo!"
filename = "${path.module}/foo.bar"
}
data "azurerm_public_ip" "buildserver-pip" {
name = "${azurerm_public_ip.buildserver-pip.name}"
resource_group_name = "${azurerm_virtual_machine.buildserver.resource_group_name}"
}
output "public_ip_address" {
value = "${data.azurerm_public_ip.buildserver-pip.ip_address}"
}
I expect the public IP address to be passed out so it can be used in Ansible playbooks or a bash or Python script in the next step.
Building on JleruOHeP's answer above, the following solution will automatically create a pipeline variable for every output provided by the Terraform script.
Create a PowerShell step in your release and insert the following inline PowerShell:
$json = Get-Content $env:jsonPath | Out-String | ConvertFrom-Json
foreach ($prop in $json.psobject.properties) {
  Write-Host("##vso[task.setvariable variable=$($prop.Name);]$($prop.Value.value)")
}
Make sure you have set the jsonPath environment variable on the PowerShell step to the Terraform task's JSON output path, for example like this:
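A sketch of how that could look as a YAML pipeline step (this assumes the Terraform task's output variables reference name is TerraformOutput, as used in the related answer below):
steps:
- powershell: |
    # read the Terraform outputs JSON and expose each output as a pipeline variable
    $json = Get-Content $env:jsonPath | Out-String | ConvertFrom-Json
    foreach ($prop in $json.psobject.properties) {
      Write-Host("##vso[task.setvariable variable=$($prop.Name);]$($prop.Value.value)")
    }
  displayName: 'Map Terraform outputs to pipeline variables'
  env:
    # jsonPath points at the outputs JSON file written by the Terraform task
    jsonPath: $(TerraformOutput.jsonOutputVariablesPath)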
For your purpose, I suggest you store the Terraform state in an Azure storage account. Then you can use the remote state in another Terraform configuration. Here is an example:
Create the public IP and store the state in an Azure Storage account blob:
terraform {
  backend "azurerm" {
    storage_account_name = "yourAccountName"
    container_name       = "yourContainerName"
    key                  = "terraform.tfstate"
  }
}

resource "azurerm_public_ip" "main" {
  name                = "terraform_backend_pip"
  location            = "East US"
  resource_group_name = "yourResourceGroup"
  allocation_method   = "Static"
}

# this is important: this output is what the other configuration can read from the remote state
output "public_address" {
  value = "${azurerm_public_ip.main.ip_address}"
}
Reference the remote state in another Terraform file:
data "terraform_remote_state" "azure" {
backend = "azurerm"
config = {
storage_account_name = "charlescloudshell"
container_name = "terraform"
key = "terraform.tfstate"
}
}
# the remote state outputs contain all the output that you set in the above file
output "remote_backend" {
value = "${data.terraform_remote_state.azure.outputs.public_address}"
}
You can follow the steps about How to store state in Azure Storage here.
Hope it helps. And if you have any more questions, please let me know. If it works for you, please accept it as the answer.
If I understand your question correctly, you wanted to provision something (a public IP) with Terraform and then have it available to further steps via a variable, all within a single Azure DevOps pipeline.
It can be done with a simple output and a PowerShell script (which can be inline):
1) I assume you already use the Terraform task in the pipeline (https://github.com/microsoft/azure-pipelines-extensions/tree/master/Extensions/Terraform/Src/Tasks/TerraformTaskV1)
2) Another assumption is that you have an output variable (from your example, you do).
3) You can specify the output variable from this task:
4) And finally, add a PowerShell step as the next step with the simplest script, and set its jsonPath environment variable to $(TerraformOutput.jsonOutputVariablesPath):
$json = Get-Content $env:jsonPath | Out-String | ConvertFrom-Json
Write-Host "##vso[task.setvariable variable=MyNewIp]$($json.public_ip_address.value)"
5) ....
6) PROFIT! You have the IP address available as a pipeline variable MyNewIp now!
terraform output variable
1. As shown in the above images, you will get the variables list only if you add "Terraform by Microsoft DevLabs" as the task. Other Terraform task providers don't support output variables.
Terraform by Microsoft DevLabs
2. There is another way to convert a Terraform output variable to a pipeline variable (for this you need to add the Terraform CLI task, "Terraform CLI by Charles Zipp"):
Step 1: Terraform code to output the storage account access key (see the sketch after Step 3).
Step 2: Add a terraform output task after terraform apply.
Each Terraform output variable will be prefixed with TF_OUT when converted to a pipeline variable.
Access a pipeline variable by referencing it as $(TF_OUT_variable_name).
The configuration directory for terraform output must be the same as for terraform apply.
Step 3: PowerShell script to access the pipeline variable.
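A minimal sketch of the Terraform side of Step 1 (the resource name "example" and the output name are illustrative; with the Terraform CLI task this output would surface as the pipeline variable $(TF_OUT_storage_access_key)):
# illustrative output exposing a storage account access key;
# azurerm_storage_account.example is assumed to exist elsewhere in the configuration
output "storage_access_key" {
  value = azurerm_storage_account.example.primary_access_key
}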
Reference:
Terraform Output to Pipeline Variables
https://marketplace.visualstudio.com/items?itemName=charleszipp.azure-pipelines-tasks-terraform
Terraform v0.11.9
+ provider.aws v1.41.0
I want to know if there is a way to update a resource that is not created directly in the plan, but by a resource in the plan. The example is creating a managed Active Directory using aws_directory_service_directory. This process creates a security group, and I want to add tags to that security group. Here is the snippet I'm using to create the resource:
resource "aws_directory_service_directory" "NewDS" {
name = "${local.DSFQDN}"
password = "${var.ADPassword}"
size = "Large"
type = "MicrosoftAD"
short_name = "${local.DSShortName}"
vpc_settings {
vpc_id = "${aws_vpc.this.id}"
subnet_ids = ["${aws_subnet.private.0.id}",
"${aws_subnet.private.1.id}",
]
}
tags = "${merge(var.tags, var.ds_tags, map("Name", format("%s", local.VPCname)))}"
}
I can reference the newly created security group using
"${aws_directory_service_directory.NewDS.security_group_id}"
but I can't use that to update the resource. I want to add all of the tags I have on the directory to the security group, as well as update the Name tag. I've tried using a local-exec provisioner, but the results have not been consistent, and getting the map of tags into the command without hard-coding it has not worked.
Thanks
I moved the local-exec provisioner out of the directory service resource and into a dummy resource.
resource "null_resource" "ManagedADTags"
{
provisioner "local-exec"
{
command = "aws --profile ${var.profile} --region ${var.region} ec2 create-tags --
resources ${aws_directory_service_directory.NewDS.security_group_id} --tags
Key=Name,Value=${format("${local.security_group_prefix}-%s","ManagedAD")}"
}
}
(The command = is a single line)
Using the format function allowed me to send the entire list of tags to the resource. Terraform doesn't "manage" them, but it does allow me to update them as part of the plan.
You can then leverage the aws_ec2_tag resource, which can tag resources that your configuration did not create directly (such as this security group), in conjunction with the provider-level ignore_tags argument. Please refer to another answer I wrote on the topic for more detail.
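A sketch of what that could look like for the directory's security group (only the Name tag is shown; local.security_group_prefix is taken from the answer above, and aws_ec2_tag needs a much newer AWS provider than the v1.41.0 in the question):
resource "aws_ec2_tag" "managed_ad_sg_name" {
  # tag the security group that the directory service created
  resource_id = aws_directory_service_directory.NewDS.security_group_id
  key         = "Name"
  value       = format("%s-%s", local.security_group_prefix, "ManagedAD")
}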
AWS already exposes an API for this that lets you tag multiple resources at once, not just a single resource. I'm not sure why Terraform doesn't implement that.
Just hit this as well. It turns out the tags propagate from the directory service, so if you tag your directory appropriately, the Name tag from your directory service will be applied to the security group.