I have a question about Puppet manifests.
When I have the following Puppet resource:
apache::vhost { 'homepage':
  …
}
How can I retrieve the 'homepage' string from outside the resource?
I am trying to retrieve the deployed manifest and metadata information using the Terraform helm_release resource.
The metadata values come out fine; however, retrieving the manifest fails.
Sample code:
output "show_release_md" {
  value = helm_release.testdeploy.metadata
}
This works. But the following fails:
output "show_release_manifest" {
  value = helm_release.testdeploy.manifest
}
Error: Unsupported attribute. This object has no argument, nested block, or exported attribute named "manifest".
Any ideas?
You must enable the manifest feature flag in the experiments block of the Helm provider first. For reference, see the Helm provider's experiments block documentation and the helm_release attribute reference.
e.g.
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
experiments {
manifest = true
}
}
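With the flag enabled, the output from the question should then work as written:
output "show_release_manifest" {
  value = helm_release.testdeploy.manifest
}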
I want to develop a single Terraform module to deploy my resources, with the resources being stored in separate YAML files. For example:
# resource_group_a.yml
name: "ResourceGroupA"
location: "westus"
# resource_group_b.yml
name: "ResourceGroupB"
location: "norwayeast"
And the following Terraform module:
# deploy/main.tf
variable "source_file" {
  type = string # path to a YAML file
}

locals {
  rg = yamldecode(file(var.source_file))
}

resource "azurerm_resource_group" "rg" {
  name     = local.rg.name
  location = local.rg.location
}
I can deploy the resource groups with:
terraform apply -var="source_file=resource_group_a.yml"
terraform apply -var="source_file=resource_group_b.yml"
But then I run into two problems, due to the state that Terraform keeps about my infrastructure:
If I deploy Resource Group A, it deletes Resource Group B, and vice versa.
If I manually remove the .tfstate file prior to running apply, and the resource group already exists, I get an error:
Error: A resource with the ID "/..." already exists - to be managed via Terraform
this resource needs to be imported into the State.

  with azurerm_resource_group.rg,
  on main.tf line 8, in resource "azurerm_resource_group" "rg":
I can import the resource into my state with:
terraform import azurerm_resource_group.rg "/..."
But the resource ID is long, and there may be multiple resources that I need to import.
So my questions are:
How to keep the state separate between the two resource groups?
How to automatically import existing resources when I run terraform apply?
How to keep the state separate between the two resource groups?
I recommend using Terraform Workspaces for this, which will give you separate state files, each with an associated workspace name.
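A sketch of the workflow with the files from the question (the workspace names here are my own choice):
terraform workspace new rg_a
terraform apply -var="source_file=resource_group_a.yml"

terraform workspace new rg_b
terraform apply -var="source_file=resource_group_b.yml"

# later, to update an existing deployment:
terraform workspace select rg_a
terraform apply -var="source_file=resource_group_a.yml"
Each workspace keeps its own state file, so applying one resource group no longer deletes the other.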
How to automatically import existing resources when I run terraform apply?
That's not currently possible. There are some third-party tools out there like Terraformer for accomplishing automated imports, but in my experience they don't work very well, or they never support all the resource types you need. Even then they wouldn't import resources automatically every time you run terraform apply.
I'm using Terraform to deploy some dev and prod VMs on our VMware vCenter infrastructure and use vsphere tags to define responsibilities of VMs. I therefore added the following to the (sub)module:
resource "vsphere_tag" "tag" {
name = "SYS-Team"
category_id = "Responsibility"
description = "Systems group"
}
...
resource "vsphere_virtual_machine" "web" {
tags = [vsphere_tag.tag.id]
...
}
Now, when I destroy e.g. the dev infra, it also deletes the prod vSphere tag and leaves the VMs without the tag.
I tried to skip the deletion with a lifecycle block, but then I would need to delete each resource separately, which I don't like.
lifecycle {
  prevent_destroy = true
}
Is there a way to add an existing tag without having the resource managed by Terraform? Something hardcoded without having the tag included as a resource like:
resource "vsphere_virtual_machine" "web" {
tags = [{
name = "SYS-Team"
category_id = "Responsibility"
description = "Systems group"
}
]
...
}
You can use Terraform's data sources to refer to things that are either not managed by Terraform or are managed in a different Terraform context, when you need to retrieve some output from them, such as an automatically generated ID.
In this case you could use the vsphere_tag data source to look up the ID of the tag:
data "vsphere_tag_category" "responsibility" {
name = "Responsibility"
}
data "vsphere_tag" "sys_team" {
name = "SYS-Team"
category_id = data.vsphere_tag_category.responsibility.id
}
...
resource "vsphere_virtual_machine" "web" {
tags = [data.vsphere_tag.sys_team.id]
...
}
This will use a vSphere tag that was either created externally or is managed by Terraform elsewhere, allowing you to run terraform destroy to destroy the VM while keeping the tag.
If I understood correctly, the real problem is:
when I destroy e.g. the dev infra, it also deletes the prod vsphere tag
That should not be happening! Any cross-environment deletion is a red flag.
It feels like the problem is in your pipeline, not your Terraform code.
In the deployment pipelines I create, dev resources are not mixed with prod.
If they are mixed, you are just asking for trouble, and your team should make redesigning that a high priority.
You do ask:
Is there a way to add an existing tag without having the resource managed by Terraform?
Yes, you can use PowerCLI for that:
https://blogs.vmware.com/PowerCLI/2014/03/using-vsphere-tags-powercli.html
The command to add a tag is really simple:
New-Tag -Name "jsmith" -Category "Owner"
You could even integrate that into your Terraform code with a null_resource, something like:
resource "null_resource" "create_tag" {
  provisioner "local-exec" {
    # provisioners run at creation time by default
    command     = "New-Tag -Name 'jsmith' -Category 'Owner'"
    interpreter = ["PowerShell", "-Command"]
  }
}
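Note that since there is no destroy-time provisioner here, terraform destroy will not touch the tag, which matches what you want; you would only add a second provisioner with when = destroy if you later want Terraform to clean it up.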
Below is my Terraform resource. How can I set the project number from a variable in the GCP IAM binding resource? If I run the same Terraform for another account, I have to change it manually.
resource "google_project_iam_binding" "project" {
project = var.projectid
role = "roles/container.admin"
members = [
"serviceAccount:service-1016545346555#gcp-sa-cloudbuild.iam.gserviceaccount.com",
]
}
The project number is found in the google_project data source.
So when this one is added:
data "google_project" "project" {}
it should be accessible using:
data.google_project.project.number
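A sketch of how that plugs into the binding from the question, reusing var.projectid (this assumes the credentials running Terraform can read the project):
data "google_project" "project" {
  # look up the project referenced by the variable from the question
  project_id = var.projectid
}

resource "google_project_iam_binding" "project" {
  project = var.projectid
  role    = "roles/container.admin"
  members = [
    # interpolate the project number instead of hardcoding it
    "serviceAccount:service-${data.google_project.project.number}@gcp-sa-cloudbuild.iam.gserviceaccount.com",
  ]
}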
You can use google_client_config data-source to access the configuration of the provider.
First, add the following data-source block to main.tf:
data "google_client_config" "current" {}
Then, you would be able to access the project_id as below:
output "project_id" {
value = data.google_client_config.current.project
}
For more information, please refer to:
https://www.terraform.io/docs/providers/google/d/client_config.html
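Note that google_client_config exposes the project ID configured on the provider, not the project number; if you need the number, combine this with the google_project data source from the previous answer.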
I am trying to set up an EC2 instance with an Elastic IP using Terraform, reusing the existing VPC and subnets for the new instance.
But Terraform is unable to recognise the existing subnet.
I am referencing the pre-existing subnet like this:
variable "subnet_id" {}
data "aws_subnet" "my-subnet" {
id = "${var.subnet_id}"
}
When I run terraform plan I get this error:
Error: InvalidSubnetID.NotFound: The subnet ID 'subnet-02xxxxxxxxxx7' does not exist
status code: 400, request id: c4b6142b-5dfd-458c-959d-e5440b89c9fd
on ec2.tf line 3, in data "aws_subnet" "my-subnet":
3: data "aws_subnet" "my-subnet" {
This subnet was created by terraform in the past. So why does it say it doesn't exist?
Suggested debug:
Create two new Terraform files:
In the first file, create a simple subnet (or a VPC and then a subnet, whatever).
In the second file, try to retrieve the subnet ID like you posted (a minimal sketch follows below).
The idea here is not to change anything else: same region, same credentials, same everything.
Possible outcomes:
1) You are not able to get the subnet ID: then you should be looking at things like the Terraform version, provider version, and so on.
2) You get the subnet ID, which means something in your credentials, region, or copy-and-paste of the name, basically human error, is causing this blockade, and you should revisit how you're doing things with an emphasis on typos and permissions.
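A minimal sketch of that experiment (the resource names and CIDR blocks are placeholders of my own):
# first.tf: create a throwaway VPC and subnet
resource "aws_vpc" "debug" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "debug" {
  vpc_id     = aws_vpc.debug.id
  cidr_block = "10.0.1.0/24"
}

# second.tf: look the same subnet up again by ID
data "aws_subnet" "debug_check" {
  id = aws_subnet.debug.id
}
If the lookup resolves here but not in your real configuration, the difference is in the environment (credentials, region, the ID you pasted), not in Terraform itself.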
We can use the data sources aws_vpc, aws_subnet, and aws_subnet_ids:
data "aws_vpc" "default" {
default = true
}
data "aws_subnet_ids" "selected" {
vpc_id = data.aws_vpc.default.id
}
data "aws_subnet" "selected" {
for_each = data.aws_subnet_ids.selected.ids
id = each.value
}
And we can use them like in this LB example below:
resource "aws_alb" "alb" {
...
subnets = [for subnet in data.aws_subnet.selected : subnet.id]
...
}
This provides a reference to the VPC and subnets so you can pass their IDs to other resources. Terraform does not manage the VPC or subnets when you do this; it simply references them.
It's irrelevant whether the subnet was initially created by Terraform or not.
The data source is attempting to find the subnet in the current state file. Your plan command returns the error because the subnet is not in your state file.
Try importing the subnet and then re-running the plan.
$ terraform import aws_subnet.public_subnet subnet-02xxxxxxxxxx7