I was able to create a VM using Terraform, but when I use the context block I'm facing this issue:
Error: Unsupported block type

  on terraform.tf line 34, in resource "opennebula_template" "mytemplate":
  34: context {

Blocks of type "context" are not expected here. Did you mean to define
argument "context"? If so, use the equals sign to assign it a value.
I am adding it exactly as shown in the official Terraform provider docs here:
https://registry.terraform.io/providers/OpenNebula/opennebula/latest/docs/resources/virtual_machine
variable "one_endpoint" {}
variable "one_username" {}
variable "one_password" {}
variable "one_flow_endpoint" {}
provider "opennebula" {
endpoint = var.one_endpoint
flow_endpoint = var.one_flow_endpoint
username = var.one_username
password = var.one_password
}
#########################################################################
resource "opennebula_image" "CentOS7-clone" {
clone_from_image = 35
name = "CentOS7-clone"
datastore_id = 1
persistent = false
permissions = "660"
group = "oneadmin"
}
#########################################################################
resource "opennebula_virtual_machine" "demo" {
count = 1
name = "centos7"
cpu = 2
vcpu = 2
memory = 4096
group = "oneadmin"
permissions = "660"
context {
NETWORK = "YES"
HOSTNAME = "$NAME"
START_SCRIPT ="yum upgrade"
}
graphics {
type = "VNC"
listen = "0.0.0.0"
keymap = "fr"
}
os {
arch = "x86_64"
boot = "disk0"
}
disk {
image_id = opennebula_image.CentOS7-clone.id
size = 10000
target = "vda"
driver = "qcow2"
}
nic {
model = "virtio"
network_id = 7
security_groups = [0]
}
vmgroup {
vmgroup_id = 2
role = "vm-group"
}
tags = {
environment = "dev"
}
timeout = 5
}
You need to define context as an argument, with an equals sign, like below:
context = {
  NETWORK      = "YES"
  HOSTNAME     = "$NAME"
  START_SCRIPT = "yum upgrade"
}
Omitting the equals sign when defining attributes was only supported in Terraform <0.12 (see Terraform 0.12 Compatibility for Providers - Terraform by HashiCorp). There is an open issue in the GitHub repository to update the documentation.
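To illustrate the distinction the error message is drawing: block syntax (no equals sign) defines nested configuration such as the graphics or disk blocks, while argument syntax (with an equals sign) assigns a value, and in this provider context is a map argument:

# A nested block: no equals sign
graphics {
  type = "VNC"
}

# A map argument: assigned with an equals sign
context = {
  NETWORK = "YES"
}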
I have a bunch of disks created in one Terraform configuration and I am trying to attach them to instances in another configuration. I'm using OCI, but I think that is irrelevant here, so I'm stripping out what I think is immaterial.
Everything works fine in the same configuration, but I don't know how to do this in a separate configuration using remote state.
Creating the disks in config_1
resource "oci_core_volume" "data_disks" {
for_each = var.block_volume_params
...
display_name = each.value.name
size_in_gbs = each.value.size_in_gbs
vpus_per_gb = each.value.vpus_per_gb
}
variable "block_volume_params" {
type = map(object({
name = string
size_in_gbs = number
vpus_per_gb = number
}))
default = null
}
block_volume_params = {
instance1 = {
name = "instance1"
size_in_gbs = 100
vpus_per_gb = 20
}
instance2 = {
name = "instance2"
size_in_gbs = 100
vpus_per_gb = 20
}
}
Attaching the disks in config_2
resource "oci_core_volume_attachment" "data_disks" {
for_each = var.block_volume_params
attachment_type = "iSCSI"
instance_id = oci_core_instance.instances[each.value.instance_name].id
volume_id = data.terraform_remote_state.config_1.outputs.persistent-disks-ocids[*]
}
That last line is obviously wrong, but it is what I am asking about. How can I tie the remote state outputs to the instances, which have the same names as the disks?
In config_1, what should the output of the ids for the disks be?
output "persistent-disks-ocids" {
value = [for key, value in oci_core_volume.data_disks : value.id]
}
^^ This does not output the instance/disk names.
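One way to tie them together, as a sketch (assuming the attachment keys in config_2 are the same disk names used as for_each keys in config_1, and that the instances are looked up by the name field from the variable): output a map keyed by disk name rather than a list, then index it with each.key.

In config_1:

output "persistent-disks-ocids" {
  # Map each for_each key (the disk name) to its volume OCID
  value = { for key, value in oci_core_volume.data_disks : key => value.id }
}

In config_2:

resource "oci_core_volume_attachment" "data_disks" {
  for_each        = var.block_volume_params
  attachment_type = "iSCSI"
  instance_id     = oci_core_instance.instances[each.value.name].id
  # Look up the volume by the same key used for the instance
  volume_id       = data.terraform_remote_state.config_1.outputs.persistent-disks-ocids[each.key]
}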
I'm trying to create a Zabbix template with applications and a trigger defined.
I can create the template, import my hosts, and associate them with it.
Now when I try to add the trigger to the template, I receive an error on the object.
This is my data.tf:
data "zabbix_hostgroup" "group" {
name = "Templates"
}
data "zabbix_template" "template" {
for_each = {
common_simple = { name = "Common Simple" }
common_snmp = { name = "Common SNMP" }
class_template = { name = var.class_names[var.class_id] }
}
name = each.value.name
}
data "zabbix_proxy" "proxy" {
for_each = {
for inst in var.instances :
"${inst.instance}.${inst.site}" => inst.site
}
#host = "zabpxy01.${each.value}.mysite.local"
host = "mon-proxy1.${each.value}.mtsite.local"
}
and this is my hosts.tf:
# create host group for specific to service
resource "zabbix_hostgroup" "hostgroup" {
name = var.class_names[var.class_id]
}
# create template
resource "zabbix_template" "template" {
host = var.class_id
name = var.class_names[var.class_id]
description = var.class_names[var.class_id]
groups = [
data.zabbix_hostgroup.group.id
]
}
# create application
resource "zabbix_application" "application" {
hostid = data.zabbix_template.template.id
name = var.class_names[var.class_id]
}
# create snmp disk_total item
resource "zabbix_item_snmp" "disk_total_item" {
hostid = data.zabbix_template.template.id
key = "snmp_disk_root_total"
name = "Disk / total"
valuetype = "unsigned"
delay = "1m"
snmp_oid="HOST-RESOURCES-MIB::hrStorageSize[\"index\", \"HOST-RESOURCES-MIB::hrStorageDescr\", \"/\"]"
depends_on = [
data.zabbix_template.template
]
}
# create snmp disk_used item
resource "zabbix_item_snmp" "disk_used_item" {
hostid = data.zabbix_template.template.id
key = "snmp_disk_root_used"
name = "Disk / used"
valuetype = "unsigned"
delay = "1m"
snmp_oid="HOST-RESOURCES-MIB::hrStorageUsed[\"index\", \"HOST-RESOURCES-MIB::hrStorageDescr\", \"/\"]"
depends_on = [
data.zabbix_template.template
]
}
# create trigger > 75%
resource "zabbix_trigger" "trigger" {
name = "Disk Usage 75%"
expression = "({${data.zabbix_template.template.host}:${zabbix_item_snmp.disk_used_item.key}.last()} / {${data.zabbix_template.template.host}:${zabbix_item_snmp.disk_total_item.key}.last()}) * 100 >= 75"
priority = "warn"
enabled = true
multiple = false
recovery_none = false
manual_close = false
}
# create hosts
resource "zabbix_host" "host" {
for_each = {
for inst in var.instances : "${var.class_id}${format("%02d", inst.instance)}.${inst.site}" => inst
}
host = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["hostname"]
name = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["hostname"]
enabled = false
proxyid = data.zabbix_proxy.proxy["${each.value.instance}.${each.value.site}"].id
groups = [
zabbix_hostgroup.hostgroup.id
]
templates = concat ([
data.zabbix_template.template["common_simple"].id,
data.zabbix_template.template["common_snmp"].id,
zabbix_template.template.id
])
# add SNMP interface
interface {
type = "snmp"
ip = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["mgmt0"]
main = true
port = 161
}
# Add Zabbix Agent interface
interface {
type = "agent"
ip = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["mgmt0"]
main = true
port = 10050
}
macro {
name = "{$INTERFACE_MONITOR}"
value = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["mgmt0"]
}
macro {
name = "{$SNMP_COMMUNITY}"
value = var.ip_addresses[var.class_id][each.value.site][each.value.instance]["snmp"]
}
depends_on = [
zabbix_hostgroup.hostgroup,
data.zabbix_template.template,
data.zabbix_proxy.proxy,
]
}
output "class_template_id" {
value = zabbix_template.template.id
description = "Template ID of created class template for items"
}
When I run terraform plan I receive the error:
Error: Missing resource instance key

  on hosts/hosts.tf line 26, in resource "zabbix_application" "application":
  26: hostid = data.zabbix_template.template.id

Because data.zabbix_template.template has "for_each" set, its attributes must
be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  data.zabbix_template.template[each.key]
Where is my error?
Thanks for the support
UPDATE
I tried to use
output "data_zabbix_template" {
value = data.zabbix_template.template
}
but I don't see any output when I run terraform plan
I tried modifying it to:
hostid = data.zabbix_template.template.class_template.id
but I continue to receive the same error:
Error: Missing resource instance key

  on hosts/hosts.tf line 27, in resource "zabbix_application" "application":
  27: hostid = data.zabbix_template.template.class_template.id

Because data.zabbix_template.template has "for_each" set, its attributes must
be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  data.zabbix_template.template[each.key]

Error: Unsupported attribute

  on hosts/hosts.tf line 27, in resource "zabbix_application" "application":
  27: hostid = data.zabbix_template.template.class_template.id

This object has no argument, nested block, or exported attribute named "class_template".
UPDATE:
My script, for each host that I add, assigns two existing templates ("Common Simple" and "Common SNMP") and creates a new template, as below:
# module.mytemplate-servers_host.zabbix_template.template will be created
+ resource "zabbix_template" "template" {
+ description = "mytemplate-servers"
+ groups = [
+ "1",
]
+ host = "mytemplate-servers"
+ id = (known after apply)
+ name = "mytemplate-servers"
}
Now my goal is to add an application to this template and set two items and one trigger.
When you use for_each in a data source or resource, the result is a map: its keys are the same as the keys of the for_each, and its values are the regular output of that data source or resource for the input value with that key.
Try using:
output "data_zabbix_template" {
value = data.zabbix_template.template
}
And you'll see what I mean. The output will look something like:
data_zabbix_template = {
common_simple = {...}
common_snmp = {...}
class_template = {...}
}
So in order to use this data source (on the line where the error is being thrown), you need to index it with one of the for_each keys:
hostid = data.zabbix_template.template["common_simple"].id
Replace "common_simple" with whichever key in the for_each you want to use. Note that it must be the bracket index syntax: plain attribute access like .class_template is what produced the Unsupported attribute error in your update. You'll need to do this everywhere that you use data.zabbix_template.template.
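For instance, if the class template from your for_each is the one the application should attach to (which your update suggests), the resource would become, as a sketch:

resource "zabbix_application" "application" {
  # Index the for_each'd data source with the key of the wanted instance
  hostid = data.zabbix_template.template["class_template"].id
  name   = var.class_names[var.class_id]
}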
I have this script, which works great. It creates 3 instances with the specified tags to identify them easily. The issue is that I want to add a remote-exec provisioner (currently commented out) to the code to install some packages. If I were using count, I could have looped over it to run remote-exec on all the instances. I could not use count because I had to use for_each to loop over a local list. Since count and for_each cannot be used together, how do I loop over the instances to retrieve their IP addresses for use in the remote-exec provisioner?
On DigitalOcean and AWS, I was able to get this working using host = "${self.public_ip}", but it does not work on Vultr and gives an Unsupported attribute error.
instance.tf
resource "vultr_ssh_key" "kubernetes" {
name = "kubernetes"
ssh_key = file("kubernetes.pub")
}
resource "vultr_instance" "kubernetes_instance" {
for_each = toset(local.expanded_names)
plan = "vc2-1c-2gb"
region = "sgp"
os_id = "387"
label = each.value
tag = each.value
hostname = each.value
enable_ipv6 = true
backups = "disabled"
ddos_protection = false
activation_email = false
ssh_key_ids = [vultr_ssh_key.kubernetes.id]
/* connection {
type = "ssh"
user = "root"
private_key = file("kubernetes")
timeout = "2m"
host = vultr_instance.kubernetes_instance[each.key].ipv4_address
}
provisioner "remote-exec" {
inline = "sudo hostnamectl set-hostname ${each.value}"
} */
}
locals {
expanded_names = flatten([
for name, count in var.host_name : [
for i in range(count) : format("%s-%02d", name, i + 1)
]
])
}
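For reference, with the default host_name map shown in variables.tf below, local.expanded_names evaluates to ["Manager-01", "Worker-01", "Worker-02"]: map keys iterate in lexical order, and format("%s-%02d", name, i + 1) zero-pads the per-name index.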
provider.tf
terraform {
required_providers {
vultr = {
source = "vultr/vultr"
version = "2.3.1"
}
}
}
provider "vultr" {
api_key = "***************************"
rate_limit = 700
retry_limit = 3
}
variables.tf
variable "host_name" {
type = map(number)
default = {
"Manager" = 1
"Worker" = 2
}
}
The attribute you are looking for is called main_ip, not ipv4_address. Inside the resource's own connection block it is accessible as self.main_ip.
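With that, a sketch of the corrected (currently commented) blocks inside the vultr_instance resource, keeping everything else as in the question; note also that inline expects a list of strings rather than a bare string:

connection {
  type        = "ssh"
  user        = "root"
  private_key = file("kubernetes")
  timeout     = "2m"
  # self refers to the instance currently being provisioned,
  # so no index lookup into the for_each map is needed
  host        = self.main_ip
}

provisioner "remote-exec" {
  # inline takes a list of commands
  inline = ["sudo hostnamectl set-hostname ${each.value}"]
}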
I am using Terraform and I can't get the right parameters to create my Glue jobs.
As I am not a Terraform pro (I'm a beginner), I wonder how this works.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/glue_job#glue_version
I am not getting the right updates on my Glue job resource using these parameters:
resource "aws_glue_job" "job_name" {
name = "job_name"
description = "job-desc"
role_arn = "${aws_iam_role.service-name.arn}"
max_capacity = 2
max_retries = 1
timeout = 60
command {
script_location = "s3://my_bucket"
python_version = "3"
}
default_arguments = {
"--job-language" = "python"
"--ENV" = "env"
"--spark-event-logs-path" = "s3://my_bucket"
"--job-bookmark-option" = "job-bookmark-enable"
"--glue_version" = "2.0"
"--worker_type" = "G.1X"
"--enable-spark-ui" = "true"
}
execution_property {
max_concurrent_runs = 1
}
}
I don't know where and how to put those params. Could you please help me?
"--glue_version" = "2.0"
"--worker_type" = "G.1X"
Regards.
The glue_version and worker_type arguments go at the same level as default_arguments, not inside it.
Once you move them out, your resource block may look like this:
resource "aws_glue_job" "job_name" {
name = "job_name"
description = "job-desc"
role_arn = "${aws_iam_role.service-name.arn}"
max_capacity = 2
max_retries = 1
timeout = 60
glue_version = "2.0"
worker_type = "G.1X"
command {
script_location = "s3://my_bucket"
python_version = "3"
}
default_arguments = {
"--job-language" = "python"
"--ENV" = "env"
"--spark-event-logs-path" = "s3://my_bucket"
"--job-bookmark-option" = "job-bookmark-enable"
"--enable-spark-ui" = "true"
}
execution_property {
max_concurrent_runs = 1
}
}
EDIT
The provider version you are using, 2.30.0, doesn't support these arguments for the aws_glue_job resource.
The glue_version argument was not added until version 2.34.0 of the AWS Provider.
The worker_type argument was not added until version 2.39.0.
You will need to update the provider to support these arguments.
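A minimal sketch of pinning a new enough provider (the required_providers source syntax assumes Terraform 0.13+; on 0.12 you would constrain version in the provider block instead), after which you would run terraform init -upgrade:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # 2.39.0 is the first release supporting both glue_version and worker_type
      version = ">= 2.39.0"
    }
  }
}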
I have a Terraform template that creates multiple EC2 instances.
I then created a few Elastic Network Interfaces in the AWS console and added them to the Terraform template.
Now I want to map the appropriate ENI to each instance, so I added the locals and variables below.
locals {
instance_ami = {
A = "ami-11111"
B = "ami-22222"
C = "ami-33333"
D = "ami-4444"
}
}
variable "instance_eni" {
description = "Pre created Network Interfaces"
default = [
{
name = "A"
id = "eni-0a15890a6f567f487"
},
{
name = "B"
id = "eni-089a68a526af5775b"
},
{
name = "C"
id = "eni-09ec8ad891c8e9d91"
},
{
name = "D"
id = "eni-0fd5ca23d3af654a9"
}
]
}
resource "aws_instance" "instance" {
for_each = local.instance_ami
ami = each.value
instance_type = var.instance_type
key_name = var.keypair
root_block_device {
delete_on_termination = true
volume_size = 80
volume_type = "gp2"
}
dynamic "network_interface" {
for_each = [for eni in var.instance_eni : {
eni_id = eni.id
}]
content {
device_index = 0
network_interface_id = network_interface.value.eni_id
delete_on_termination = false
}
}
}
I am getting the error below:
Error: Error launching source instance: InvalidParameterValue: Each network interface requires a unique device index.
    status code: 400, request id: 4a482753-bddc-4fc3-90f4-2f1c5e2472c7
I think Terraform is trying to attach all 4 ENIs to a single instance.
What should be done to attach each ENI to an individual instance?
The configuration you shared in your question is asking Terraform to manage four instances, each of which has four network interfaces associated with it. That's problematic in two different ways:
1. All four of the network interfaces on each instance are configured with the same device_index, which is invalid and is what the error message here is reporting.
2. Even if you were to fix that, it would then try to attach the same four network interfaces to four different EC2 instances, which is invalid: each network interface can be attached to only one instance at a time.
To address that and get the behavior you wanted, you only need one network_interface block, whose content is different for each of the instances:
locals {
instance_ami = {
A = "ami-11111"
B = "ami-22222"
C = "ami-33333"
D = "ami-4444"
}
}
variable "instance_eni" {
description = "Pre created Network Interfaces"
default = [
{
name = "A"
id = "eni-0a15890a6f567f487"
},
{
name = "B"
id = "eni-089a68a526af5775b"
},
{
name = "C"
id = "eni-09ec8ad891c8e9d91"
},
{
name = "D"
id = "eni-0fd5ca23d3af654a9"
}
]
}
locals {
# This expression is transforming the instance_eni
# value into a more convenient shape: a map from
# instance key to network interface id. You could
# also choose to just change directly the
# definition of variable "instance_eni" to already
# be such a map, but I did it this way to preserve
# your module interface as given.
instance_network_interfaces = {
for ni in var.instance_eni : ni.name => ni.id
}
}
resource "aws_instance" "instance" {
for_each = local.instance_ami
ami = each.value
instance_type = var.instance_type
key_name = var.keypair
root_block_device {
delete_on_termination = true
volume_size = 80
volume_type = "gp2"
}
network_interface {
device_index = 0
network_interface_id = local.instance_network_interfaces[each.key]
delete_on_termination = false
}
}
Now each instance has only one network interface, with each one attaching to the corresponding ENI ID given in your input variable. Referring to each.key and each.value is how we can create differences between each of the instances declared when using resource for_each; we don't need any other repetition constructs inside unless we want to create nested repetitions, like having a dynamic number of network interfaces for each instance.
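To make the transformation concrete, here is what the helper local evaluates to with the variable's default value, and the lookup each instance performs (purely an illustration of the code above):

# local.instance_network_interfaces evaluates to:
# {
#   A = "eni-0a15890a6f567f487"
#   B = "eni-089a68a526af5775b"
#   C = "eni-09ec8ad891c8e9d91"
#   D = "eni-0fd5ca23d3af654a9"
# }
# so the instance with each.key "A" gets:
# local.instance_network_interfaces["A"]  =>  "eni-0a15890a6f567f487"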