Is anyone able to provision Aurora 5.7 (aurora-mysql) with Terraform? I am stuck on the following error. Any idea why?
* aws_rds_cluster.default: InvalidParameterCombination:
The Parameter Group test-aurora-57-cluster-parameter-group with DBParameterGroupFamily
aurora-mysql5.7 cannot be used for this instance. Please use a Parameter Group with DBParameterGroupFamily oscar5.6
status code: 400, request id: 09b5d660-1d71-49bf-a5de-a62b87805038
Here are the cluster and cluster instance configurations:
resource "aws_rds_cluster_instance" "cluster_instance" {
#...
db_parameter_group_name = "${aws_db_parameter_group.aurora_db_57_parameter_group.id}"
}
resource "aws_rds_cluster" "default" {
#...
db_cluster_parameter_group_name = "${aws_rds_cluster_parameter_group.aurora_57_cluster_parameter_group.id}"
}
resource "aws_db_parameter_group" "aurora_db_57_parameter_group" {
name = "test-aurora-db-57-parameter-group"
family = "aurora-mysql5.7"
description = "test-aurora-db-57-parameter-group"
}
resource "aws_rds_cluster_parameter_group" "aurora_57_cluster_parameter_group" {
name = "test-aurora-57-cluster-parameter-group"
family = "aurora-mysql5.7"
description = "test-aurora-57-cluster-parameter-group"
}
You should specify the engine in both the cluster and the instance blocks. Without it the default aurora (MySQL 5.6-compatible) engine is assumed, which is why the error asks for the oscar5.6 parameter group family.
resource "aws_rds_cluster" "aurora" {
engine = "aurora-mysql"
...
}
resource "aws_rds_cluster_instance" "aurora_instance" {
engine = "aurora-mysql"
...
}
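For reference, a minimal sketch of a combination that should line up (engine aurora-mysql everywhere, parameter group family aurora-mysql5.7); the identifiers, instance class and credentials below are placeholders, not values from the question:
resource "aws_rds_cluster_parameter_group" "aurora_57_cluster_parameter_group" {
  name   = "test-aurora-57-cluster-parameter-group"
  family = "aurora-mysql5.7"
}

resource "aws_db_parameter_group" "aurora_db_57_parameter_group" {
  name   = "test-aurora-db-57-parameter-group"
  family = "aurora-mysql5.7"
}

resource "aws_rds_cluster" "default" {
  cluster_identifier              = "test-aurora-57"
  engine                          = "aurora-mysql" # without this the default aurora (5.6) engine is assumed
  master_username                 = "root"
  master_password                 = "change-me"    # placeholder
  db_cluster_parameter_group_name = "${aws_rds_cluster_parameter_group.aurora_57_cluster_parameter_group.id}"
}

resource "aws_rds_cluster_instance" "cluster_instance" {
  identifier              = "test-aurora-57-instance"
  cluster_identifier      = "${aws_rds_cluster.default.id}"
  engine                  = "aurora-mysql"
  instance_class          = "db.r4.large" # example class supported by aurora-mysql
  db_parameter_group_name = "${aws_db_parameter_group.aurora_db_57_parameter_group.id}"
}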
I'm trying to create a Dataproc cluster in GCP using the Terraform resource google_dataproc_cluster, and I would like to enable the Component Gateway along with it. According to the documentation, the snippet below should be used:
cluster_config {
endpoint_config {
enable_http_port_access = "true"
}
}
Upon running terraform plan, I see the error "Error: Unsupported block type". I also tried using override_properties; in the GCP Dataproc console I can see that the property is enabled, but the Component Gateway is still disabled. Is there an issue with the block given in the Terraform documentation, and is there an alternative way to enable it?
software_config {
image_version = "${var.image_version}"
override_properties = {
"dataproc:dataproc.allow.zero.workers" = "true"
"dataproc:dataproc.enable_component_gateway" = "true"
}
}
Below is the error from running terraform apply.
Error: Unsupported block type
on main.tf line 35, in resource "google_dataproc_cluster" "dataproc_cluster":
35: endpoint_config {
Blocks of type "endpoint_config" are not expected here.
RESOURCE BLOCK:
resource "google_dataproc_cluster" "dataproc_cluster" {
name = "${var.cluster_name}"
region = "${var.region}"
graceful_decommission_timeout = "120s"
labels = "${var.labels}"
cluster_config {
staging_bucket = "${var.staging_bucket}"
/*endpoint_config {
enable_http_port_access = "true"
}*/
software_config {
image_version = "${var.image_version}"
override_properties = {
"dataproc:dataproc.allow.zero.workers" = "true"
"dataproc:dataproc.enable_component_gateway" = "true" /* Has Been Added as part of Component Gateway Enabled which is already enabled in the endpoint_config*/
}
}
gce_cluster_config {
// network = "${var.network}"
subnetwork = "${var.subnetwork}"
zone = "${var.zone}"
//internal_ip_only = true
tags = "${var.network_tags}"
service_account_scopes = [
"cloud-platform"
]
}
master_config {
num_instances = "${var.master_num_instances}"
machine_type = "${var.master_machine_type}"
disk_config {
boot_disk_type = "${var.master_boot_disk_type}"
boot_disk_size_gb = "${var.master_boot_disk_size_gb}"
num_local_ssds = "${var.master_num_local_ssds}"
}
}
}
depends_on = [google_storage_bucket.dataproc_cluster_storage_bucket]
timeouts {
create = "30m"
delete = "30m"
}
}
Below is the snippet that worked for me to enable component gateway in GCP
provider "google-beta" {
project = "project_id"
}
resource "google_dataproc_cluster" "dataproc_cluster" {
name = "clustername"
provider = google-beta
region = us-east1
graceful_decommission_timeout = "120s"
cluster_config {
endpoint_config {
enable_http_port_access = "true"
}
}
This issue is discussed in this Git thread.
You can enable the Component Gateway in Cloud Dataproc by using the google-beta provider both for the Dataproc cluster resource and in the root Terraform configuration.
Sample configuration:
# Terraform configuration goes here
provider "google-beta" {
project = "my-project"
}
resource "google_dataproc_cluster" "mycluster" {
provider = "google-beta"
name = "mycluster"
region = "us-central1"
graceful_decommission_timeout = "120s"
labels = {
foo = "bar"
}
...
...
}
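For completeness, a minimal sketch that combines the google-beta provider with the endpoint_config block; project ID, cluster name and region are placeholders:
provider "google-beta" {
  project = "my-project"
}

resource "google_dataproc_cluster" "mycluster" {
  provider = "google-beta"
  name     = "mycluster"
  region   = "us-central1"

  cluster_config {
    endpoint_config {
      enable_http_port_access = "true"
    }
  }
}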
I'm trying to set up Azure Kubernetes Service with Terraform, deploying the 'Azure Voting' app.
I'm using the code below; however, I keep getting an "Internal Server Error" from the Load Balancer. Any idea what is going wrong here?
The Load Balancer to endpoint (pod) mapping seems to be configured correctly, so I am not sure what is missing here.
main.tf
provider "azurerm" {
features {}
}
data "azurerm_kubernetes_cluster" "aks" {
name = "kubernetescluster"
resource_group_name = "myResourceGroup"
}
provider "kubernetes" {
host = data.azurerm_kubernetes_cluster.aks.kube_config[0].host
client_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
client_key = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
resource "kubernetes_namespace" "azurevote" {
metadata {
annotations = {
name = "azurevote-annotation"
}
labels = {
mylabel = "azurevote-value"
}
name = "azurevote"
}
}
resource "kubernetes_service" "example" {
metadata {
name = "terraform-example"
}
spec {
selector = {
app = kubernetes_pod.example.metadata.0.labels.app
}
session_affinity = "ClientIP"
port {
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
resource "kubernetes_pod" "example" {
metadata {
name = "terraform-example"
labels = {
app = "azure-vote-front"
}
}
spec {
container {
image = "mcr.microsoft.com/azuredocs/azure-vote-front:v1"
name = "example"
}
}
}
variables.tf
variable "prefix" {
type = string
default = "ab"
description = "A prefix used for all resources in this example"
}
It seems that your infrastructure setup is OK; the only problem is the application itself. You create only the front-end app, and you need to create the backend app too.
You can see the deployment examples here.
You can also see here the exception you get when you run the front end without the backend.
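For illustration, a minimal sketch of the missing backend, assuming the layout of the standard azure-vote sample (a Redis container exposed through a service named azure-vote-back, which the front-end image looks up via its REDIS environment variable); the image tag and names are taken from that sample and may need adjusting:
resource "kubernetes_pod" "backend" {
  metadata {
    name = "azure-vote-back"
    labels = {
      app = "azure-vote-back"
    }
  }
  spec {
    container {
      # Image from the upstream azure-vote sample; adjust if yours differs.
      image = "mcr.microsoft.com/oss/bitnami/redis:6.0.8"
      name  = "azure-vote-back"
      env {
        name  = "ALLOW_EMPTY_PASSWORD"
        value = "yes"
      }
    }
  }
}

resource "kubernetes_service" "backend" {
  metadata {
    name = "azure-vote-back"
  }
  spec {
    selector = {
      app = kubernetes_pod.backend.metadata.0.labels.app
    }
    port {
      port        = 6379
      target_port = 6379
    }
  }
}
The front-end pod then also needs a REDIS environment variable pointing at azure-vote-back (an env block inside its container), as in the upstream sample manifest.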
As the very first step of my release process I run the following Terraform code:
resource "azurerm_automation_account" "automation_account" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "${local.automation_account_prefix}-${each.key}"
location = each.key
resource_group_name = each.value.name
sku_name = "Basic"
tags = {
environment = "development"
}
}
The automation accounts are created as expected and I can see them in the Azure portal.
I also have Terraform code that creates a couple of Windows VMs; each VM creation is accompanied by the following:
resource "azurerm_virtual_machine_extension" "dsc" {
name = "DevOpsDSC"
virtual_machine_id = var.vm_id
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.83"
settings = <<SETTINGS_JSON
{
"configurationArguments": {
"RegistrationUrl": "${var.dsc_server_endpoint}",
"NodeConfigurationName": "${var.dsc_config}",
"ConfigurationMode": "${var.dsc_mode}",
"ConfigurationModeFrequencyMins": 15,
"RefreshFrequencyMins": 30,
"RebootNodeIfNeeded": false,
"ActionAfterReboot": "continueConfiguration",
"AllowModuleOverwrite": true
}
}
SETTINGS_JSON
protected_settings = <<PROTECTED_SETTINGS_JSON
{
"configurationArguments": {
"RegistrationKey": {
"UserName": "PLACEHOLDER_DONOTUSE",
"Password": "${var.dsc_primary_access_key}"
}
}
}
PROTECTED_SETTINGS_JSON
}
The result is the following: a VM extension is created for each VM, and its status says that provisioning succeeded.
For the next step I run the following Terraform code:
resource "azurerm_automation_dsc_configuration" "iswebserver" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "iswebserver"
resource_group_name = each.value.name
automation_account_name = data.terraform_remote_state.ops.outputs.automation_account[each.key].name
location = each.key
content_embedded = "configuration iswebserver {}"
}
resource "azurerm_automation_dsc_nodeconfiguration" "iswebserver" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "iswebserver.localhost"
resource_group_name = each.value.name
automation_account_name = data.terraform_remote_state.ops.outputs.automation_account[each.key].name
depends_on = [azurerm_automation_dsc_configuration.iswebserver]
content_embedded = file("${path.cwd}/iswebserver.mof")
}
The mof file content is the following
/*
#TargetNode='IsWebServer'
#GeneratedBy=P120bd0
#GenerationDate=02/25/2021 17:33:16
#GenerationHost=D-MJ05UA54
*/
instance of MSFT_RoleResource as $MSFT_RoleResource1ref
{
ResourceID = "[WindowsFeature]IIS";
IncludeAllSubFeature = True;
Ensure = "Present";
SourceInfo = "D:\\DSC\\testconfig.ps1::5::9::WindowsFeature";
Name = "Web-Server";
ModuleName = "PsDesiredStateConfiguration";
ModuleVersion = "1.0";
ConfigurationName = "TestConfig";
};
instance of OMI_ConfigurationDocument
{
Version="2.0.0";
MinimumCompatibleVersion = "1.0.0";
CompatibleVersionAdditionalProperties= {"Omi_BaseResource:ConfigurationName"};
Author="P120bd0";
GenerationDate="02/25/2021 17:33:16";
GenerationHost="D-MJ05UA54";
Name="TestConfig";
};
After running the code I got the following result:
The configuration is created as expected. Clicking on the configuration entry in the UI grid leads to the following,
meaning that the node configuration is created as well. My expectation was that for each VM I would see a node configured to run the configuration provided in the MOF file, but the Nodes UI shows no nodes.
So I tried to configure a node manually to connect all the pieces together,
and that fails with the following.
So I am totally confused. On the one hand, there is azurerm_virtual_machine_extension, which allows you to create the extension and bind it to the automation account. In addition, there are azurerm_automation_dsc_configuration and azurerm_automation_dsc_nodeconfiguration, which let you create the configuration and the node configuration. But the bottom line is that I cannot connect all those dots to get the node registered.
Just to confirm that the configuration is valid, I created an additional VM without using azurerm_virtual_machine_extension, and I was able to successfully add this VM to the created node configuration.
The problem was the DSC configuration parameter of azurerm_virtual_machine_extension (the value that ends up as NodeConfigurationName in the settings, var.dsc_config above): it needs to be the same as the name property of the azurerm_automation_dsc_nodeconfiguration resource.
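A minimal sketch of the fix, assuming the variable names from the question; the default below is hypothetical and simply makes the two values line up:
# var.dsc_config feeds "NodeConfigurationName" in the extension settings, so it
# must equal azurerm_automation_dsc_nodeconfiguration.iswebserver.name.
variable "dsc_config" {
  description = "DSC node configuration name registered in the automation account"
  default     = "iswebserver.localhost"
}
With the names matching, the VM extension registers against the existing node configuration and the node should then appear under Nodes in the automation account.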
I am trying to provision an AWS Elastic Beanstalk environment using Terraform. Below is the .tf file I have written:
resource "aws_s3_bucket" "default" {
bucket = "textX"
}
resource "aws_s3_bucket_object" "default" {
bucket = "${aws_s3_bucket.default.id}"
key = "test-app-version-tf--dev"
source = "somezipFile.zip"
}
resource "aws_elastic_beanstalk_application_version" "default" {
name = "tf-test-version-label"
application = "tf-test-name"
description = "application version created by terraform"
bucket = "${aws_s3_bucket.default.id}"
key = "${aws_s3_bucket_object.default.id}"
}
resource "aws_elastic_beanstalk_application" "tftest" {
name = "tf-test-name"
description = "tf-test-name"
}
resource "aws_elastic_beanstalk_environment" "tfenvtest" {
description = "test"
application = "${aws_elastic_beanstalk_application.tftest.name}"
name = "synchronicity-dev"
cname_prefix = "ops-api-opstest"
solution_stack_name = "64bit Amazon Linux 2 v5.0.1 running Node.js 12"
tier = "WebServer"
wait_for_ready_timeout = "20m"
}
According to the official documentation, I am supplying all the required arguments to the aws_elastic_beanstalk_environment resource.
However, upon executing the script, I get the following error:
Error waiting for Elastic Beanstalk Environment (e-39m6ygzdxh) to
become ready: 2 errors occurred:
* 2020-05-13 12:59:02.206 +0000 UTC (e-3xff9mzdxh) : You must specify an Instance Profile for your EC2 instance in this region. See Managing Elastic Beanstalk Instance Profiles for more information.
* 2020-05-13 12:59:02.319 +0000 UTC (e-3xff9mzdxh) : Failed to launch environment.
This worked for me: add the setting below to your aws_elastic_beanstalk_environment resource:
resource "aws_elastic_beanstalk_environment" "tfenvtest" {
....
....
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "aws-elasticbeanstalk-ec2-role"
}
}
More information on general settings here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html
Info on the aws_elastic_beanstalk_environment: https://www.terraform.io/docs/providers/aws/r/elastic_beanstalk_environment.html
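If the aws-elasticbeanstalk-ec2-role instance profile referenced above does not already exist in your account, it can also be created with Terraform. A minimal sketch, using the usual default role/profile names and attaching only the web-tier managed policy (add worker-tier or Docker policies as needed):
resource "aws_iam_role" "eb_ec2_role" {
  name = "aws-elasticbeanstalk-ec2-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "eb_web_tier" {
  role       = "${aws_iam_role.eb_ec2_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier"
}

resource "aws_iam_instance_profile" "eb_ec2_profile" {
  name = "aws-elasticbeanstalk-ec2-role"
  role = "${aws_iam_role.eb_ec2_role.name}"
}
The IamInstanceProfile setting value can then reference "${aws_iam_instance_profile.eb_ec2_profile.name}" instead of a hard-coded string.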
resource "aws_elastic_beanstalk_application" "tftest" {
name = "name****"
description = "create elastic beanstalk applications"
tags = {
"Name" = "name*****"
}
}
resource "aws_elastic_beanstalk_environment" "env" {
application = aws_elastic_beanstalk_application.tftest.name
solution_stack_name = "64bit Amazon Linux 2 v3.3.15 running PHP 8.0"
name = "env-application"
tier = "WebServer"
setting {
namespace = "aws:ec2:vpc"
name = "VPCId"
value = "mention vpc id"
}
setting {
namespace = "aws:ec2:vpc"
name = "Subnets"
value = "mention subnet id"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "aws-elasticbeanstalk-ec2-role"
}
}
I have a question relating to AWS RDS cluster and instance creation.
Environment
We recently experimented with:
Terraform v0.11.11
provider.aws v1.41.0
Background
Creating some AWS RDS databases. Our goal was that in some environments (e.g. staging) we may run fewer instances than in others (e.g. production). With this in mind, and not wanting to maintain totally different Terraform files per environment, we decided to specify the database resources just once and use a variable for the number of instances, set in our staging.tf and production.tf files respectively.
One more potential "quirk" of our setup is that the VPC in which the subnets exist is not defined in Terraform; it already existed via manual creation in the AWS console, so it is referenced through a data source, while the subnets for the RDS are specified in Terraform. Again this is dynamic, in the sense that in some environments we might have 3 subnets (1 in each AZ), whereas in others we might have only 2. To achieve this we used iteration, as shown below:
Structure
|-/environments
-/staging
-staging.tf
-/production
-production.tf
|- /resources
- database.tf
Example Environment Variables File
dev.tf
terraform {
backend "s3" {
bucket = "my-bucket-dev"
key = "terraform"
region = "eu-west-1"
encrypt = "true"
acl = "private"
dynamodb_table = "terraform-state-locking"
}
version = "~> 0.11.8"
}
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
version = "~> 1.33"
allowed_account_ids = ["XXX"]
}
module "main" {
source = "../../resources"
vpc_name = "test"
test_db_name = "terraform-test-db-dev"
test_db_instance_count = 1
test_db_backup_retention_period = 7
test_db_backup_window = "00:57-01:27"
test_db_maintenance_window = "tue:04:40-tue:05:10"
test_db_subnet_count = 2
test_db_subnet_cidr_blocks = ["10.2.4.0/24", "10.2.5.0/24"]
}
We came to this module-based structure for environment isolation mainly due to these discussions:
https://github.com/hashicorp/terraform/issues/18632#issuecomment-412247266
https://github.com/hashicorp/terraform/issues/13700
https://www.terraform.io/docs/state/workspaces.html#when-to-use-multiple-workspaces
Our Issue
Initial resource creation works fine, our subnets are created, the database cluster starts up.
Our issues start when we subsequently run terraform plan or terraform apply (with no changes to the files), at which point we see interesting things like:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
module.main.aws_rds_cluster.test_db (new resource required)
id: "terraform-test-db-dev" => (forces new resource)
availability_zones.#: "3" => "1" (forces new resource)
availability_zones.1924028850: "eu-west-1b" => "" (forces new resource)
availability_zones.3953592328: "eu-west-1a" => "eu-west-1a"
availability_zones.94988580: "eu-west-1c" => "" (forces new resource)
and
module.main.aws_rds_cluster_instance.test_db (new resource required)
id: "terraform-test-db-dev" => (forces new resource)
cluster_identifier: "terraform-test-db-dev" => "${aws_rds_cluster.test_db.id}" (forces new resource)
Something about the way we are approaching this appears to be causing terraform to believe that the resource has changed to such an extent that it must destroy the existing resource and create a brand new one.
Config
variable "aws_availability_zones" {
description = "Run the EC2 Instances in these Availability Zones"
type = "list"
default = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
}
variable "test_db_name" {
description = "Name of the RDS instance, must be unique per region and is provided by the module config"
}
variable "test_db_subnet_count" {
description = "Number of subnets to create, is provided by the module config"
}
resource "aws_security_group" "test_db_service" {
name = "${var.test_db_service_user_name}"
vpc_id = "${data.aws_vpc.vpc.id}"
}
resource "aws_security_group" "test_db" {
name = "${var.test_db_name}"
vpc_id = "${data.aws_vpc.vpc.id}"
}
resource "aws_security_group_rule" "test_db_ingress_app_server" {
security_group_id = "${aws_security_group.test_db.id}"
...
source_security_group_id = "${aws_security_group.test_db_service.id}"
}
variable "test_db_subnet_cidr_blocks" {
description = "Cidr block allocated to the subnets"
type = "list"
}
resource "aws_subnet" "test_db" {
count = "${var.test_db_subnet_count}"
vpc_id = "${data.aws_vpc.vpc.id}"
cidr_block = "${element(var.test_db_subnet_cidr_blocks, count.index)}"
availability_zone = "${element(var.aws_availability_zones, count.index)}"
}
resource "aws_db_subnet_group" "test_db" {
name = "${var.test_db_name}"
subnet_ids = ["${aws_subnet.test_db.*.id}"]
}
variable "test_db_backup_retention_period" {
description = "Number of days to keep the backup, is provided by the module config"
}
variable "test_db_backup_window" {
description = "Window during which the backup is done, is provided by the module config"
}
variable "test_db_maintenance_window" {
description = "Window during which the maintenance is done, is provided by the module config"
}
data "aws_secretsmanager_secret" "test_db_master_password" {
name = "terraform/db/test-db/root-password"
}
data "aws_secretsmanager_secret_version" "test_db_master_password" {
secret_id = "${data.aws_secretsmanager_secret.test_db_master_password.id}"
}
data "aws_iam_role" "rds-monitoring-role" {
name = "rds-monitoring-role"
}
resource "aws_rds_cluster" "test_db" {
cluster_identifier = "${var.test_db_name}"
engine = "aurora-mysql"
engine_version = "5.7.12"
# can only request to deploy in AZ's where there is a subnet in the subnet group.
availability_zones = "${slice(var.aws_availability_zones, 0, var.test_db_instance_count)}"
database_name = "${var.test_db_schema_name}"
master_username = "root"
master_password = "${data.aws_secretsmanager_secret_version.test_db_master_password.secret_string}"
preferred_backup_window = "${var.test_db_backup_window}"
preferred_maintenance_window = "${var.test_db_maintenance_window}"
backup_retention_period = "${var.test_db_backup_retention_period}"
db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
storage_encrypted = true
kms_key_id = "${data.aws_kms_key.kms_rds_key.arn}"
deletion_protection = true
enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
vpc_security_group_ids = ["${aws_security_group.test_db.id}"]
final_snapshot_identifier = "test-db-final-snapshot"
}
variable "test_db_instance_count" {
description = "Number of instances to create, is provided by the module config"
}
resource "aws_rds_cluster_instance" "test_db" {
count = "${var.test_db_instance_count}"
identifier = "${var.test_db_name}"
cluster_identifier = "${aws_rds_cluster.test_db.id}"
availability_zone = "${element(var.aws_availability_zones, count.index)}"
instance_class = "db.t2.small"
db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
monitoring_interval = 60
engine = "aurora-mysql"
engine_version = "5.7.12"
monitoring_role_arn = "${data.aws_iam_role.rds-monitoring-role.arn}"
tags {
Name = "test_db-${count.index}"
}
}
My question is: is there a way to achieve this so that Terraform does not try to recreate the resources (i.e. ensure that the availability zones of the cluster and the ID of the instance do not change each time we run Terraform)?
It turns out that simply removing the explicit availability zone definitions from the aws_rds_cluster and aws_rds_cluster_instance makes this issue go away, and everything so far appears to work as expected. See also https://github.com/terraform-providers/terraform-provider-aws/issues/7307#issuecomment-457441633
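A minimal sketch of the trimmed resources, assuming the same variables as above; the availability_zones / availability_zone arguments are simply omitted and derived from the subnet group instead (suffixing the instance identifier with count.index is a hypothetical tweak to keep identifiers unique when the count is greater than one):
resource "aws_rds_cluster" "test_db" {
  cluster_identifier   = "${var.test_db_name}"
  engine               = "aurora-mysql"
  engine_version       = "5.7.12"
  # availability_zones omitted: AWS derives them from the subnet group, and
  # re-declaring a different list forces the cluster to be replaced.
  master_username        = "root"
  master_password        = "${data.aws_secretsmanager_secret_version.test_db_master_password.secret_string}"
  db_subnet_group_name   = "${aws_db_subnet_group.test_db.name}"
  vpc_security_group_ids = ["${aws_security_group.test_db.id}"]
}

resource "aws_rds_cluster_instance" "test_db" {
  count                = "${var.test_db_instance_count}"
  identifier           = "${var.test_db_name}-${count.index}"
  cluster_identifier   = "${aws_rds_cluster.test_db.id}"
  instance_class       = "db.t2.small"
  engine               = "aurora-mysql"
  engine_version       = "5.7.12"
  db_subnet_group_name = "${aws_db_subnet_group.test_db.name}"
  # availability_zone omitted for the same reason.
}
If you prefer to keep the explicit list, a lifecycle block with ignore_changes = ["availability_zones"] on the cluster is another way to stop the perpetual diff, but dropping the argument is the simpler route.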