Two Terraform resources referencing each other - Azure

The goal is to set up two services, an Azure Function App and a Cosmos DB account. The Cosmos DB should allow traffic only from the function app, and the function app should use the Cosmos DB key to access it.
Relevant Terraform code:
resource "azurerm_cosmosdb_account" "cosmosdb_account" {
...
ip_range_filter = azurerm_function_app.function_app.possible_outbound_ip_addresses
}
resource "azurerm_function_app" "function_app" {
...
app_settings = {
key = azurerm_cosmosdb_account.cosmosdb_account.primary_master_key
}
}
Error
Error: Cycle: azurerm_cosmosdb_account.cosmosdb_account, azurerm_function_app.function_app
Is there any way to do this without null_resources or weird hacks?
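One way to break the cycle without hacks is to stop passing the key through Terraform at all and let the Function App resolve it at runtime via a Key Vault reference. A minimal sketch under that assumption; the Key Vault resource azurerm_key_vault.kv, the secret name, and the managed identity/access policy the app needs to read the secret are not in the question and are assumptions:
resource "azurerm_key_vault_secret" "cosmos_key" {
  # Depends on the Cosmos account; the Function App does NOT depend on this
  # secret, only on the vault's URI, so the resource graph stays acyclic.
  name         = "cosmos-primary-key"
  value        = azurerm_cosmosdb_account.cosmosdb_account.primary_master_key
  key_vault_id = azurerm_key_vault.kv.id # hypothetical Key Vault resource
}

resource "azurerm_function_app" "function_app" {
  # ...
  app_settings = {
    # Resolved by the App Service platform at runtime, not by Terraform.
    # Requires a managed identity with "Get" access to the vault's secrets.
    key = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault.kv.vault_uri}secrets/cosmos-primary-key/)"
  }
}
The Cosmos account's ip_range_filter can then keep referencing the function app's outbound IPs, since the dependency in the other direction is gone.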

Related

Is it possible to use 1Password for Terraform provider credentials?

I'm trying to set up a Terraform configuration for Sonatype Nexus (among other things). Rather than providing my passwords directly, I want to get them from my 1Password system. The advantage of doing this is that this Terraform config will live alongside my broader infrastructure configuration, which includes the setup of the 1Password Connect deployment.
My infrastructure CI/CD therefore already has environment variables set for the 1password credentials out of necessity, and it would be nice to make those the only variables I would need for anything. Hence trying to access this password from 1Password.
Below is my Terraform setup. As you can see, it gets the Nexus admin password from 1Password and tries to use it in the provider. However, when I run this Terraform script, it fails with a 401 response from Nexus when trying to create the blobstore.
To be honest, the 1Password Terraform documentation leaves much to be desired. I don't even know if I can configure a provider with data from another provider to begin with.
terraform {
  backend "kubernetes" {
    secret_suffix = "nexus-state"
    config_path   = "~/.kube/config"
  }

  required_providers {
    nexus = {
      source  = "datadrivers/nexus"
      version = "1.21.0"
    }
    onepassword = {
      source  = "1Password/onepassword"
      version = "1.1.4"
    }
  }
}
provider "onepassword" {
url = "https://my-1password"
token = var.onepassword_token
}
data "onepassword_item" "nexus_admin" {
vault = "VAULT_UUID"
uuid = "ITEM_UUID"
}
provider "nexus" {
insecure = true
password = data.onepassword_item.nexus_admin.password
username = "admin"
url = "https://my-nexus"
}
resource "nexus_blobstore_file" "npm_private" {
name = "npm-private"
path = "/nexus-data/npm-private"
}
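Configuring one provider from another provider's data source is allowed in Terraform, so the 401 more likely means the wrong value is being read (for example, the 1Password item has several fields, or the field read isn't the one mapped to password). A throwaway debugging sketch, assuming the data source exposes username and password for this item (nonsensitive() needs Terraform 0.15+; delete these outputs once checked):
# Debugging only: surface what the 1Password data source actually returned.
output "nexus_admin_username" {
  value = data.onepassword_item.nexus_admin.username
}

output "nexus_admin_password" {
  # nonsensitive() deliberately unmasks the sensitive value for inspection.
  value = nonsensitive(data.onepassword_item.nexus_admin.password)
}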

Login error for admin SQL Server during terraform plan

I'm building an Azure infrastructure with Terraform. I need to create a specific DB user for each database on the server. To create the users I use the "betr-io/mssql" provider with the following script:
resource "mssql_login" "sql_login" {
server {
host = "${var.sql_server_name}.database.windows.net"
# host = azurerm_mssql_server.sqlserver.fully_qualified_domain_name
login {
username = var.sql_admin_user
password = var.sql_admin_psw
}
}
login_name = var.sql_dbuser_username
password = var.sql_dbuser_password
depends_on = [azurerm_mssql_server.sqlserver, azurerm_mssql_database.sqldb]
}
resource "mssql_user" "sql_user" {
server {
host = "${var.sql_server_name}.database.windows.net"
# host = azurerm_mssql_server.sqlserver.fully_qualified_domain_name
login {
username = var.sql_admin_user
password = var.sql_admin_psw
}
}
username = var.sql_dbuser_username
password = var.sql_dbuser_password
database = var.sql_db_name
roles = var.sql_dbuser_roles
depends_on = [azurerm_mssql_server.sqlserver, azurerm_mssql_database.sqldb, mssql_login.sql_login]
}
Running terraform plan gives me this error:
Error: unable to read user [sqldb-dev].[dbuser]: login error: mssql: Login failed for user 'usr-admin'.

  with mssql_user.sql_user,
  on main.tf line 346, in resource "mssql_user" "sql_user":
 346: resource "mssql_user" "sql_user" {
I can't understand where the problem might come from; has anyone had a similar experience?
For completeness of information, the databases are hosted in an elastic pool instance.
The only solution I have found is to destroy the users and recreate them with the databases.
Unfortunately I haven't found a way to add the Azure DevOps agents to the SQL Server whitelist.
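For the whitelist part, the usual workaround for Microsoft-hosted agents is the special 0.0.0.0 firewall rule, which means "allow access from Azure services" rather than a literal address. A sketch against the azurerm provider, reusing the azurerm_mssql_server.sqlserver resource from the depends_on above:
# The 0.0.0.0-0.0.0.0 range is Azure's convention for "Allow Azure services
# and resources to access this server"; it is broader than one agent's IP.
resource "azurerm_mssql_firewall_rule" "allow_azure_services" {
  name             = "AllowAzureServices"
  server_id        = azurerm_mssql_server.sqlserver.id
  start_ip_address = "0.0.0.0"
  end_ip_address   = "0.0.0.0"
}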

Terraform backend SignatureDoesNotMatch

I'm pretty new to Terraform, but I'm stuck trying to set up a Terraform backend that uses S3.
INIT:
terraform init -backend-config="access_key=XXXXXXX" -backend-config="secret_key=XXXXX"
TERRAFORM BACKEND:
resource "aws_dynamodb_table" "terraform_state_lock" {
name = "terraform-lock"
read_capacity = 5
write_capacity = 5
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
resource "aws_s3_bucket" "bucket" {
bucket = "tfbackend"
}
terraform {
backend "s3" {
bucket = "tfbackend"
key = "terraform"
region = "eu-west-1"
dynamodb_table = "terraform-lock"
}
}
ERROR:
Error: error using credentials to get account ID: error calling sts:GetCallerIdentity: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
status code: 403, request id: xxxx-xxxx
I really am at a loss because these same credentials are used for my Terraform infrastructure and are working perfectly fine. The IAM user on AWS also has permissions for both DynamoDB & S3.
Am I supposed to tell Terraform to use a different authentication method?
Remove .terraform/ and try again, and really double-check your credentials.
I regenerated the access keys and now it works.
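A related cleanup: instead of passing keys on the command line (where a stray character or trailing whitespace silently corrupts the signature), you can keep them in a partial backend configuration file. A sketch; the file name backend.hcl is an arbitrary choice:
# backend.hcl -- partial backend configuration, kept out of version control.
# Any attribute omitted from the backend "s3" block can be supplied here.
access_key = "XXXXXXX"
secret_key = "XXXXX"
Then initialize with terraform init -backend-config=backend.hcl.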

How do I connect an Azure SQL database to a function app in Terraform

I am trying to connect a SQL database to a function app on Azure.
I tried using the "storage_connection_string" key in Terraform, but it is still not working.
Could someone please help with this issue?
I have a Function App deployed into Azure that's also using Azure SQL as well as a storage container. This is how it works for me. My Terraform configuration is module-based, so my modules for the database and storage accounts are separate, and they pass the required connection strings to my function app module:
resource "azurerm_function_app" "functions" {
name = "fcn-${var.environment}
resource_group_name = "${var.resource_group}"
location = "${var.resource_location}"
app_service_plan_id = "${var.appservice_id}"
storage_connection_string = "${var.storage_prim_conn_string}"
https_only = true
connection_string {
name = "SqlAzureDbConnectionString"
type = "SQLAzure"
value = "${var.fcn_connection_string}"
}
tags {
environment = "${var.environment}"
}
Just remember to check you have the module outputs as well as the variables in place.
Hope that helps.
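For completeness, this is roughly what that output/variable plumbing looks like, in the same 0.11-era syntax as the answer; every module, resource, and variable name here is an assumption:
# modules/database/outputs.tf -- expose the connection string to callers.
output "fcn_connection_string" {
  value = "Server=tcp:${azurerm_sql_server.sql.fully_qualified_domain_name},1433;Initial Catalog=${azurerm_sql_database.db.name};User ID=${var.sql_admin};Password=${var.sql_password};"
}

# modules/function_app/variables.tf -- the matching input variable.
variable "fcn_connection_string" {}

# Root main.tf -- wire the database module's output into the function app module.
module "function_app" {
  source                = "./modules/function_app"
  fcn_connection_string = "${module.database.fcn_connection_string}"
}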

How to use terraform output as input variable of another terraform template

Is there any way I can use one Terraform template's output as another Terraform template's input?
For example: I have a Terraform template which creates an ELB, and I have another Terraform template which is going to create an auto scaling group that needs the ELB information as an input variable.
I know I can use a shell script to grep and feed in the ELB information, but I'm looking for a Terraform way of doing this.
Have you tried using remote state to populate your second template?
Declare it like this:
resource "terraform_remote_state" "your_state" {
backend = "s3"
config {
bucket = "${var.your_bucket}"
region = "${var.your_region}"
key = "${var.your_state_file}"
}
}
And then you should be able to pull out your resource directly like this:
your_elb = "${terraform_remote_state.your_state.output.your_output_resource}"
If this doesn't work for you, have you tried implementing your ELB in a module and then just using the output?
https://github.com/terraform-community-modules/tf_aws_elb is a good example of how to structure the module.
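If you go the module route, the consumption side is just an output reference. A sketch with assumed names (whether the module actually exports elb_name depends on its outputs.tf):
module "elb" {
  source = "github.com/terraform-community-modules/tf_aws_elb"
  # ... module inputs elided ...
}

# Classic ELBs attach to an autoscaling group by name.
resource "aws_autoscaling_group" "asg" {
  # ...
  load_balancers = ["${module.elb.elb_name}"]
}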
Looks like in newer versions of Terraform you'd access the output var like this
your_elb = "${data.terraform_remote_state.your_state.your_output_resource}"
Everything else is the same; only the way you reference it changed.
The question is about ELB, but I will give an example with S3, since it is less to write.
If you don't know how to store terraform state on AWS, read the article.
Let's suppose you have two independent projects: project-1, project-2. They are located in two different directories (two different repositories)!
Terraform file /tmp/project-1/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b1"
  acl    = "private"
}

// Output. It will be available in s3://multi-terraform-project-state-bucket/p1.tfstate
output "bucket_name_p1" {
  value = aws_s3_bucket.main_bucket.bucket
}

// Store terraform state on AWS. The S3 bucket and DynamoDB table should be created before running terraform
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p1.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"   // AWS region
}
You run terraform init and terraform apply.
Then you move on to the terraform file /tmp/project-2/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b2"
  acl    = "private"

  tags = {
    // Get the S3 bucket name from another terraform state file. In this case it is s3://multi-terraform-project-state-bucket/p1.tfstate
    p1-bucket = data.terraform_remote_state.state1.outputs.bucket_name_p1
  }
}

// Get data from another state file
data "terraform_remote_state" "state1" {
  backend = "s3"
  config = {
    bucket = "multi-terraform-project-state-bucket"
    key    = "p1.tfstate"
    region = "eu-central-1"
  }
}

// Store terraform state on AWS. The S3 bucket and DynamoDB table should be created before running terraform
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p2.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"   // AWS region
}
You run terraform init and terraform apply.
Now check the tags on my-epic-test-b2. You will find the name of the bucket from project-1 there.
When you are integrating Terraform with Jenkins, you can simply define a variable in the Jenkinsfile you are creating. Suppose you want to launch an EC2 instance using Terraform and a Jenkinsfile. When you need the public IP address of the instance, you can use this command inside your Jenkinsfile:
script {
  def public_ip = sh(script: "terraform output public_ip | cut -d '\"' -f2", returnStdout: true).trim()
}
This strips the surrounding quotes and saves only the IP address in the public_ip variable. For this to work, you have to define an output block in the Terraform script that exposes the public IP.
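For reference, the matching output block could look like this, where aws_instance.web is a hypothetical resource name; on Terraform 0.15+ you can also run terraform output -raw public_ip and skip the cut entirely:
# Exposes the instance's public IP so `terraform output public_ip` can read it.
output "public_ip" {
  value = aws_instance.web.public_ip
}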
