How do I connect an Azure SQL database to a Function App in Terraform?

I am trying to connect a SQL database to a Function App on Azure.
I tried using the "storage_connection_string" key in Terraform, but it is still not working.
Could someone please help with this issue?

I have a Function App deployed in Azure that uses Azure SQL as well as a storage container. This is how it works for me. My Terraform configuration is module-based, so my modules for the database and storage accounts are separate, and they pass the required connection strings to my function app module:
resource "azurerm_function_app" "functions" {
name = "fcn-${var.environment}
resource_group_name = "${var.resource_group}"
location = "${var.resource_location}"
app_service_plan_id = "${var.appservice_id}"
storage_connection_string = "${var.storage_prim_conn_string}"
https_only = true
connection_string {
name = "SqlAzureDbConnectionString"
type = "SQLAzure"
value = "${var.fcn_connection_string}"
}
tags {
environment = "${var.environment}"
}
Just remember to check that you have the module outputs as well as the variables in place.
Hope that helps.
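Since the answer relies on values passed between modules, here is a minimal sketch of how that wiring might look. The module layout and output names (module.sql, module.storage, sql_connection_string, primary_connection_string) are hypothetical, not from the original post:

# modules/sql/outputs.tf (hypothetical module layout)
output "sql_connection_string" {
  sensitive = true
  value     = "Server=tcp:${azurerm_sql_server.sql.fully_qualified_domain_name},1433;Initial Catalog=${azurerm_sql_database.db.name};User ID=${var.sql_admin};Password=${var.sql_password};"
}

# root main.tf: pass the module outputs into the function app module
module "functions" {
  source                   = "./modules/functions"
  environment              = "${var.environment}"
  storage_prim_conn_string = "${module.storage.primary_connection_string}"
  fcn_connection_string    = "${module.sql.sql_connection_string}"
}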

Related

Dynamic workspace selection when importing state from S3

I am using the below Terraform data source to import shared state from S3. Terraform is giving me the error "No stored state was found for the given workspace in the given backend". I am expecting Terraform to pick up the workspace "dev-use1", as I have set the workspace using terraform workspace select "dev-use1".
data "terraform_remote_state" "shared_jobs_state" {
backend = "s3"
config = {
bucket = "cicd-backend"
key = "analyticsjobs.tfstate"
workspace_key_prefix = "pipeline/v2/db"
region = "us-east-1"
}
}
Version: Terraform v1.1.9 on darwin_arm64
After enabling debug logging by setting TF_LOG="DEBUG", I can see that the S3 API call returns a 404 error, and from the request XML I can see that the prefix is wrong.
As a workaround I made the changes below to the data source.
I am not sure this is the recommended way of doing it, but it works. The docs are not very clear on this point: https://www.terraform.io/language/state/remote-state-data
data "terraform_remote_state" "shared_jobs_state" {
backend = "s3"
config = {
bucket = "cicd-backend"
key = "pipeline/v2/db/${terraform.workspace}/analyticsjobs.tfstate"
region = "us-east-1"
}
}
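As an alternative to baking the workspace into the key, the terraform_remote_state data source also takes a top-level workspace argument that tells the backend which workspace's state to read. A sketch, untested against this particular backend layout:

data "terraform_remote_state" "shared_jobs_state" {
  backend   = "s3"
  workspace = terraform.workspace

  config = {
    bucket               = "cicd-backend"
    key                  = "analyticsjobs.tfstate"
    workspace_key_prefix = "pipeline/v2/db"
    region               = "us-east-1"
  }
}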

Google Cloud CloudSQL Instance Fails To Create using Terraform Provider With Error "Per-Product Per-Project Service Account is not found"

We're trying to deploy a Cloud SQL (MSSQL) instance using the google-beta provider with a private IP. After roughly four to five minutes it fails with the error "Error waiting for Create Instance: Per-Product Per-Project Service Account is not found".
I am able to create a Cloud SQL instance using the service account both via the Cloud Shell CLI and manually in the Console.
Has anyone encountered this before, and can anyone provide insight into what may be going wrong?
If you look at the errored-out resource in the Console, it appears to have been mostly created, but this error is shown.
resource "google_sql_database_instance" "cloud_sql_instance" {
provider = google-beta
name = var.cloud_sql_instance_name
region = var.gcp_region
database_version = var.cloud_sql_version
root_password = "wearenothardcodingplaceholdertest"
deletion_protection = var.delete_protection_enabled
project = var.gcp_project
settings {
tier = var.cloud_sql_compute_tier
availability_type = var.cloud_sql_availibility_type
collation = var.cloud_sql_collation
disk_autoresize = var.cloud_sql_auto_disk_resize
disk_type = var.cloud_sql_disk_type
active_directory_config {
domain = var.active_directory_domain
}
backup_configuration {
enabled = var.cloud_sql_backup_enabled
start_time = var.cloud_sql_backup_starttime
point_in_time_recovery_enabled = var.cloud_sql_pitr_enabled
transaction_log_retention_days = var.cloud_sql_log_retention_days
backup_retention_settings {
retained_backups = var.cloud_sql_backup_retention_number
retention_unit = var.cloud_sql_backup_retention_unit
}
}
ip_configuration {
ipv4_enabled = var.cloud_sql_backup_public_ip
private_network = data.google_compute_network.vpc_connection.self_link
require_ssl = var.cloud_sql_backup_require_ssl
allocated_ip_range = var.cloud_sql_ip_range_name
}
maintenance_window {
day = var.cloud_sql_patch_day
hour = var.cloud_sql_patch_hour
update_track = "stable"
}
}
}
I just ran into this issue. You need to create a Service Identity for sqladmin.googleapis.com.
resource "google_project_service_identity" "cloudsql_sa" {
provider = google-beta
project = "cool-project"
service = "sqladmin.googleapis.com"
}
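To be safe about ordering, you can also make the instance wait for the identity; a minimal sketch reusing the resource names from the snippets above:

resource "google_sql_database_instance" "cloud_sql_instance" {
  # ... same arguments as in the question ...

  # Ensure the sqladmin per-product, per-project service account
  # exists before Cloud SQL tries to use it.
  depends_on = [google_project_service_identity.cloudsql_sa]
}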

Login error for SQL Server admin during terraform plan

I'm building an Azure infrastructure with Terraform. I need to create a specific DB user for each database on the server. To create the users I use the provider betr-io/mssql, with the following script:
resource "mssql_login" "sql_login" {
server {
host = "${var.sql_server_name}.database.windows.net"
# host = azurerm_mssql_server.sqlserver.fully_qualified_domain_name
login {
username = var.sql_admin_user
password = var.sql_admin_psw
}
}
login_name = var.sql_dbuser_username
password = var.sql_dbuser_password
depends_on = [azurerm_mssql_server.sqlserver, azurerm_mssql_database.sqldb]
}
resource "mssql_user" "sql_user" {
server {
host = "${var.sql_server_name}.database.windows.net"
# host = azurerm_mssql_server.sqlserver.fully_qualified_domain_name
login {
username = var.sql_admin_user
password = var.sql_admin_psw
}
}
username = var.sql_dbuser_username
password = var.sql_dbuser_password
database = var.sql_db_name
roles = var.sql_dbuser_roles
depends_on = [azurerm_mssql_server.sqlserver, azurerm_mssql_database.sqldb, mssql_login.sql_login]
}
What terraform plan gives me is this error:
Error: unable to read user [sqldb-dev].[dbuser]: login error: mssql: Login failed for user 'usr-admin'.
with mssql_user.sql_user,
on main.tf line 346, in resource "mssql_user" "sql_user":
346: resource "mssql_user" "sql_user" {
I can't understand where the problem might come from; has anyone had a similar experience?
For completeness, the databases are hosted in an elastic pool instance.
The only solution I have found is to destroy the users and recreate them along with the databases.
Unfortunately, I haven't found a way to add DevOps to the SQL Server whitelist.
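Not from the thread, but one common way to let Azure-hosted pipeline agents through is the special 0.0.0.0 rule, which corresponds to the "Allow Azure services and resources to access this server" setting. A hedged sketch, assuming the azurerm_mssql_server resource from the question:

resource "azurerm_mssql_firewall_rule" "allow_azure_services" {
  name      = "AllowAzureServices"
  server_id = azurerm_mssql_server.sqlserver.id

  # The 0.0.0.0 - 0.0.0.0 range is Azure's marker for allowing
  # traffic from Azure services, not a literal IP range.
  start_ip_address = "0.0.0.0"
  end_ip_address   = "0.0.0.0"
}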

Local source of Terraform AWS provider

What specific syntax needs to be used in the example below in order for Terraform to source the AWS provider from a given path in the local file system instead of requesting a cloud copy from the remote Terraform Registry?
provider "aws" {
region = var._region
access_key = var.access_key
secret_key = var.secret_access_key
}
Something like src=C:\path\to\terraform\aws\provider\binary
I recall Mitchell Hashimoto explaining that this is a new feature during HashiConf, but I cannot seem to find the documentation.
You should be able to set it in the CLI configuration as described in the documentation:
provider_installation {
  filesystem_mirror {
    path    = "/usr/share/terraform/providers"
    include = ["example.com/*/*"]
  }
  direct {
    exclude = ["example.com/*/*"]
  }
}
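Note that provider_installation belongs in the CLI configuration file (~/.terraformrc on Unix, terraform.rc under %APPDATA% on Windows), not in your .tf files. For the AWS provider specifically, a sketch with assumed paths and patterns:

provider_installation {
  filesystem_mirror {
    path    = "/usr/share/terraform/providers"
    include = ["registry.terraform.io/hashicorp/aws"]
  }
  direct {
    exclude = ["registry.terraform.io/hashicorp/aws"]
  }
}

The mirror directory must follow the registry layout, e.g. the provider binary unpacked under /usr/share/terraform/providers/registry.terraform.io/hashicorp/aws/<version>/<os_arch>/.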

Two Terraform resources referencing each other

The goal is to set up two services: an Azure Function App and a Cosmos DB. The Cosmos DB should allow traffic only from the Function App, and the Function App should use the Cosmos DB's key to access it.
Relevant Terraform code:
resource "azurerm_cosmosdb_account" "cosmosdb_account" {
...
ip_range_filter = azurerm_function_app.function_app.possible_outbound_ip_addresses
}
resource "azurerm_function_app" "function_app" {
...
app_settings = {
key = azurerm_cosmosdb_account.cosmosdb_account.primary_master_key
}
}
Error
Error: Cycle: azurerm_cosmosdb_account.cosmosdb_account, azurerm_function_app.function_app
Is there any way to do this without null_resources or weird hacks?
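One hedged sketch of a cycle-free design, not from the thread: derive the Cosmos DB endpoint from the account name (a plain variable) instead of the resource, give the Function App a managed identity instead of the key, and grant data access through a third resource that can depend on both sides. The variable names are assumptions, and the role definition ID shown is assumed to be the built-in "Cosmos DB Built-in Data Contributor":

resource "azurerm_function_app" "function_app" {
  ...
  identity {
    type = "SystemAssigned"
  }
  app_settings = {
    # Deterministic endpoint built from a variable; no reference to the
    # cosmos resource, so only cosmos -> function_app remains and the
    # cycle is gone.
    COSMOS_ENDPOINT = "https://${var.cosmosdb_account_name}.documents.azure.com:443/"
  }
}

# Third resource depends on both sides without creating a cycle.
resource "azurerm_cosmosdb_sql_role_assignment" "function_data_access" {
  resource_group_name = var.resource_group_name
  account_name        = azurerm_cosmosdb_account.cosmosdb_account.name
  role_definition_id  = "${azurerm_cosmosdb_account.cosmosdb_account.id}/sqlRoleDefinitions/00000000-0000-0000-0000-000000000002"
  principal_id        = azurerm_function_app.function_app.identity[0].principal_id
  scope               = azurerm_cosmosdb_account.cosmosdb_account.id
}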
