I have 5 Function Apps on one App Service Plan (Premium).
One VNet with one subnet already exists (it is not to be created), which is to be used for the Function Apps' VNet integration along with storage.
When I try azurerm_app_service_virtual_network_swift_connection, it cannot integrate all of the function apps.
Is there any solution for this, or perhaps a link to a code example?
https://discuss.hashicorp.com/t/multiple-functionapp-on-single-appserviceplan-vnet-integration/43022?u=mukteswarp
Here is a sample template for function app deployment via Terraform. Please compare your template against it to see if there are any differences:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/app_service_virtual_network_swift_connection#example-usage-with-function-app
If you have 5 function apps that are already created, you should pull them in as data and iterate over each one to create the association.
This is an example of how I would do it for a list of 1, 2, 3, 4, 5, or 110 function apps, where linux_app_name is the list of names (type list). The reason I put it in a local is that I check beforehand whether it is empty. Then, for each app service ID, I reference the azurerm linux function app by each key's id, and of course the subnet you're interested in.
Make sure each value in the resource block is in the correct format. This serves purely as an example:
resource "azurerm_app_service_virtual_network_swift_connection" "funclinux" {
for_each = toset(local.linux_app_name)
app_service_id = azurerm_linux_function_app.linuxfunction[each.key].id
subnet_id = subnetid
}
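If the function apps already exist outside this configuration (as in the question), a data-driven variant of the resource above might look like the following sketch; the resource group, VNet, subnet, and app names here are hypothetical placeholders:
locals {
  linux_app_name = ["func-app-1", "func-app-2", "func-app-3", "func-app-4", "func-app-5"]
}

data "azurerm_subnet" "integration" {
  # existing subnet; for regional VNet integration it must be delegated to Microsoft.Web/serverFarms
  name                 = "existing-subnet"
  virtual_network_name = "existing-vnet"
  resource_group_name  = "existing-rg"
}

data "azurerm_linux_function_app" "existing" {
  for_each            = toset(local.linux_app_name)
  name                = each.key
  resource_group_name = "existing-rg"
}

resource "azurerm_app_service_virtual_network_swift_connection" "funclinux" {
  for_each       = toset(local.linux_app_name)
  app_service_id = data.azurerm_linux_function_app.existing[each.key].id
  subnet_id      = data.azurerm_subnet.integration.id
}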
I'm not sure there is a right answer for this, and it will vary per scenario, but I am curious because there isn't much documentation I can find for a code-first Azure Bicep infrastructure. Most examples show how to create a resource within a resource group, or how to use a module to define scope and deploy to another resource group, but what if you're trying to do more?
Let's take the following scenario: 2 subscriptions (1 for prod, 1 for dev & QA), each with 20 resource groups containing multiple different resources, managed within a CI/CD pipeline, plus 3 environments: prod, qa, and dev. How would you go about this? I can think of a few approaches, but nothing sticks out as the best way to do it; maybe I'm missing something.
CI/CD portion:
Let's assume:
az account set --subscription (set our sub)
az group create --name --location (create the resource group if it doesn't exist)
az deployment group create --name --resource-group --template-file --parameters (read from our files to deploy to a resource group)
You could pass an array of resource groups to loop through to create the resource group if it doesn't exist.
You could have the resource group list in a parameters file that you read from and do the same thing as above.
You could create a step for every resource group and the resources inside of it. (Seems excessive?)
Bicep Portion:
Bicep restrictions: to specify scope (a resource group in our scenario), we'd have to use modules when dealing with multiple resource groups, or have a step for each resource group with a main.bicep file for the different resource groups/resources.
You could create a folder structure for each resource group and the resources inside of it, each with a main.bicep, but that would mean a lot of extra deploy steps (seems excessive?).
You could have one main.bicep file and a folder structure that uses a lot of modules to specify your scope, reading the resource group, resource variables, etc. from an environment parameters.json file.
You could create a folder for each environment, then create each resource group and its resources inside of it, not using a parameters.json but using params in each file instead, since they would be specific to each environment.
1 final issue:
Lastly, let's say you want to add a step before the deployment of resources that uses bicep what-if to check which resources will be updated or deleted (this is pretty important!). Last I checked, there was an issue where what-if does not work for Bicep modules, so you wouldn't get the luxury of knowing what changes would be made prior to a deployment. That is a pretty big safety net to lose, so would you want to scrap the module strategy altogether?
What would be the best way to tackle something like this while keeping it readable enough for non-experts to hop in and work on it? I would lean towards a folder structure using modules and reading from an environment parameters.json, but I'm not convinced that's the best way, especially if what-if isn't fully working for Bicep modules.
IMO this depends a lot on the scenario, topology, permissions, etc. The way I would start thinking about it is that you want an "environment" that varies a bit between dev/test and prod. That environment has multiple resource groups and a dedicated subscription per environment.
In that case, I would use a single Bicep "project" (e.g. main.bicep with modules) and vary the deployment using parameter files (for dev/test vs. prod). The project would lay down everything needed for the environment (think greenfield). The main.bicep file is a subscription-scoped deployment that creates the RGs and all the resources needed. Oversimplified example:
targetScope = 'subscription'

param sqlAdminUsername string
param keyVaultResourceGroup string
param keyVaultName string
param keyVaultSecretName string
param location string = deployment().location

resource kv 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  scope: resourceGroup(subscription().subscriptionId, keyVaultResourceGroup)
  name: keyVaultName
}

resource sqlResourceGroup 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: 'shared-sql'
  location: location
}

resource webResourceGroup 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: 'shared-web'
  location: location
}

module sqlDeployment 'modules/shared-sql.bicep' = {
  scope: resourceGroup(sqlResourceGroup.name)
  name: 'sqlDeployment'
  params: {
    sqlAdminUsername: sqlAdminUsername
    sqlAdminPassword: kv.getSecret(keyVaultSecretName)
    location: location
  }
}

module webDeployment 'modules/shared-web.bicep' = {
  scope: resourceGroup(webResourceGroup.name)
  name: 'webDeployment'
  params: {
    location: location
  }
}
A single template plus modules creates the RGs, a SQL Server (via module), and an App Service plan with an admin website (also via module). You can then parameterize whatever you want for each environment.
re: what-if - what-if will skip evaluation of a module if that module has a parameter that is an output of another module. If you don't pass outputs between modules, the module will be evaluated by what-if. The sample above does not pass outputs; often you don't need to, because the information is already known by the parent (i.e. main.bicep), but sometimes you can't avoid it - YMMV.
Once you have the template designed in such a way, the pipeline is really straightforward: just deploy the template to the desired subscription.
That help?
I'm trying to build a hub-spoke topology in Azure.
Hub VNET - includes Azure Firewall with default rules; has its own TF state file.
Spoke VNET - includes other Azure resources (Blobs, Key Vaults, etc.). There are many Spoke VNETs (one per project/environment), each with its own TF state file.
Problem: after deploying each Spoke VNET there is a randomly generated Blob Storage name, which I need to pass on and use to update an Azure Firewall rule in the other TF configuration.
Question: is it possible to do this automatically?
Possible solution: I terraform apply the Spoke VNET and expose the randomly generated blob storage name as an output, pass it to a .sh script which updates the .tfvars file used by the Hub VNET with the Firewall, then terraform apply the Hub VNET configuration.
I would also have to do this in reverse when destroying any of the Spoke VNETs. But this is not very elegant. Is there a better way? Maybe using Terragrunt hooks?
In the case of Terragrunt, you can easily pass outputs from one module (e.g. a Spoke VNET) as inputs to the modules that depend on it (e.g. the Hub VNET). The code snippet would look like the following:
hub-vnet/terragrunt.hcl:
dependency "spoke-a-vnet" {
config_path = "../spoke-a-vnet"
mock_ouptuts = {
blob-name = ""
}
}
dependency "spoke-b-vnet" {
config_path = "../spoke-b-vnet"
mock_ouptuts = {
blob-name = ""
}
}
inputs {
blob-names = [dependency.spoke-a-vnet.outputs.blob-name, dependency.spoke-v-vnet.outputs.blob-name]
}
And then, in your Hub VNET module, you configure the behavior so that a blob-name is skipped if it equals "".
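A minimal sketch of that skip behavior on the hub side, assuming each spoke exposes its storage account name as an output called blob-name; the firewall, resource group, and rule names below are hypothetical:
variable "blob-names" {
  type    = list(string)
  default = []
}

locals {
  # compact() drops the "" mock values, so spokes that are destroyed or not yet applied are skipped
  effective_blob_names = compact(var.blob-names)
}

# Hypothetical application rule collection on the hub firewall, built only from real blob names
resource "azurerm_firewall_application_rule_collection" "spoke_blobs" {
  count               = length(local.effective_blob_names) > 0 ? 1 : 0
  name                = "allow-spoke-blobs"
  azure_firewall_name = azurerm_firewall.hub.name        # hypothetical hub firewall
  resource_group_name = azurerm_resource_group.hub.name  # hypothetical hub resource group
  priority            = 100
  action              = "Allow"

  dynamic "rule" {
    for_each = local.effective_blob_names
    content {
      name             = "allow-${rule.value}"
      source_addresses = ["*"]
      target_fqdns     = ["${rule.value}.blob.core.windows.net"]

      protocol {
        port = 443
        type = "Https"
      }
    }
  }
}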
During the Spoke removal operation, you'll need to run two steps:
run destroy for the relevant Spoke VNET module
run apply afterwards (effectively a re-apply) for the Hub VNET module, where the mock value "" takes effect as the blob-storage input and is therefore skipped (based on the conditional approach described above).
I am trying to obtain (via Terraform) the DNS name of a dynamically created VPC endpoint using a data resource, but the problem I am facing is that the service name is not known until the resources have been created. See the notes below.
Is there any way of retrieving this information? A hard-coded service name just doesn't work for automation.
e.g. this will not work, as the service_name is dynamic:
resource "aws_transfer_server" "sftp_lambda" {
count = local.vpc_lambda_enabled
domain = "S3"
identity_provider_type = "AWS_LAMBDA"
endpoint_type = "VPC"
protocols = ["SFTP"]
logging_role = var.loggingrole
function = var.lambda_idp_arn[count.index]
endpoint_details = {
security_group_ids = var.securitygroupids
subnet_ids = var.subnet_ids
vpc_id = var.vpc_id
}
tags = {
NAME = "tf-test-transfer-server"
ENV = "test"
}
}
data "aws_vpc_endpoint" "vpce" {
count = local.vpc_lambda_enabled
vpc_id = var.vpc_id
service_name = "com.amazonaws.transfer.server.c-001"
depends_on = [aws_transfer_server.sftp_lambda]
}
output "transfer_server_dnsentry" {
value = data.aws_vpc_endpoint.vpce.0.dns_entry[0].dns_name
}
Note: the VPC endpoint was created automatically by an AWS Transfer (SFTP) server resource configured with an endpoint type of VPC (not VPC_ENDPOINT, which is now deprecated). I had no control over the naming of the endpoint service; it was all created in the background.
Minimum AWS provider version required: 3.69.0.
Here is an example CloudFormation script to set up an SFTP transfer server using Lambda as the IDP.
This will create the VPC endpoint automatically.
So my aim here is to output the DNS name of the auto-created VPC endpoint using Terraform, if at all possible.
Example setup in CloudFormation
Data source: aws_vpc_endpoint
Resource: aws_transfer_server
I had a response from HashiCorp Terraform Support on this, and this is what they suggested:
You can get the service name of the SFTP-server-created VPC endpoint by referencing the following exported attribute of the vpc_endpoint_service resource [a].
NOTE: There are certain setups that cause AWS to create additional resources outside of what you configured. The AWS SFTP transfer service is one of them. This behavior is outside Terraform's control and is due to how AWS designed the service.
You can bring that VPC endpoint back under Terraform's control, however, by importing the VPC endpoint it creates on your behalf AFTER the transfer service has been created, via the VPC endpoint ID [b].
If you want more ideas for pulling the service name from your current AWS setup, feel free to check out this example [c].
Hope that helps! Thank you.
[a] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint_service#service_name
[b] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint#import
[c] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint#gateway-load-balancer-endpoint-type
There is a way forward, like I shared earlier with the imports, but it is not going to be fully automated, unfortunately.
Optionally, you can use a provisioner [1] and the aws ec2 describe-vpc-endpoint-services --service-names command [2] to get the service names you need.
I'm afraid that's the last workaround I can provide. As explained in our doc here [3], as much as we'd like to, Terraform isn't able to solve all use cases.
[1] https://www.terraform.io/language/resources/provisioners/remote-exec
[2] https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-vpc-endpoint-services.html
[3] https://www.terraform.io/language/resources/provisioners/syntax
I've finally found the solution:
data "aws_vpc_endpoint" "transfer_server_vpce" {
count = local.is_enabled
vpc_id = var.vpc_id
filter {
name = "vpc-endpoint-id"
values = ["${aws_transfer_server.transfer_server[0].endpoint_details[0].vpc_endpoint_id}"]
}
}
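The DNS name can then be output from this data source, the same way as in the original attempt, for example:
output "transfer_server_dnsentry" {
  value = data.aws_vpc_endpoint.transfer_server_vpce[0].dns_entry[0].dns_name
}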
I want to create an Azure Event Grid subscription using Terraform.
resource "azurerm_eventgrid_system_topic_event_subscription" "function_app" {
name = "RunOnBlobUploaded"
system_topic = azurerm_eventgrid_system_topic.function_app.name
resource_group_name = azurerm_resource_group.rg.name
included_event_types = [
"Microsoft.Storage.BlobCreated"
]
subject_filter {
subject_begins_with = "/blobServices/default/containers/input"
}
webhook_endpoint {
url = "https://thumbnail-generator-function-app.azurewebsites.net/runtime/webhooks/blobs?functionName=Create-Thumbnail&code=<BLOB-EXTENSION-KEY>"
}
}
By following this doc, I successfully deployed it and it works. However, the webhook_endpoint URL needs <BLOB-EXTENSION-KEY>, which is hardcoded right now and which I found in the portal.
In order not to commit a secret to GitHub, I want to get this value by reference, ideally using Terraform.
According to my research, there seems to be no way in Terraform to reference that value.
The closest thing is the azurerm_function_app_host_keys data source in Terraform. However, it doesn't cover the blobs_extension key!
Is there any good way to reference blobs_extension in Terraform without a hardcoded value?
Thanks in advance!
If TF does not support it yet, you can create your own External Data Source which uses the Azure CLI or SDK to get the value you want and returns it to your TF configuration for further use.
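A minimal sketch of that external data source approach, assuming the Azure CLI is logged in where Terraform runs and that the blobs_extension key shows up under systemKeys in the output of az functionapp keys list; the function app and resource group names are placeholders:
# Hypothetical external data source that shells out to the Azure CLI.
# The JMESPath query reshapes the CLI output into the flat {"key": "..."} map
# that the external data source protocol expects.
data "external" "blobs_extension_key" {
  program = [
    "bash", "-c",
    "az functionapp keys list --name thumbnail-generator-function-app --resource-group my-rg --query '{key: systemKeys.blobs_extension}' -o json"
  ]
}

locals {
  # Use this in webhook_endpoint.url instead of the hardcoded <BLOB-EXTENSION-KEY>
  blobs_extension_key = data.external.blobs_extension_key.result.key
}
Note that the retrieved key still ends up in the Terraform state, so the state itself needs to be protected.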
When working with the Azure SDK, I see that ListByResourceGroup() and GetByResourceGroup() return the same value. What is the difference between them? If I want to get all the information about the VMs in my subscription, which function should I use?
T GetByResourceGroup(string resourceGroupName, string name)
IEnumerable<T> ListByResourceGroup(string resourceGroupName)
So, if you use GetByResourceGroup, you need to provide two parameters: the resource group name and the name of the VM, and you will get only that target VM.
However, ListByResourceGroup only requires the resource group name, and you will get all the VMs in that resource group.
If you want to get all the VMs, you can use the following as a sample:
static void Main(string[] args)
{
    // Authenticate from an auth file and use the default subscription
    IAzure azure = Azure.Authenticate("path_to_auth_file").WithDefaultSubscription();

    // List every VM in the subscription and print its name
    var vms = azure.VirtualMachines.List();
    foreach (var vm in vms)
    {
        Console.WriteLine(vm.Name);
    }
}
I just output the VM's name here; you can do whatever you need inside the foreach.
Just to add to the existing answer: in the Azure SDK, LIST always gets all resources (usually all resources of the same type) from a subscription or resource group, whereas GET retrieves a specific resource (if it exists), hence you need to specify the resource group AND the resource name.
I'd be inclined to think this is universal. If you think about it for a second, those verbs in English are meant for exactly that.