How to deploy a Kind: V2 connection using Terraform in Azure?

I am trying to deploy a Queue API connection with Kind: V2, because the runtime URL I need is only exposed when the connection is Kind: V2. Right now it gets deployed as V1.
resource "azurerm_api_connection" "azurequeuesconnect" {
name = "azurequeues"
resource_group_name = data.azurerm_resource_group.resource_group.name
managed_api_id = data.azurerm_managed_api.azurequeuesmp.id
display_name = "azurequeues"
parameter_values = {
"storageaccount" = data.azurerm_storage_account.blobStorageAccount.name
"sharedkey" = data.azurerm_storage_account.blobStorageAccount.primary_access_key
}
tags = {
"environment-id" = "testtag"
}
}

As far as I know, this is currently not possible with the azurerm_api_connection resource. See the GitHub issue.
I had a similar problem, and as a workaround I used an ARM template together with the azurerm_resource_group_template_deployment Terraform resource.
Here is a reference:
https://github.com/microsoft/AzureTRE/blob/main/templates/shared_services/airlock_notifier/terraform/airlock_notifier.tf#L58

This can be done by deploying an ARM template through Terraform's template deployment resource.
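A minimal sketch of that approach, reusing the data sources from the question (the template body is an assumption based on the Microsoft.Web/connections schema, so verify the apiVersion and properties against your environment). Setting kind to V2 inside the template is exactly what the native resource cannot do:
resource "azurerm_resource_group_template_deployment" "azurequeues_v2" {
  name                = "azurequeues-v2-connection"
  resource_group_name = data.azurerm_resource_group.resource_group.name
  deployment_mode     = "Incremental"

  # The embedded ARM template creates a Microsoft.Web/connections resource
  # with kind V2, which azurerm_api_connection does not expose.
  template_content = jsonencode({
    "$schema"      = "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#"
    contentVersion = "1.0.0.0"
    resources = [{
      type       = "Microsoft.Web/connections"
      apiVersion = "2016-06-01"
      name       = "azurequeues"
      location   = data.azurerm_resource_group.resource_group.location
      kind       = "V2"
      properties = {
        displayName = "azurequeues"
        api = {
          id = data.azurerm_managed_api.azurequeuesmp.id
        }
        parameterValues = {
          storageaccount = data.azurerm_storage_account.blobStorageAccount.name
          sharedkey      = data.azurerm_storage_account.blobStorageAccount.primary_access_key
        }
      }
    }]
  })
}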

Automating Permissions for Databricks SQL Tables or Views

I am trying to automate the setup of Databricks SQL.
I have done it from the UI and it works, so this is a natural next step.
The one thing I am unsure about is how to automate the granting of access to SQL tables and/or views using REST. I am trying to avoid a Notebook job.
I have seen the Microsoft documentation and downloaded the specification, but when I open it in Postman I see permissions/objectType/object id, and the only sample there is for "queries". It seems to be applicable only to Queries and Dashboards. Can't this be done for Tables and Views? There is no further documentation that I could see.
So, basically, how do I do something like
grant select on tablename to group via the REST API without using a Notebook job? I am interested to see if I can just call a REST endpoint from our release pipeline (Azure DevOps).
As of right now, there is no REST API for setting Table ACLs. It is, however, available as part of Unity Catalog, which is currently in public preview.
If you can't use Unity Catalog yet, you can still automate the assignment of Table ACLs by using the databricks_sql_permissions resource of the Databricks Terraform Provider - it sets permissions by executing SQL commands on a cluster, but this is hidden from the administrator.
This is an extension to Alex Ott's answer, giving some details on what I tried to make the databricks_sql_permissions resource work for Databricks SQL, as was the OP's original question. All of this assumes that one cannot or does not want to use Unity Catalog, which follows a different permission model and has a different Terraform resource, namely databricks_grants.
Alex's answer refers to table ACLs, which had me surprised, as the OP (and I) were looking for Databricks SQL object security and not table ACLs in the classic workspace. But from what I understand so far, the two are closely interlinked: the Terraform provider addresses table ACLs in the classic (non-SQL) workspace, and these are mirrored to SQL objects in the SQL workspace. It follows that if you want to manage SQL permissions in Databricks SQL via Terraform, you need to enable table ACLs in the classic workspace (in the admin console). If you cannot enable table ACLs for whatever reason, the only other option I see is SQL scripts in the SQL workspace, with the disadvantage of having to explicitly write out grants and revokes. A potential alternative is to throw away all permissions and then run only grant statements, but this has other negative implications.
So here is my approach:
Enable table ACLs in the classic workspace (this has no implications in the classic workspace if you don't use table ACL-enabled clusters, AFAIK)
Use the azurerm_databricks_workspace resource to register the Databricks Azure infrastructure
Use the databricks_sql_permissions resource to manage table ACLs and thus SQL object security
Below is a minimal example that worked for me and may inspire others. It certainly does not follow Terraform configuration guidance but is merely meant as a minimal illustration.
NOTE: Due to a Terraform issue I had to ignore changes to the attribute public_network_access_enabled; see GitHub issue: "azurerm_databricks_workspace" forces replacement on public_network_access_enabled while it never existed #15222
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
    databricks = {
      source  = "databricks/databricks"
      version = "=1.4.0"
    }
  }
  backend "azurerm" {
    resource_group_name  = "tfstate"
    storage_account_name = "tfsa"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}

provider "azurerm" {
  features {}
}

provider "databricks" {
  azure_workspace_resource_id = "/subscriptions/mysubscriptionid/resourceGroups/myresourcegroup/providers/Microsoft.Databricks/workspaces/mydatabricksworkspace"
}

resource "azurerm_databricks_workspace" "adbtf" {
  customer_managed_key_enabled          = false
  infrastructure_encryption_enabled     = false
  load_balancer_backend_address_pool_id = null
  location                              = "westeurope"
  managed_resource_group_name           = "databricks-rg-myresourcegroup-abcdefg12345"
  managed_services_cmk_key_vault_key_id = null
  name                                  = "mydatabricksworkspace"
  network_security_group_rules_required = null
  public_network_access_enabled         = null
  resource_group_name                   = "myresourcegroup"
  sku                                   = "premium"

  custom_parameters {
    machine_learning_workspace_id                        = null
    nat_gateway_name                                     = "nat-gateway"
    no_public_ip                                         = false
    private_subnet_name                                  = null
    private_subnet_network_security_group_association_id = null
    public_ip_name                                       = "nat-gw-public-ip"
    public_subnet_name                                   = null
    public_subnet_network_security_group_association_id  = null
    storage_account_name                                 = "dbstorageabcde1234"
    storage_account_sku_name                             = "Standard_GRS"
    virtual_network_id                                   = null
    vnet_address_prefix                                  = "10.139"
  }

  tags = {
    creator = "me"
  }

  lifecycle {
    ignore_changes = [
      public_network_access_enabled
    ]
  }
}

data "databricks_current_user" "me" {}

resource "databricks_sql_permissions" "database_test" {
  database = "test"

  privilege_assignments {
    principal  = "myuser@mydomain.com"
    privileges = ["USAGE"]
  }
}

resource "databricks_sql_permissions" "table_test_student" {
  database = "test"
  table    = "student"

  privilege_assignments {
    principal  = "myuser@mydomain.com"
    privileges = ["SELECT", "MODIFY"]
  }
}

output "adb_id" {
  value = azurerm_databricks_workspace.adbtf.id
}
NOTE: Serge Smertin (Terraform Databricks maintainer) mentioned in GitHub issue "[DOC] databricks_sql_permissions Resource to be deprecated? #1215" that the databricks_sql_permissions resource is deprecated, but I could not find any indication of that in the docs, only a recommendation to use another resource when leveraging Unity Catalog, which I'm not doing.
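For anyone who does move to Unity Catalog later, a minimal sketch of that other resource, databricks_grants, might look like the following (the three-level main.test.student table name and the principal are placeholder assumptions):
resource "databricks_grants" "table_test_student" {
  # Unity Catalog addresses tables by a three-level name: catalog.schema.table.
  table = "main.test.student"

  grant {
    principal  = "myuser@mydomain.com"
    privileges = ["SELECT", "MODIFY"]
  }
}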

Specify proxy for helm repository in terraform for azure china

I am trying to deploy a Helm chart via Terraform to Azure Kubernetes Service in China. The problem is that I cannot pull images from k8s.gcr.io/ingress-nginx. I need to specify a proxy as described in https://github.com/Azure/container-service-for-azure-china/blob/master/aks/README.md#22-container-registry-proxy, but I don't know how to do this via Terraform. In West Europe my resource simply looks like
resource "helm_release" "nginx_ingress" {
name = "ingress-nginx"
chart = "ingress-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
namespace = kubernetes_namespace.nginx_ingress.metadata[0].name
set {
name = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/azure-load-balancer-resource-group"
value = azurerm_public_ip.nginx_ingress_pip.resource_group_name
}
set {
name = "controller.service.loadBalancerIP"
value = azurerm_public_ip.nginx_ingress_pip.ip_address
}
}
How do I get the proxy settings in there? Any help is greatly appreciated.
AFAIK, the Helm provider for Terraform does not support proxy settings yet.
There is a pull request being discussed in this thread: https://github.com/hashicorp/terraform-provider-helm/issues/552
Until this feature is implemented, you may consider temporary workarounds such as making a copy of the chart in your Terraform repo and referencing it from the Helm provider, as sketched below.
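A minimal sketch of that workaround (the charts/ path is a placeholder; it assumes the chart has been vendored into the module directory, where image references inside it can be edited to point at a reachable registry):
resource "helm_release" "nginx_ingress" {
  name      = "ingress-nginx"
  # Reference the local copy of the chart instead of the upstream repository.
  chart     = "${path.module}/charts/ingress-nginx"
  namespace = kubernetes_namespace.nginx_ingress.metadata[0].name
}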
It turns out I had some problems figuring out how to modify the Helm chart in the correct way, and the solution was not exactly a proxy configuration but rather pointing the image pull directly at a different repository. This works:
resource "helm_release" "nginx_ingress" {
name = "ingress-nginx"
chart = "ingress-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
namespace = kubernetes_namespace.nginx_ingress.metadata[0].name
set {
name = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/azure-load-balancer-resource-group"
value = azurerm_public_ip.nginx_ingress_pip.resource_group_name
}
set {
name = "controller.service.loadBalancerIP"
value = azurerm_public_ip.nginx_ingress_pip.ip_address
}
set {
name = "controller.image.repository"
value = "k8sgcr.azk8s.cn/ingress-nginx/controller"
}
}
Thank you anyways for your input!

How to create Virtual servers in IBM cloud Terraform with for loop?

I have a virtual server in IBM Cloud created using Terraform:
resource "ibm_is_instance" "vsi1" {
name = "${local.BASENAME}-vsi1"
vpc = ibm_is_vpc.vpc.id
zone = local.ZONE
keys = [data.ibm_is_ssh_key.ssh_key_id.id]
image = data.ibm_is_image.ubuntu.id
profile = "cc1-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
How do I create multiple virtual servers (vsi1, vsi2, vsi3, vsi4, vsi5) with a Terraform for loop?
For the full code, please refer to the IBM Cloud Terraform getting-started tutorial.
You may not require a for or for_each loop to achieve what you need. A simple count will do. Once you add count (the number of instances), all you need to do is use count.index in the VSI name.
resource "ibm_is_instance" "vsi" {
count = 4
name = "${local.BASENAME}-vsi-${count.index}"
vpc = ibm_is_vpc.vpc.id
zone = local.ZONE
keys = [data.ibm_is_ssh_key.ssh_key_id.id]
image = data.ibm_is_image.ubuntu.id
profile = "cc1-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
This will create instances named vsi-0, vsi-1, and so on (prefixed with local.BASENAME).
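If you specifically want the names vsi1 through vsi5, a for_each over a set is an alternative (a minimal sketch, assuming the same locals and data sources as above); unlike count, removing one name later does not renumber the remaining instances:
resource "ibm_is_instance" "vsi" {
  # Each key in the set becomes one instance, addressable by name in state.
  for_each = toset(["vsi1", "vsi2", "vsi3", "vsi4", "vsi5"])
  name     = "${local.BASENAME}-${each.key}"
  vpc      = ibm_is_vpc.vpc.id
  zone     = local.ZONE
  keys     = [data.ibm_is_ssh_key.ssh_key_id.id]
  image    = data.ibm_is_image.ubuntu.id
  profile  = "cc1-2x4"

  primary_network_interface {
    subnet          = ibm_is_subnet.subnet1.id
    security_groups = [ibm_is_security_group.sg1.id]
  }
}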

Attach Leaf interface to EPG on Cisco ACI with Terraform

I'm trying to create an EPG on Cisco ACI using Terraform. The EPG is created, but the leaf's interface isn't attached.
The Terraform syntax to attach a leaf interface is:
resource "aci_application_epg" "VLAN-616-EPG" {
...
relation_fv_rs_path_att = ["topology/pod-1/paths-103/pathep-[eth1/1]"]
...
}
It works when I do it manually through the ACI web interface or the REST API.
I don't believe this has been implemented yet. If you look in the code for the provider, there is no test for that attribute, and I found this line in the examples for EPGs. Both things lead me to believe it's not complete. Also, that particular item requires an encapsulation with VLAN/VXLAN or QinQ, so that would need to be included for this to work.
relation_fv_rs_path_att = ["testpathatt"]
Probably the best you can do is either make a direct REST call (aci_rest in the Terraform provider) or use an Ansible provider to create it (I'm investigating this now).
I asked Cisco support and they sent me this solution:
resource "aci_application_epg" "terraform-epg" {
application_profile_dn = "${aci_application_profile.terraform-app.id}"
name = "TerraformEPG1"
}
resource "aci_rest" "epg_path_relation" {
path = "api/node/mo/${aci_application_epg.terraform-epg.id}.json"
class_name = "fvRsPathAtt"
content = {
"encap":"vlan-907"
"tDn":"topology/pod-1/paths-101/pathep-[eth1/1]"
}
}
With the latest provider version, the solution is:
data "aci_physical_domain" "physdom" {
name = "phys"
}
resource "aci_application_epg" "on_prem_epg" {
application_profile_dn = aci_application_profile.on_prem_app.id
name = "db"
relation_fv_rs_dom_att = [data.aci_physical_domain.physdom.id]
}
resource "aci_epg_to_domain" "rs_on_prem_epg_to_physdom" {
application_epg_dn = aci_application_epg.on_prem_epg.id
tdn = data.aci_physical_domain.physdom.id
}
resource "aci_epg_to_static_path" "leaf_101_eth1_23" {
application_epg_dn = aci_application_epg.on_prem_epg.id
tdn = "topology/pod-1/paths-101/pathep-[eth1/23]"
encap = "vlan-1100"
}

Terraform doesn't build triton machine

I've taken my first steps into the world of Terraform and am trying to deploy infrastructure on Joyent Triton.
After setup, I wrote my first .tf (well, copied it from the examples) and ran terraform apply. All seems to go well; it doesn't break with errors, but it doesn't actually provision my container. I double-checked in the Triton web GUI and with "triton instance list". Nothing there.
Any ideas what's going on here?
provider "triton" {
account = "tralala"
key_id = "my-pub-key"
url = "https://eu-ams-1.api.joyentcloud.com"
}
resource "triton_machine" "test-smartos" {
name = "test-smartos"
package = "g4-highcpu-128M"
image = "842e6fa6-6e9b-11e5-8402-1b490459e334"
tags {
hello = "world"
role = "database"
}
cns {
services = ["web", "frontend"]
}
}
