I am new to Terraform and am using the template below to create an Azure App Service plan, an App Service, and an Application Insights instance together.
# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.98"
    }
  }
  required_version = ">= 1.1.6"
}

provider "azurerm" {
  features {}
}
resource "azurerm_application_insights" "appService-app_insights" {
name ="${var.prefix}-${var.App_Insights}"
location = var.Location
resource_group_name = var.ResourceGroup
application_type = "web" # Node.JS ,java
}
resource "azurerm_app_service" "appservice" {
name ="${var.prefix}-${var.appservice_name}"
location = var.Location
resource_group_name = var.ResourceGroup
app_service_plan_id = azurerm_app_service_plan.appserviceplan.id
https_only = true
site_config {
linux_fx_version = "NODE|10.14"
}
app_settings = {
# "SOME_KEY" = "some-value"
"APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.appService-app_insights.instrumentation_key
}
depends_on = [
azurerm_app_service_plan.appserviceplan,
azurerm_application_insights.appService-app_insights
]
}
# create the App Service Plan for the App Service hosting our website
resource "azurerm_app_service_plan" "appserviceplan" {
  name                = "${var.prefix}-${var.app_service_plan_name}"
  location            = var.Location
  resource_group_name = var.ResourceGroup
  kind                = "linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}
I am generating a variable.tf file at runtime, which is quite simple in this case:
variable "ResourceGroup" {
default = "TerraRG"
}
variable "Location" {
default = "westeurope"
}
variable "app_service_plan_name" {
default = "terra-asp"
}
variable "appservice_name" {
default = "terra-app"
}
variable "prefix" {
default = "pre"
}
variable "App_Insights" {
default = "terra-ai"
}
Everything is working fine up to this point.
Now I am trying to extend my infra, and I want to go with multiple App + App Service Plan + App Insights combinations, which might look like the JSON below:
{
  "_comment": "Web App Config",
  "webapps": [
    {
      "Appservice": "app1",
      "Appserviceplan": "asp1",
      "InstrumentationKey": "abc"
    },
    {
      "Appservice": "app2",
      "Appserviceplan": "asp2",
      "InstrumentationKey": "def"
    },
    {
      "Appservice": "app3",
      "Appserviceplan": "asp2",
      "InstrumentationKey": "def"
    }
  ]
}
How can I target such a resource creation? Should I create the App Service Plans and App Insights first, and then plan the creation of the Apps? What would be a better approach for this scenario? (One possible approach using for_each is sketched after my attempt below.)
Since app1, app2, and app3 are not globally unique, I have tried different names: testapprahuluni12345, testapp12346, and testapp12347.
main.tf
# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.98"
    }
  }
}

provider "azurerm" {
  features {}
}
resource "azurerm_application_insights" "appService-app_insights" {
name ="${var.prefix}-${var.App_Insights}"
location = var.Location
resource_group_name = var.ResourceGroup
application_type = "web" # Node.JS ,java
}
resource "azurerm_app_service_plan" "appserviceplan" {
count = length(var.app_service_plan_name)
name = var.app_service_plan_name[count.index]
location = var.Location
resource_group_name = var.ResourceGroup
kind ="linux"
reserved = true
sku {
tier = "Standard" #
size = "S1"
}
}
# create the App Services hosting our website
resource "azurerm_app_service" "appservice" {
count = length(var.app_names)
name = var.app_names[count.index]
location = var.Location
resource_group_name = var.ResourceGroup
app_service_plan_id = azurerm_app_service_plan.appserviceplan[count.index].id
https_only = true
site_config {
linux_fx_version = "NODE|10.14"
}
app_settings = {
# "SOME_KEY" = "some-value"
"APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.appService-app_insights.instrumentation_key
}
depends_on = [
azurerm_app_service_plan.appserviceplan,
azurerm_application_insights.appService-app_insights
]
}
variable.tf
variable "ResourceGroup" {
default = "v-XXXXX--ree"
}
variable "Location" {
default = "West US 2"
}
/*variable "app_service_plan_name" {
default = "terra-asp"
}
variable "appservice_name" {
default = "terra-app"
}
*/
variable "prefix" {
default = "pre"
}
variable "App_Insights" {
default = "terra-ai"
}
variable "app_names" {
description = "App Service Names"
type = list(string)
default = ["testapprahuluni12345", "testapp12346", "testapp12347"]
}
variable "app_service_plan_name" {
description = "App Service Plan Name"
type = list(string)
default = ["asp1", "asp2", "asp2"]
}
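One way to handle the shared-plan case (a sketch, assuming the JSON above is saved as webapps.json next to the configuration; the local names are invented for illustration): decode the JSON once, create each distinct plan with for_each, and key every app off its plan by name. This avoids the duplicate "asp2" entry that the count-indexed plan list above would try to create twice.

locals {
  webapps = jsondecode(file("${path.module}/webapps.json")).webapps

  # toset() drops the duplicate "asp2", so a plan shared by several apps is created only once.
  plan_names = toset([for w in local.webapps : w.Appserviceplan])
}

resource "azurerm_app_service_plan" "appserviceplan" {
  for_each            = local.plan_names
  name                = each.key
  location            = var.Location
  resource_group_name = var.ResourceGroup
  kind                = "linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}

resource "azurerm_app_service" "appservice" {
  # Key each app by its (globally unique) name.
  for_each            = { for w in local.webapps : w.Appservice => w }
  name                = each.value.Appservice
  location            = var.Location
  resource_group_name = var.ResourceGroup
  app_service_plan_id = azurerm_app_service_plan.appserviceplan[each.value.Appserviceplan].id
  https_only          = true

  site_config {
    linux_fx_version = "NODE|10.14"
  }
}

The same for_each pattern extends to one azurerm_application_insights per app if each app needs its own instrumentation key, and Terraform orders the plans before the apps automatically because of the reference, so no explicit depends_on or staged applies are needed.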
I'm just starting with Terraform and don't yet grasp some of the basics.
I have two problems.
I have a main.tf file where I initialise Terraform, declare two providers for Azure and Kubernetes, and define two modules.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0.2"
    }
  }
  required_version = ">= 1.1.0"
}

provider "kubernetes" {
  host                   = module.cluster.host
  client_certificate     = module.cluster.client_certificate
  client_key             = module.cluster.client_key
  cluster_ca_certificate = module.cluster.cluster_ca_certificate
}

provider "azurerm" {
  features {}
}

module "cluster" {
  source             = "./modules/cluster"
  location           = var.location
  kubernetes_version = var.kubernetes_version
  ssh_key            = var.ssh_key
}

module "network" {
  source = "./modules/network"
}
Then in modules/cluster/cluster.tf I set up the resource group and the cluster; when I define a new resource there, all the Azure and Kubernetes provider resources are available.
I'm trying to add two new resources, azurerm_public_ip and azurerm_lb, in modules/network/network.tf, but the azurerm resources are not available to select.
If I try to create those two resources in modules/cluster/cluster.tf, they are available and I can write them like so:
resource "azurerm_resource_group" "resource_group" {
name = var.resource_group_name
location = var.location
tags = {
Environment = "Production"
Team = "DevOps"
}
}
resource "azurerm_kubernetes_cluster" "server_cluster" {
...
}
resource "azurerm_public_ip" "public-ip" {
name = "PublicIPForLB"
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
allocation_method = "Static"
}
resource "azurerm_lb" "load-balancer" {
name = "TestLoadBalancer"
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
frontend_ip_configuration {
name = "PublicIPAddress"
public_ip_address_id = azurerm_public_ip.public-ip.id
}
}
Why aren't the providers available in network.tf?
Shouldn't they be available in the network module as they are in the cluster module?
From the docs, the network module should inherit the azurerm provider from main.tf: "If not specified, the child module inherits all of the default (un-aliased) provider configurations from the calling module."
Many thanks for your help.
I tried to replicate the same scenario using the code below, and everything is working as expected, so the issue is most likely caused by the module declaration.
Main tf code as follows:
provider "azurerm" {
features {}
}
provider "kubernetes" {
host = module.cluster.host
client_certificate = module.cluster.client_certificate
client_key = module.cluster.client_key
cluster_ca_certificate = module.cluster.cluster_ca_certificate
}
module "cluster" {
source = "./modules/cluster"
}
module "network" {
source = "./modules/network"
}
NOTE: No further parameters are required in the provider tf file; just declaring the modules is sufficient.
Provider tf file as follows:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0.2"
    }
  }
  required_version = ">= 1.1.0"
}
Cluster file as follows:
resource "azurerm_kubernetes_cluster" "example" {
name = "swarnaexample-aks1"
location = "East Us"
resource_group_name = "*********"
dns_prefix = "exampleaks1"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_D2_v2"
}
identity {
type = "SystemAssigned"
}
tags = {
Environment = "Production"
}
}
output "client_certificate" {
value = azurerm_kubernetes_cluster.example.kube_config.0.client_certificate
sensitive = true
}
output "kube_config" {
value = azurerm_kubernetes_cluster.example.kube_config_raw
sensitive = true
}
network tf file as follows:
resource "azurerm_resource_group" "example" {
name = "***********"
location = "East Us"
}
resource "azurerm_public_ip" "example" {
name = "swarnapIPForLB"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
allocation_method = "Static"
}
resource "azurerm_lb" "example" {
name = "swarnaLB"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
frontend_ip_configuration {
name = "PublicIPadd"
public_ip_address_id = azurerm_public_ip.example.id
}
}
Upon running plan and apply, everything deploys as expected (screenshots omitted).
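If the azurerm resource types still are not resolvable inside modules/network, one thing worth checking (a sketch, assuming a versions.tf placed inside the module; the file name is arbitrary) is declaring the provider requirement in the child module as well, so that terraform init and editor tooling resolve the azurerm plugin for that directory:

# modules/network/versions.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0.2"
    }
  }
}

The child module still inherits the provider configuration (the features block) from the root module; this block only tells Terraform which plugin the module's resource types come from.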
I would like a guide on how to automatically deploy to a newly provisioned AKS cluster after provisioning it with Terraform. For more context, I am building a one-click, all-in-one script for full infrastructure provisioning and deployment. Below is my structure for more understanding.
main.tf
resource "azurerm_kubernetes_cluster" "aks" {
name = var.cluster_name
kubernetes_version = var.kubernetes_version
location = var.location
resource_group_name = var.resource_group_name
dns_prefix = var.cluster_name
default_node_pool {
name = "system"
node_count = var.system_node_count
vm_size = "Standard_DS2_v2"
type = "VirtualMachineScaleSets"
availability_zones = [1, 2, 3]
enable_auto_scaling = false
}
identity {
type = "SystemAssigned"
}
network_profile {
load_balancer_sku = "Standard"
network_plugin = "kubenet"
}
role_based_access_control {
enabled = true
}
}
output.tf
resource "local_file" "kubeconfig" {
depends_on = [azurerm_kubernetes_cluster.aks]
filename = "kubeconfig"
content = azurerm_kubernetes_cluster.aks.kube_config_raw
}
deployment.tf
resource "kubernetes_deployment" "sdc" {
metadata {
name = "sdc"
labels = {
app = "serviceName"
#version = "v1.0"
}
namespace = "default"
}
spec {
replicas = 1
selector {
match_labels = {
app = "serviceName"
}
}
template {
metadata {
labels = {
app = "serviceName"
# version = "v1.0"
}
}
spec {
container {
image = "myImage"
name = "serviceName"
port {
container_port = 80
}
}
}
}
}
depends_on = [
azurerm_kubernetes_cluster.aks
]
}
Everything works perfectly: my kubeconfig file is created and downloaded. My major headache is how to make the terraform apply process use the kubeconfig file it created and also run the deployment, making my Terraform script fully automated. I basically want to provision and then deploy into the newly provisioned cluster, all in one run.
Looking forward to good help.
Thanks guys.
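A common pattern here (a sketch, not the only option): instead of routing through the kubeconfig file on disk, point the kubernetes provider directly at the credentials exported by the AKS resource, so the kubernetes_deployment can be applied in the same run:

provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}

One caveat: configuring a provider from attributes of a resource created in the same apply works, but it is fragile (plans that replace the cluster can fail), which is why many setups split the cluster and the Kubernetes workloads into two Terraform configurations applied in sequence.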
When I create a linked service in Azure Data Factory (ADF) for Databricks with Terraform (using azurerm_data_factory_linked_service_azure_databricks), the linked service shows up only in live mode.
How can I make the linked service available in Git mode, where all the other ADF pipeline configurations are stored?
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.97.0"
    }
    databricks = {
      source = "databrickslabs/databricks"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "databricks" {
  host = azurerm_databricks_workspace.this.workspace_url
}

data "azurerm_client_config" "this" {
}
resource "azurerm_data_factory" "this" {
name = "myadf-9182371362"
resource_group_name = "testrg"
location = "East US"
identity {
type = "SystemAssigned"
}
vsts_configuration {
account_name = "mydevopsorg"
branch_name = "main"
project_name = "adftest"
repository_name = "adftest"
root_folder = "/adf/"
tenant_id = data.azurerm_client_config.this.tenant_id
}
}
resource "azurerm_databricks_workspace" "this" {
name = "mydbworkspace"
resource_group_name = "testrg"
location = "East US"
sku = "standard"
}
data "databricks_node_type" "smallest" {
local_disk = true
depends_on = [
azurerm_databricks_workspace.this
]
}
data "databricks_spark_version" "latest_lts" {
long_term_support = true
depends_on = [
azurerm_databricks_workspace.this
]
}
resource "databricks_cluster" "this" {
cluster_name = "Single Node"
spark_version = data.databricks_spark_version.latest_lts.id
node_type_id = data.databricks_node_type.smallest.id
autotermination_minutes = 20
spark_conf = {
"spark.databricks.cluster.profile" : "singleNode"
"spark.master" : "local[*]"
}
depends_on = [
azurerm_databricks_workspace.this
]
custom_tags = {
"ResourceClass" = "SingleNode"
}
}
data "azurerm_resource_group" "this" {
name = "testrg"
}
resource "azurerm_role_assignment" "example" {
scope = data.azurerm_resource_group.this.id
role_definition_name = "Contributor"
principal_id = azurerm_data_factory.this.identity[0].principal_id
}
resource "azurerm_data_factory_linked_service_azure_databricks" "msi_linked" {
name = "ADBLinkedServiceViaMSI"
data_factory_id = azurerm_data_factory.this.id
resource_group_name = "testrg"
description = "ADB Linked Service via MSI"
adb_domain = "https://${azurerm_databricks_workspace.this.workspace_url}"
existing_cluster_id = databricks_cluster.this.id
msi_work_space_resource_id = azurerm_databricks_workspace.this.id
}
Result in Git mode vs. result in live mode: the linked service appears only in live mode (screenshots omitted).
I'm trying to use Terraform to configure Azure Media Services. I can get the main bulk working (e.g. resource group, storage account, etc.), but I can't find any documentation or examples of creating a content key policy with DRM (Widevine) and a JWT token. I'll also need to add this JSON for the license settings:
{
  "allowed_track_types": "SD_HD",
  "content_key_specs": [
    {
      "track_type": "SD",
      "security_level": 1,
      "required_output_protection": {
        "hdcp": "HDCP_NONE",
        "cgms_flags": null
      }
    }
  ],
  "policy_overrides": {
    "can_play": true,
    "can_persist": true,
    "can_renew": false,
    "rental_duration_seconds": 2592000,
    "playback_duration_seconds": 10800,
    "license_duration_seconds": 604800
  }
}
Current main.tf file:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.26"
    }
  }
}

provider "azurerm" {
  features {}
}
resource "azurerm_resource_group" "rg" {
name = join("", [ var.prefix, "mediaservices" ])
location = var.location
}
resource "azurerm_storage_account" "sa" {
name = join("", [ var.prefix, "storageaccount" ])
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "staging"
}
}
resource "azurerm_storage_container" "sc1" {
name = "archive"
storage_account_name = azurerm_storage_account.sa.name
container_access_type = "private"
}
resource "azurerm_storage_container" "sc2" {
name = "tobeprocessed"
storage_account_name = azurerm_storage_account.sa.name
container_access_type = "private"
}
resource "azurerm_media_services_account" "ams" {
name = join("", [ var.prefix, "mediaservices" ])
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
storage_account {
id = azurerm_storage_account.sa.id
is_primary = true
}
}
output "resourceGroupName" {
value = azurerm_resource_group.rg.name
}
output "containerConnectionString" {
value = azurerm_storage_account.sa.primary_connection_string
}
data "azurerm_subscriptions" "available" {
}
output "subscriptionId" {
value = data.azurerm_subscriptions.available.subscriptions[ 0 ].subscription_id
}
You can find documentation for Widevine licenses in the "Media Services v3 with Widevine license template overview" article, along with the rest of the Media Services content protection documentation.
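On the Terraform side, recent releases of the azurerm provider expose an azurerm_media_content_key_policy resource that covers this case. A hedged sketch (attribute names follow the provider docs; the issuer, audience, and signing key are placeholders; the license JSON above is inlined with jsonencode), which may require raising the provider version constraint beyond 2.26:

resource "azurerm_media_content_key_policy" "drm" {
  name                        = "widevine-jwt-policy"
  resource_group_name         = azurerm_resource_group.rg.name
  media_services_account_name = azurerm_media_services_account.ams.name

  policy_option {
    name = "widevine-option"

    # The Widevine license template is passed as a JSON string.
    widevine_configuration_template = jsonencode({
      allowed_track_types = "SD_HD"
      content_key_specs = [
        {
          track_type     = "SD"
          security_level = 1
          required_output_protection = {
            hdcp = "HDCP_NONE"
          }
        }
      ]
      policy_overrides = {
        can_play                  = true
        can_persist               = true
        can_renew                 = false
        rental_duration_seconds   = 2592000
        playback_duration_seconds = 10800
        license_duration_seconds  = 604800
      }
    })

    # Restrict license delivery to callers presenting a valid JWT.
    token_restriction {
      issuer                      = "urn:issuer"   # placeholder
      audience                    = "urn:audience" # placeholder
      token_type                  = "Jwt"
      primary_symmetric_token_key = base64encode("insecure-example-signing-key") # placeholder
    }
  }
}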
Current state:
I have all services within a cluster and under just one resource_group. My problem is that I have to push all the services every time, and my deploy is getting slow.
What I want to do: I want to split every service within my directory so I can deploy each one separately. Each service now has its own backend, so it can have its own remote state and won't change other things when I deploy. However, can I still have all the services within the same resource_group? If yes, how can I achieve that? If I need to create a resource group for each service that I want to deploy separately, can I still use the same cluster?
main.tf
provider "azurerm" {
version = "2.23.0"
features {}
}
resource "azurerm_resource_group" "main" {
name = "${var.resource_group_name}-${var.environment}"
location = var.location
timeouts {
create = "20m"
delete = "20m"
}
}
resource "tls_private_key" "key" {
algorithm = "RSA"
}
resource "azurerm_kubernetes_cluster" "main" {
name = "${var.cluster_name}-${var.environment}"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
dns_prefix = "${var.dns_prefix}-${var.environment}"
node_resource_group = "${var.resource_group_name}-${var.environment}-worker"
kubernetes_version = "1.18.6"
linux_profile {
admin_username = var.admin_username
ssh_key {
key_data = "${trimspace(tls_private_key.key.public_key_openssh)} ${var.admin_username}#azure.com"
}
}
default_node_pool {
name = "default"
node_count = var.agent_count
vm_size = "Standard_B2s"
os_disk_size_gb = 30
}
role_based_access_control {
enabled = "false"
}
addon_profile {
kube_dashboard {
enabled = "true"
}
}
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "Standard"
}
timeouts {
create = "40m"
delete = "40m"
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
tags = {
Environment = "Production"
}
}
provider "kubernetes" {
version = "1.12.0"
load_config_file = "false"
host = azurerm_kubernetes_cluster.main.kube_config[0].host
client_certificate = base64decode(
azurerm_kubernetes_cluster.main.kube_config[0].client_certificate,
)
client_key = base64decode(azurerm_kubernetes_cluster.main.kube_config[0].client_key)
cluster_ca_certificate = base64decode(
azurerm_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate,
)
}
backend.tf (for main)
terraform {
  backend "azurerm" {}
}
client.tf (service that I want to deploy separately)
resource "kubernetes_deployment" "client" {
metadata {
name = "client"
labels = {
serviceName = "client"
}
}
timeouts {
create = "20m"
delete = "20m"
}
spec {
progress_deadline_seconds = 600
replicas = 1
selector {
match_labels = {
serviceName = "client"
}
}
template {
metadata {
labels = {
serviceName = "client"
}
}
}
}
}
}
resource "kubernetes_service" "client" {
metadata {
name = "client"
}
spec {
selector = {
serviceName = kubernetes_deployment.client.metadata[0].labels.serviceName
}
port {
port = 80
target_port = 80
}
}
}
backend.tf (for client)
terraform {
  backend "azurerm" {
    resource_group_name  = "test-storage"
    storage_account_name = "test"
    container_name       = "terraform"
    key                  = "test"
  }
}
deployment.sh
terraform -v

terraform init \
  -backend-config="resource_group_name=$TF_BACKEND_RES_GROUP" \
  -backend-config="storage_account_name=$TF_BACKEND_STORAGE_ACC" \
  -backend-config="container_name=$TF_BACKEND_CONTAINER"

terraform plan

terraform apply -target="azurerm_resource_group.main" -auto-approve \
  -var "environment=$ENVIRONMENT" \
  -var "tag_version=$TAG_VERSION"
PS: I can build the test resource group from scratch if needed; don't worry about its current state.
PS2: The state files are being saved in the right place, no issue there.
If you want to deploy resources separately, you could take a look at terraform apply with this option:
-target=resource  Resource to target. Operation will be limited to this
                  resource and its dependencies. This flag can be used
                  multiple times.
For example, to deploy just a resource group and its dependencies:
terraform apply -target="azurerm_resource_group.main"
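To keep every service in the same resource group and cluster while each keeps its own state, the per-service configurations can read the shared infrastructure instead of managing it. A sketch using terraform_remote_state (the backend settings mirror the main stack's backend; the kube_* output names are assumptions that the main stack would need to export as outputs):

data "terraform_remote_state" "main" {
  backend = "azurerm"
  config = {
    resource_group_name  = "test-storage"
    storage_account_name = "test"
    container_name       = "terraform"
    key                  = "main" # assumed state key of the main stack
  }
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.main.outputs.kube_host
  client_certificate     = base64decode(data.terraform_remote_state.main.outputs.kube_client_certificate)
  client_key             = base64decode(data.terraform_remote_state.main.outputs.kube_client_key)
  cluster_ca_certificate = base64decode(data.terraform_remote_state.main.outputs.kube_cluster_ca_certificate)
}

This way the client service deploys into the existing cluster (and, by extension, the existing resource group) without its apply ever touching them, so no extra resource groups or clusters are needed.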