I have a question regarding Azure SQL Database.
Is there any way we can create a table in an Azure SQL database through a script?
Let's suppose I provision the database through Terraform and want a table to be created as part of that provisioning,
so the whole thing is fully automated.
Or is there an option to create the table through GitHub Actions? Our code is on GitHub and we are using GitHub Actions to provision all our Azure resources.
resource "azurerm_mssql_server" "data_sql_server" {
name = "data-sql-server-${var.env}"
resource_group_name = var.resource_group_name
location = var.location
version = "12.0"
administrator_login = var.mssql_db_username
administrator_login_password = var.mssql_db_password
minimum_tls_version = "1.2"
public_network_access_enabled = true
tags = var.common_tags
}
resource "azurerm_mssql_database" "data_sql_db" {
name = "data-sql-db-${var.env}"
server_id = azurerm_mssql_server.data_sql_server.id
collation = "SQL_Latin1_General_CP1_CI_AS"
license_type = "LicenseIncluded"
max_size_gb = 250
sku_name = "S1"
tags = var.common_tags
}
So here is the Terraform code. What I want is: whenever it provisions the managed SQL server and managed database, it creates a table in the DB as well, so I don't have to create that table manually.
The answer is: create a workflow file at .github/workflows/sql-deploy.yml and add this code:
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: azure/login@v1 # Azure login required to add a temporary firewall rule
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/sql-action@v2
        with:
          connection-string: ${{ secrets.AZURE_SQL_CONNECTION_STRING }}
          path: './script.sql'
Now you need to create two files in your GitHub repository.
The first file is Database.sqlproj:
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build">
  <Sdk Name="Microsoft.Build.Sql" Version="0.1.3-preview" />
  <PropertyGroup>
    <Name>reactions</Name>
    <DSP>Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider</DSP>
    <ModelCollation>1033, CI</ModelCollation>
  </PropertyGroup>
</Project>
The second file is script.sql:
-- Guard so re-runs of the workflow (it triggers on every push) don't fail once the table exists
IF OBJECT_ID(N'[dbo].[Product]', N'U') IS NULL
CREATE TABLE [dbo].[Product](
    [ProductID] [int] IDENTITY(1,1) PRIMARY KEY NOT NULL,
    [Name] [nvarchar](100) NOT NULL,
    [ProductNumber] [nvarchar](25) NOT NULL,
    [Color] [nvarchar](15) NULL,
    [StandardCost] [money] NOT NULL,
    [ListPrice] [money] NOT NULL,
    [Size] [nvarchar](5) NULL,
    [Weight] [decimal](8, 2) NULL,
    [ProductCategoryID] [int] NULL,
    [ProductModelID] [int] NULL,
    [ModifiedDate] [datetime] NOT NULL
)
And here is how to build the connection string and save it in GitHub secrets.
Place the connection string from the Azure Portal in GitHub secrets as AZURE_SQL_CONNECTION_STRING. Connection string format is:
Server=<server.database.windows.net>;User ID=<user>;Password=<password>;Initial Catalog=<database>.
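If you would rather keep everything inside Terraform instead of a separate workflow, a common workaround is a null_resource with a local-exec provisioner that runs sqlcmd against the new database right after it is created. A minimal sketch, assuming sqlcmd is installed on the machine running terraform apply and that a server firewall rule allows its IP:

resource "null_resource" "create_table" {
  # Re-run the script if the database is ever recreated
  triggers = {
    database_id = azurerm_mssql_database.data_sql_db.id
  }

  # Assumes sqlcmd is on PATH and script.sql sits next to the configuration
  provisioner "local-exec" {
    command = "sqlcmd -S data-sql-server-${var.env}.database.windows.net -d data-sql-db-${var.env} -U ${var.mssql_db_username} -P ${var.mssql_db_password} -i script.sql"
  }

  depends_on = [azurerm_mssql_database.data_sql_db]
}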
I am setting up External-DNS with Terraform. Per the documentation, I have to manually create an azure.json file and mount it as a secret volume. The directions also state:
The Azure DNS provider expects, by default, that the configuration
file is at /etc/kubernetes/azure.json
{
  "tenantId": "01234abc-de56-ff78-abc1-234567890def",
  "subscriptionId": "01234abc-de56-ff78-abc1-234567890def",
  "resourceGroup": "MyDnsResourceGroup",
  "aadClientId": "01234abc-de56-ff78-abc1-234567890def",
  "aadClientSecret": "uKiuXeiwui4jo9quae9o"
}
I then run kubectl create secret generic azure-config-file --from-file=/local/path/to/azure.json to mount the secret as a file.
The problem is that those values are dynamic, and I need to do this automatically per a CI/CD pipeline. I'm using Terraform Kubernetes resources, and here I've used the kubernetes_secret resource.
resource "kubernetes_secret" "azure_config_file" {
metadata {
name = "azure-config-file"
}
data = {
tenantId = data.azurerm_subscription.current.tenant_id
subscriptionId = data.azurerm_subscription.current.subscription_id
resourceGroup = azurerm_resource_group.k8s.name
aadClientId = azuread_application.sp_externaldns_connect_to_dns_zone.application_id
aadClientSecret = azuread_application_password.sp_externaldns_connect_to_dns_zone.value
}
depends_on = [
kubernetes_namespace.external_dns,
]
}
The secret gets mounted, but the pod never sees it, and the result is a CrashLoopBackOff. This may not be the best direction.
How do I automate this process with Terraform and get it mounted correctly?
For reference, this is the related section of the YAML manifest
...
volumeMounts:
- name: azure-config-file
  mountPath: /etc/kubernetes
  readOnly: true
volumes:
- name: azure-config-file
  secret:
    secretName: azure-config-file
    items:
    - key: externaldns-config.json
      path: azure.json
This is the Terraform version of using the --from-file flag with kubectl.
Basically, you'll add the name of the file and its contents per the structure of the data block below.
resource "kubernetes_secret" "azure_config_file" {
metadata {
name = "azure-config-file"
}
data = { "azure.json" = jsonencode({
tenantId = data.azurerm_subscription.current.tenant_id
subscriptionId = data.azurerm_subscription.current.subscription_id
resourceGroup = data.azurerm_resource_group.rg.name
aadClientId = azuread_application.sp_externaldns_connect_to_dns_zone.application_id
aadClientSecret = azuread_application_password.sp_externaldns_connect_to_dns_zone.value
})
}
}
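One thing to watch: the key in data must match the key referenced under items in the pod spec. The manifest above maps key: externaldns-config.json to path: azure.json, so with the secret written as shown you would either rename the data key or drop the items mapping. A sketch of the first option (only the data block changes):

  data = {
    # Must match the `key:` entry under `items:` in the volume spec;
    # the volume then mounts it as azure.json
    "externaldns-config.json" = jsonencode({
      tenantId        = data.azurerm_subscription.current.tenant_id
      subscriptionId  = data.azurerm_subscription.current.subscription_id
      resourceGroup   = data.azurerm_resource_group.rg.name
      aadClientId     = azuread_application.sp_externaldns_connect_to_dns_zone.application_id
      aadClientSecret = azuread_application_password.sp_externaldns_connect_to_dns_zone.value
    })
  }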
I'm attempting to deploy an AKS cluster, and a role assignment for the system-assigned managed identity it creates, via Terraform, but I'm getting a 403 response:
azurerm_role_assignment.acrpull_role: Creating...
╷
│ Error: authorization.RoleAssignmentsClient#Create: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '626eac40-c9dd-44cc-a528-3c3d3e069e85' with object id '626eac40-c9dd-44cc-a528-3c3d3e069e85' does not have authorization to perform action 'Microsoft.Authorization/roleAssignments/write' over scope '/subscriptions/7b73e02c-dbff-4eb7-9d73-e73a2a17e818/resourceGroups/myaks-rg/providers/Microsoft.ContainerRegistry/registries/aksmattcloudgurutest/providers/Microsoft.Authorization/roleAssignments/c144ad6d-946f-1898-635e-0d0d27ca2f1c' or the scope is invalid. If access was recently granted, please refresh your credentials."
│
│ with azurerm_role_assignment.acrpull_role,
│ on main.tf line 53, in resource "azurerm_role_assignment" "acrpull_role":
│ 53: resource "azurerm_role_assignment" "acrpull_role" {
│
╵
This is only occurring in an Azure DevOps pipeline. My pipeline looks like the following...
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- task: TerraformInstaller@0
  inputs:
    terraformVersion: '1.0.7'
- task: TerraformCLI@0
  inputs:
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    backendType: 'azurerm'
    backendServiceArm: 'Matt Local Service Connection'
    ensureBackend: true
    backendAzureRmResourceGroupName: 'tfstate'
    backendAzureRmResourceGroupLocation: 'UK South'
    backendAzureRmStorageAccountName: 'tfstateq7nqv'
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: 'terraform.tfstate'
    allowTelemetryCollection: true
- task: TerraformCLI@0
  inputs:
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    environmentServiceName: 'Matt Local Service Connection'
    allowTelemetryCollection: true
- task: TerraformCLI@0
  inputs:
    command: 'validate'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    allowTelemetryCollection: true
- task: TerraformCLI@0
  inputs:
    command: 'apply'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    environmentServiceName: 'Matt Local Service Connection'
    allowTelemetryCollection: false
I'm using the terraform tasks from here - https://marketplace.visualstudio.com/items?itemName=charleszipp.azure-pipelines-tasks-terraform
This is my terraform file
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "TerraformCluster" {
  name     = "terraform-cluster"
  location = "UK South"
}

resource "azurerm_kubernetes_cluster" "TerraformClusterAKS" {
  name                = "terraform-cluster-aks1"
  location            = azurerm_resource_group.TerraformCluster.location
  resource_group_name = azurerm_resource_group.TerraformCluster.name
  dns_prefix          = "terraform-cluster-aks1"

  network_profile {
    network_plugin = "azure"
  }

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}

data "azurerm_container_registry" "this" {
  depends_on = [
    azurerm_kubernetes_cluster.TerraformClusterAKS
  ]
  provider            = azurerm
  name                = "aksmattcloudgurutest"
  resource_group_name = "myaks-rg"
}

resource "azurerm_role_assignment" "acrpull_role" {
  scope                = data.azurerm_container_registry.this.id
  role_definition_name = "AcrPull"
  principal_id         = azurerm_kubernetes_cluster.TerraformClusterAKS.identity[0].principal_id
}
Where am I going wrong here?
The Service Principal in AAD associated with your ADO Service Connection ('Matt Local Service Connection') will need to be assigned the Owner role at the scope of the resource, or above (depending on where else you will be assigning permissions). You can read details about the various roles here; the two most commonly used are Owner and Contributor, the key difference being that Owner allows managing role assignments.
As part of this piece of work, you should also familiarize yourself with the principle of least privilege (if you do not already know about it). Applied here: if the Service Principal only needs Owner at the resource level, don't assign it Owner at the resource group or subscription level just because that is more convenient. You can always widen the scope later, but it is much harder to undo the damage (assuming a malicious or inexperienced actor) of an overly permissive role assignment after it has been exploited.
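For illustration, a hedged sketch of that one-time grant expressed in Terraform, executed with credentials that are themselves allowed to write role assignments (the variable holding the service connection's service principal object ID is an assumed placeholder):

resource "azurerm_role_assignment" "pipeline_sp_owner" {
  # Scope as narrowly as possible, per least privilege: here just the ACR
  scope                = data.azurerm_container_registry.this.id
  role_definition_name = "Owner"
  principal_id         = var.pipeline_sp_object_id # assumed variable, the SP's object ID
}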
I tried everything to get this existing storage account to re-connect to the Azure DevOps pipeline to enable Terraform deployments.
Attempted and did not work: breaking the lease on the tfstate, removing the tfstate, updating the lease on the tfstate, inline commands in ADO via PowerShell and Bash to purge Terraform, re-installing the Terraform plugin, etc.
What worked:
What ended up working was to create a new storage account with a new storage container and a new SAS token.
This overcame the 403 Forbidden error when Azure DevOps accessed the blob containing the tfstate for Terraform deployments. It does not explain the how or the why; access controls / IAM / access policies did not change. Tearing down and recreating the storage containing the tfstate with the exact same settings, under a different storage account name, worked.
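If you repoint a pipeline the same way, remember that the backend configuration (or the matching task inputs) must also be updated to the new account. A minimal sketch, with assumed names:

terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate"      # assumed
    storage_account_name = "tfstatenew01" # the newly created account (assumed name)
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}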
I am trying to execute Terraform scripts through Azure DevOps. I am not able to get apply and validate working through different tasks; although terraform plan succeeds, terraform apply fails with:
##[error]TypeError: Cannot read property 'includes' of null
Here are the Terraform tasks I am using. I have tried two different tasks:
1.
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV2@2
  displayName: 'Terraform : apply -auto-approve'
  inputs:
    command: apply
    workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform'
    commandOptions: '-auto-approve'
    environmentServiceNameAzureRM: 'ps-vs-sc'
    backendAzureRmResourceGroupName: '$(rgname)'
    backendAzureRmStorageAccountName: $(strname)
    backendAzureRmContainerName: $(tfContainer)
    backendAzureRmKey: '$(storagekey)'
2.
- task: TerraformTaskV2@2
  inputs:
    provider: 'azurerm'
    command: 'apply'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform'
    commandOptions: '--auto-approve'
    environmentServiceNameAzureRM: 'ps-vs1-sc'
Here is my terraform file
provider "azurerm" {
features {}
}
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "2.74.0"
}
}
}
data "azurerm_api_management" "example" {
name = var.apimName
resource_group_name = var.rgName
}
resource "azurerm_api_management_api" "example" {
name = var.apName
resource_group_name = var.rgName
api_management_name = var.apimname
revision = "1"
display_name = "Example API1"
path = "example1"
protocols = ["https"]
service_url = "http://123.0.0.0:8000"
subscription_required = true
import {
content_format = "openapi+json"
content_value = #{storageaccountlink}#
}
The Terraform file looks fine; there is no issue with it. Can you check whether you are using the Azure Service Principal method?
A Service Principal is considered good practice for DevOps within your CI/CD pipeline. It is used as an identity to authenticate within your Azure subscription so you can deploy the relevant Terraform code.
You can follow this document for a detailed demo of deploying Terraform using Azure DevOps.
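For reference, a minimal sketch of configuring the azurerm provider to authenticate with a service principal; in a pipeline these values are usually supplied via the ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_SUBSCRIPTION_ID and ARM_TENANT_ID environment variables rather than hard-coded (the variables below are assumed):

provider "azurerm" {
  features {}

  # Service principal credentials; prefer the ARM_* environment variables in CI/CD
  client_id       = var.client_id
  client_secret   = var.client_secret
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}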
If you have already followed the above steps and are still facing the same issue, the problem is likely the 'Configuration directory' setting in the Terraform 'Validate and Apply' release pipeline step. Changing it to the path containing the build artifacts should fix the issue. Also check that the workingDirectory you have provided is correct.
A similar (but not identical) problem was raised here; check whether it solves your problem.
I have added the full pipeline details here for reference - https://gist.github.com/PrakashRajanSakthivel/e6d8e03044a51d74803499aca75a258c
I have the following piece of Terraform code, where Terraform fetches the SQL admin credentials from a key vault. When I change the administrator login and password in the key vault and then run Terraform again to update the SQL server, it destroys the SQL database and SQL server.
Is this standard behavior, or can I change it? Recreating the resources is not really feasible in a production environment. I know a lifecycle block could prevent the deletion of a resource, but if I am correct, that would then break the pipeline.
data "azurerm_key_vault_secret" "sql_admin_user_secret" {
name = var.sql_admin_user_secret_name
key_vault_id = data.azurerm_key_vault.key_vault.id
}
data "azurerm_key_vault_secret" "sql_admin_password_secret" {
name = var.sql_admin_password_secret_name
key_vault_id = data.azurerm_key_vault.key_vault.id
}
resource "azurerm_sql_server" "sql_server" {
name = var.sql_server_name
resource_group_name = var.resource_group_name
location = var.location
version = var.sql_server_version
administrator_login = data.azurerm_key_vault_secret.sql_admin_user_secret.value
administrator_login_password = data.azurerm_key_vault_secret.sql_admin_password_secret.value
}
resource "azurerm_sql_database" "sql_database" {
name = var.sql_database_name
resource_group_name = var.resource_group_name
location = var.location
server_name = azurerm_sql_server.sql_server.name
edition = var.sql_edition
requested_service_objective_name = var.sql_service_level
}
I could add something like this, but it only prevents a destroy and ignores changes in those fields, respectively, which again is not really an option.
lifecycle {
  prevent_destroy = true
  ignore_changes  = ["administrator_login", "administrator_login_password"]
}
Update:
The way of working is to never update the administrator_login. administrator_login_password can be updated separately, which doesn't cause the instance to be recreated.
As per the official doc, if you change administrator_login, the resource is expected to be recreated. However, if you only change administrator_login_password, it should be updated in place.
administrator_login - (Required) The administrator login name for the new server. Changing this forces a new resource to be created.
There is not much that can be done here, since Terraform is communicating with the Azure API, which is not designed to update the administrator login of Azure SQL without creating a new resource.
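A hedged sketch of that way of working: keep administrator_login as a fixed value (a plain variable here, var.sql_admin_login, which is assumed) and read only the password from Key Vault, so a password rotation is applied in place without recreating the server:

resource "azurerm_sql_server" "sql_server" {
  name                = var.sql_server_name
  resource_group_name = var.resource_group_name
  location            = var.location
  version             = var.sql_server_version

  # Fixed login: changing this forces a new resource, so never rotate it
  administrator_login = var.sql_admin_login # assumed variable

  # Rotating only the password results in an in-place update
  administrator_login_password = data.azurerm_key_vault_secret.sql_admin_password_secret.value
}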
Within API Management, I created an API that enables calling a serverless function app. Now I would like to deploy this functionality automatically. Here are the possibilities I saw on the internet:
Create and configure the API Management instance through the portal (this is not what I call an automatic deployment)
Use PowerShell commands (unfortunately I am working with Linux)
ARM (Azure Resource Manager): this is not easy, and I did not find how to create an API with an Azure function app
Terraform: same as ARM, it is not clear to me how to create an API with an Azure function app
If someone has experience, links, or ideas, I would be very thankful.
Regards,
We are currently using Terraform for all our Azure infrastructure including API Management and I would strongly recommend it.
It creates and updates everything we want, including API policies, and has a relatively small learning curve.
You can start learning here:
https://learn.hashicorp.com/terraform?track=azure#azure
The docs for APIM are here:
https://www.terraform.io/docs/providers/azurerm/r/api_management.html
Once the initial learning curve is done the rest is easy and the benefits are many.
Azure PowerShell is 100% cross-platform now, so that's an option. Here are some samples: https://learn.microsoft.com/en-us/azure/api-management/powershell-samples
You can also use ARM Templates to spin it up. Configuring it is a lot harder. You can map any of these calls to the ARM Template.
Terraform - I think it's still in the works: https://github.com/terraform-providers/terraform-provider-azurerm/issues/1177. But I wouldn't go that way.
ARM is the way to go.
You can combine it with:
Azure resource manager API for deployment
API Management API for things that ARM doesn't support (yet)
Have a look at the Azure API Management DevOps Resource Kit:
https://github.com/Azure/azure-api-management-devops-resource-kit
I believe the most convenient way to automate the deployment of Azure APIM is dotnet-apim. It's a cross-platform tool that you can easily use on your dev machine or in a CI/CD pipeline.
Make sure you have .NET Core installed.
Install the dotnet-apim tool.
In a YAML file, you define the list of APIVersionSets, APIs, Products, Backends, Tags, etc. This YAML file defines what you want to deploy to APIM; you can keep it in source control to track the history of changes. The following YAML file defines two version sets, APIs, and products, along with their policies.
version: 0.0.1 # Required
apimServiceName: $(apimServiceName) # Required, must match name of an apim service deployed in the specified resource group

apiVersionSets:
  - name: Set1
    displayName: API Set 1
    description: Contains Set 1 APIs.
    versioningScheme: Segment
  - name: Set2
    displayName: API Set 2
    description: Contains Set 2 APIs.
    versioningScheme: Segment

apis:
  - name: API1
    displayName: API v1
    openApiSpec: $(apimBasePath)\Apis\OpenApi.json # Required, can be url or local file
    policy: $(apimBasePath)\Apis\ApiPolicy.xml
    path: api/sample1
    apiVersion: v1
    apiVersionSetId: Set1
    apiRevision: 1
    products: AutomationTests, SystemMonitoring
    protocols: https
    subscriptionRequired: true
    isCurrent: true
    operations:
      customer_get: # it's operation id
        policy: $(apimBasePath)\Apis\HealthCheck\HealthCheckPolicy.xml:::BackendUrl=$(attachmentServiceUrl)
    subscriptionKeyParameterNames:
      header: ProviderKey
      query: ProviderKey
  - name: API2
    displayName: API2 v1 [Staging]
    openApiSpec: $(apimBasePath)\Apis\OpenApi.json # Required, can be url or local file
    policy: $(apimBasePath)\Apis\ApiPolicy.xml
    path: api/sample2
    apiVersion: v1
    apiVersionSetId: Set2
    apiRevision: 1
    products: AutomationTests, SystemMonitoring
    protocols: https
    subscriptionRequired: true
    isCurrent: true
    subscriptionKeyParameterNames:
      header: ProviderKey
      query: ProviderKey

products:
  - name: AutomationTests
    displayName: AutomationTests
    description: Product for automation tests
    subscriptionRequired: true
    approvalRequired: true
    subscriptionsLimit: 1
    state: published
    policy: $(apimBasePath)\Products\AutomationTests\policy.xml
  - name: SystemMonitoring
    displayName: SystemMonitoring
    description: Product for system monitoring
    subscriptionRequired: true
    approvalRequired: true
    subscriptionsLimit: 1
    state: published
    policy: $(apimBasePath)\Products\SystemMonitoring\policy.xml

outputLocation: $(apimBasePath)\output
linkedTemplatesBaseUrl: $(linkedTemplatesBaseUrl) # Required if 'linked' property is set to true
The $(variableName) syntax defines variables inside the YAML file, which makes customization easier in CI/CD scenarios.
The next step is to transform the YAML file into ARM templates, which Azure can understand:
dotnet-apim --yamlConfig "c:/apim/definition.yml"
Then you have to deploy the generated ARM templates to Azure, which is explained here.
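If the rest of your infrastructure is already in Terraform, one hedged option for that last step is azurerm_resource_group_template_deployment. The template file name below is an assumption; check your configured outputLocation for the actual name:

resource "azurerm_resource_group_template_deployment" "apim_definition" {
  name                = "apim-definition"
  resource_group_name = var.apim_resource_group_name # assumed variable
  deployment_mode     = "Incremental"

  # Generated by dotnet-apim into outputLocation; exact file name is an assumption
  template_content = file("${path.module}/output/master.template.json")
}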
Using the test-api (if you do the Microsoft demo for API Management, you should recognize it), here's a snippet of Terraform that does work. It does not include the resource group (thisrg).
resource "azurerm_api_management" "apimgmtinstance" {
name = "${var.base_apimgmt_name}-${var.env_name}-apim"
location = azurerm_resource_group.thisrg.location
resource_group_name = azurerm_resource_group.thisrg.name
publisher_name = "Marc Pub"
publisher_email = "marc#trash.com"
sku_name = var.apimgmt_size
/* policy {
xml_content = <<XML
<policies>
<inbound />
<backend />
<outbound />
<on-error />
</policies>
XML
} */
}
resource "azurerm_api_management_product" "apiMgmtProductContoso" {
product_id = "contoso-marc"
display_name = "Contoso Marc"
description = "this is a test"
subscription_required = true
approval_required = true
api_management_name = azurerm_api_management.apimgmtinstance.name
resource_group_name = azurerm_resource_group.thisrg.name
published = true
subscriptions_limit = 2
terms = "you better accept this or else... ;-)"
}
resource "azurerm_api_management_api" "testapi" {
description = "this is a mock test"
display_name = "Test API"
name = "test-api"
protocols = ["https"]
api_management_name = azurerm_api_management.apimgmtinstance.name
resource_group_name = azurerm_resource_group.thisrg.name
// version = "0.0.1"
revision = "1"
path = ""
subscription_required = true
}
data "azurerm_api_management_api" "testapi_data" {
name = azurerm_api_management_api.testapi.name
api_management_name = azurerm_api_management.apimgmtinstance.name
resource_group_name = azurerm_resource_group.thisrg.name
revision = "1"
}
resource "azurerm_api_management_api_operation" "testapi_getop" {
operation_id = "test-call"
api_name = data.azurerm_api_management_api.testapi_data.name
api_management_name = data.azurerm_api_management_api.testapi_data.api_management_name
resource_group_name = data.azurerm_api_management_api.testapi_data.resource_group_name
display_name = "Test call"
method = "GET"
url_template = "/test"
description = "test of call"
response {
status_code = 200
description = ""
representation {
content_type = "application/json"
sample = "{\"sampleField\": \"test\"}"
}
}
}
resource "azurerm_api_management_api_operation_policy" "testapi_getop_policy" {
api_name = azurerm_api_management_api_operation.testapi_getop.api_name
api_management_name = azurerm_api_management_api_operation.testapi_getop.api_management_name
resource_group_name = azurerm_api_management_api_operation.testapi_getop.resource_group_name
operation_id = azurerm_api_management_api_operation.testapi_getop.operation_id
xml_content = <<XML
<policies>
<inbound>
<mock-response status-code="200" content-type="application/json"/>
</inbound>
</policies>
XML
}
Terraform now mostly supports Azure API Management. I've been implementing most of an existing Azure API Management instance in Terraform using a combination of the resources documented at https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/api_management and terraform import. Do the import in a separate folder: terraform import writes into a terraform.tfstate file, and if you mix that up with the terraform.tfstate file generated (via terraform plan/apply) for the .tf files you are writing, you could accidentally delete the resource you are importing from. Yay...
It mostly works, except for an API that was modified at the 'All operations' level. I can do it for specific operations (GET blah, or POST blahblah), but for all operations... I don't yet know.
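To make the separate-folder workflow concrete, a hedged sketch: in a scratch directory, declare a stub for the existing instance and import it, then inspect the generated terraform.tfstate to fill in the real attribute values (all names below are placeholders):

# Stub matching the existing APIM instance; values are placeholders
resource "azurerm_api_management" "imported" {
  name                = "existing-apim-name"
  location            = "UK South"
  resource_group_name = "existing-rg"
  publisher_name      = "Existing Publisher"
  publisher_email     = "publisher@example.com"
  sku_name            = "Developer_1"
}

# Then run:
#   terraform import azurerm_api_management.imported \
#     /subscriptions/<sub-id>/resourceGroups/existing-rg/providers/Microsoft.ApiManagement/service/existing-apim-name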