Restore database on another server in Azure with Terraform

How do you restore an Azure SQL database from a backup onto another server using Terraform?
The Terraform docs mention a create mode "RestoreExternalBackup". How would one use that?
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mssql_database

I researched this issue and found it to be a discrepancy between the Terraform and Azure documentation. I also opened an issue on the GitHub repo with this info.
RestoreExternalBackup is listed as a possible value for CreateMode in the Azure API documentation for databases. However, that documentation doesn't describe how to use it, and the option should not be available there.
Looking at the Managed Database documentation, it clearly defines how to use the RestoreExternalBackup option. Oddly enough, the Terraform documentation doesn't list any create modes for managed databases. https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mssql_managed_database
When trying to use RestoreExternalBackup to create a database, the error message "Missing storage container URI" indicates that the option requires a pointer to a storage account. But storage account information is not a valid property when creating a database resource, only for a managed database resource.
Database docs: https://learn.microsoft.com/en-us/rest/api/sql/2022-05-01-preview/databases/create-or-update?tabs=HTTP
Managed database docs: https://learn.microsoft.com/en-us/rest/api/sql/2022-05-01-preview/managed-databases/create-or-update?tabs=HTTP
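For illustration, here is roughly what a RestoreExternalBackup call looks like against the managed-databases REST API linked above. This is an untested sketch: the subscription ID, resource names, container URI, SAS token and backup file name are all placeholders, and obtaining the ARM bearer token is assumed to happen elsewhere.

import requests

# All values below are placeholders; supply your own subscription, resource
# names, storage container URI/SAS token and an AAD bearer token for ARM.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "example-rg"
managed_instance = "example-managed-instance"
database_name = "example-restored-db"
token = "<ARM bearer token>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Sql"
    f"/managedInstances/{managed_instance}/databases/{database_name}"
    "?api-version=2022-05-01-preview"
)

body = {
    "location": "westeurope",
    "properties": {
        # These storage properties are what the "Missing storage container
        # URI" error asks for, and they are only valid for managed databases.
        "createMode": "RestoreExternalBackup",
        "storageContainerUri": "https://examplestorage.blob.core.windows.net/backups",
        "storageContainerSasToken": "<container SAS token>",
        "collation": "SQL_Latin1_General_CP1_CI_AS",
        "autoCompleteRestore": True,
        "lastBackupName": "full_backup.bak",
    },
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())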

Related

ADF Pipeline Errors - RequestContentTooLarge and InvalidContentLink

The ADF pipeline release to the test Data Factory instance is failing with a RequestContentTooLarge error (screenshot omitted).
To overcome that, I modified the pipeline by adding an Azure Blob File Copy step that stores the linked templates in a storage account and references them from the pipeline for the deployment. After this change, however, I get another error: InvalidContentLink: Unable to download deployment content from 'https://xxx.blob.core.windows.net/adf-arm-templates/ArmTemplate_0.json?***Sanitized Azure Storage Account Shared Access Signature***'. The tracking Id is 'xxxxx-xxxx-x-xxxx-xx'. Please see https://aka.ms/arm-deploy for usage details.
I have tried using a SAS token both at the container level and at the storage account level. I have also ensured that the agent and the storage account are on the same VNet, and I have even tried removing the firewall restrictions, but I still get the same InvalidContentLink error.
The modified pipeline with the Azure Blob File Copy step (screenshot omitted):
How do I resolve this issue?
InvalidContentLink: Unable to download deployment content from 'https://xxx.blob.core.windows.net/adf-arm-templates/ArmTemplate_0.json?Sanitized Azure Storage Account Shared Access Signature'. The tracking Id is 'xxxxx-xxxx-x-xxxx-xx'. Please see https://aka.ms/arm-deploy for usage details.
This error can occur because you are linking to a template that is not present in the storage account.
Make sure you provide the correct URL for the nested template and that it is accessible.
Also, if your storage account has firewall rules, you can't link a nested template from it.
Make sure your storage account, container, and blob are publicly available. To achieve this:
Provide a blob-level Shared Access Signature URL: select the file, click "...", and then click Generate SAS.
Refer to the Azure documentation on nested templates for more background.
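If you would rather script this than click through the portal, a blob-level SAS can be generated with the azure-storage-blob Python package. A minimal sketch, with the account, key, container and blob names as placeholders:

from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholders; substitute your own account, key, container and blob.
account_name = "examplestorage"
account_key = "<storage account key>"
container_name = "adf-arm-templates"
blob_name = "ArmTemplate_0.json"

# Read-only SAS valid for one hour; this is what the linked-template URL
# needs so ARM can download the nested template during deployment.
sas = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

url = f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas}"
print(url)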

Accessing Azure Storage Blob from an AKS cluster

A little context: I'm having to migrate a project from AWS, where I'm currently using ECS, to Azure, where I'll be using AKS since their ACS (ECS equivalent) is deprecated.
This is a regular Django app whose configuration variables are fetched from a server-config.json hosted on a private S3 bucket; the EC2 instance has the correct role with S3FullAccess.
I've been looking into reproducing that same behavior but with Azure Blob Storage instead, having achieved no success whatsoever :-(.
I tried using the Service Principal concept and adding it to the AKS cluster with Storage Blob Data Owner roles, but that doesn't seem to work. Overall it's been quite the frustrating experience; maybe I'm just having a hard time grasping the right way to use the permissions/scopes. The fact that the AKS cluster creates its own resource group is something unfathomable, but I've attempted attaching the policies to it as well, to no avail. I then moved on to a solution indicated by Microsoft.
I managed to bind my AKS pods with the correct User Managed Identity through their indicated solution aad-pod-identity, but I feel like I'm missing something. I assigned Storage Blob Data Owner/Contributor to the identity, but still, when I enter the pods and try to access a Blob (using the python sdk), I get a resource not found message.
Is what I'm trying to achieve possible at all? Or will I have to change to a solution using Azure Keyvault/something along those lines?
First of all, you can use AKS Engine, which is more or less ACS for Kubernetes now.
As for access to the blob storage, you don't have to use Managed Service Identity; you can just use the account name/key (which is a bit less secure, but a lot less error-prone, and more examples exist). The fact that you are getting a "resource not found" error most likely means your auth part is fine and you just don't have access to the resource. According to this, Storage Blob Data Contributor should be fine if you assigned it at the proper scope. For this to work 100%, just give your identity Contributor access at the subscription level; that way it is guaranteed to work.
I've found an example of using Python with MSI (here). You should start with that (and grant your identity Contributor access) and verify you can list resource groups. When that works, getting blob reads working should be trivial.
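For reference, reading a blob with the pod's managed identity looks roughly like this with the current azure-identity and azure-storage-blob Python packages. The storage account URL, container, blob and the user-assigned identity's client ID are placeholders:

from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

# Placeholders: your storage account URL and the client ID of the
# user-assigned identity that aad-pod-identity binds to the pod.
credential = ManagedIdentityCredential(client_id="<user-assigned identity client id>")
service = BlobServiceClient(
    account_url="https://examplestorage.blob.core.windows.net",
    credential=credential,
)

# A wrong account/container name produces "resource not found" just like a
# permissions problem does, so first list what the identity can actually see.
container = service.get_container_client("config")
for blob in container.list_blobs():
    print(blob.name)

data = container.download_blob("server-config.json").readall()
print(data.decode("utf-8"))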

Hashicorp Terraform Remote State and Azure

I realize that Terraform supports Azure, and I've actually been able to get Terraform working with Azure by doing the following:
Create a storage account
Create a blob container
Plugged in access key
Created a file titled backend.tfvars with resource_group_name, storage_account_name, container_name, access_key, key values.
Added the following to main.tf:
terraform {
  backend "azurerm" {}
}
I ran terraform init -backend-config="backend.tfvars"
When I look in the blob container, I see the myapp.tfstate file, which means that I've been successful, right?
What exactly does this allow me to do? I understand that my state file is now saved in Azure, but... how does that help me? I've looked around for documentation explaining this, but for some reason haven't been able to find anything.
Charles is right that the state is "just" stored in another place, but he is wrong that there is no difference. There is a difference, and the main difference shows up when you have a team of people working with TF.
You see, the backend is not only used to store state, but also to signal that an operation is currently in progress. This is called locking. With centralized storage, none of your teammates can accidentally try to change resources while somebody else is already doing so.
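With the azurerm backend, that lock is implemented as a lease on the state blob. Purely to illustrate the mechanism (Terraform does this for you; the connection string and names below are placeholders), it is the equivalent of:

from azure.storage.blob import BlobClient

# Placeholder connection details; Terraform's azurerm backend does the
# equivalent of this internally when a plan/apply starts.
blob = BlobClient.from_connection_string(
    conn_str="<storage connection string>",
    container_name="tfstate",
    blob_name="myapp.tfstate",
)

# While this (infinite) lease is held, any other writer's attempt to
# acquire it fails, which is exactly how a teammate's concurrent
# apply gets rejected until the first one releases the lock.
lease = blob.acquire_lease(lease_duration=-1)
try:
    pass  # ... read and write the state ...
finally:
    lease.release()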
Actually, storing the Terraform state in an Azure Storage Account is, I think, not that different from storing it locally; it just changes where the file lives. But note the description in the documentation:
By default, data stored in an Azure Blob is encrypted before being persisted to the storage infrastructure. When Terraform needs state, it is retrieved from the backend and stored in memory on your development system. In this configuration, the state is secured in Azure Storage and not written to your local disk.
So there does seem to be at least a small benefit for the security of the data.

How can I programmatically (C#) read the autoscale settings for a WebApp?

I'm trying to build a small program to change the autoscale settings for our Azure WebApps, using the Microsoft.WindowsAzure.Management.Monitoring and Microsoft.WindowsAzure.Management.WebSites NuGet packages.
I have been roughly following the guide here.
However, we are interested in scaling WebApps / App Services rather than Cloud Services, so I am trying to use the same code to read the autoscale settings but providing a resource ID for our WebApp. I have already got the credentials required for making a connection (using a browser window popup for Active Directory authentication, but I understand we can use X.509 management certificates for non-interactive programs).
This is the request I'm trying to make. Credentials already established, and an exception is thrown earlier if they're not valid.
AutoscaleClient autoscaleClient = new AutoscaleClient(credentials);
var resourceId = AutoscaleResourceIdBuilder.BuildWebSiteResourceId(webspaceName: WebSpaceNames.NorthEuropeWebSpace, serverFarmName: "Default2");
AutoscaleSettingGetResponse get = autoscaleClient.Settings.Get(resourceId); // exception here
The WebApp (let's call it "MyWebApp") is part of an App Service Plan called "Default2" (Standard: 1 small), in a Resource Group called "WebDevResources", in the North Europe region. I expect that my problem is that I am using the wrong names to build the resourceId in the code - the naming conventions in the library don't map well onto what I can see in the Azure Portal.
I'm assuming that BuildWebSiteResourceId is the correct method to call, see MSDN documentation here.
However the two parameters it takes are webspaceName and serverFarmName, neither of which match anything in the Azure portal (or Google). I found another example which seemed to be using the WebApp's geo region for webSpaceName, so I've used the predefined value for North Europe where our app is hosted.
While trying to find the correct value for serverFarmName in the Azure Portal, I found the Resource ID for the App Service Plan, which looks like this:
/subscriptions/{subscription-guid}/resourceGroups/WebDevResources/providers/Microsoft.Web/serverfarms/Default2
That resource ID isn't valid for the call I'm trying to make, but it does support the idea that a 'serverfarm' is the same as an App Service Plan.
When I run the code, regardless of whether the resourceId parameters seem to be correct or garbage, I get this error response:
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
{"Code":"SettingNotFound","Message":"Could not find the autoscale settings."}
</string>
So, how can I construct the correct resource ID for my WebApp or App Service Plan? Or alternatively, is there a different tree I should be barking up to programmatically manage WebApp scaling?
Update:
The solution below got the info I wanted. I also found the Azure resource explorer at resources.azure.com extremely useful to browse existing resources and find the correct names. For example, the name for my autoscale settings is actually "Default2-WebDevResources", i.e. "{AppServicePlan}-{ResourceGroup}" which I wouldn't have expected.
There is a preview service, https://resources.azure.com/, where you can inspect all your resources easily. If you search for autoscale in the UI you will easily find the settings for your resource. It will also show you how to call the relevant REST API endpoint to read or update that resource.
It's a great tool for revealing a lot of details for your deployed resources and it will actually give you an ARM template stub for the resource you are looking at.
And to answer your question, you could programmatically call the REST API from a client with updated settings for autoscale. The REST API is one way of doing this, the SDK another and PowerShell a third.
The guide you're following is based on the Azure Service Management model, aka Classic mode, which is deprecated and exists mainly for backward compatibility.
You should use the latest Microsoft.Azure.Insights NuGet package for getting the autoscale settings.
Sample code using the nuget above is as below:
using Microsoft.Azure.Management.Insights;
using Microsoft.Rest;
//... Get necessary values for the required parameters
var client = new InsightsManagementClient(new TokenCredentials(token));
client.AutoscaleSettings.Get(resourceGroupName, autoScaleSettingName);
Besides, autoscale settings are a resource under the "Microsoft.Insights" provider, not the "Microsoft.Web" provider, which explains why you were not able to find them with your serverfarm resource ID.
See the REST API Reference below for getting the autoscale settings.
GET
https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/microsoft.insights/autoscaleSettings/{autoscale-setting-name}?api-version={api-version}
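A quick way to exercise that endpoint, sketched in Python with requests. The subscription ID and bearer token are placeholders, the setting name follows the "{AppServicePlan}-{ResourceGroup}" pattern noted in the update above, and 2015-04-01 is assumed as a valid api-version for autoscaleSettings:

import requests

subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "WebDevResources"
setting_name = "Default2-WebDevResources"  # "{AppServicePlan}-{ResourceGroup}"
token = "<AAD bearer token for management.azure.com>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/microsoft.insights"
    f"/autoscaleSettings/{setting_name}?api-version=2015-04-01"
)

resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())  # profiles, capacities and rules for the setting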

Cannot connect to DocumentDb directly after having deployed the DocumentDb account

I have an ARM template that I use to deploy a DocumentDB account as well as other Azure resources to a resource group. I want my ARM template to set up a Stream Analytics job that uses the DocumentDB as output. For that to work, the DocumentDB account created by the ARM template needs to have a database and a collection set up as well. I cannot find a way to do this from an ARM template, so I have written a PowerShell cmdlet to create the database and collection for me.
The Stream Analytics job cannot be created by the first ARM template since it depends on having the database and collection created first. Instead I have to divide the deployment into two ARM templates, the first setting up the DocDb account and the second setting up the SA job.
The problem is that I cannot create a database in the DocDB account directly after having deployed the account via the ARM template. I get an exception with the following message: "The remote name could not be resolved: 'test.documents.azure.com'" when I try to execute the CreateDatabaseAsync method with the DocDbEndpoint and AuthKey I get back from the ARM template deployment.
Are there any timing issues after deploying Azure resources using an ARM template before you can access them programmatically? This does not seem to be a problem with other Azure resources created this way.
Any help on this matter is highly appreciated, as well as guidance on good practices for working with ARM templates together with DocumentDB and Stream Analytics jobs.
Update 2016-03-23
Code for setting up the connection to the DocumentDB to create the database.
Uri endpointUri = new Uri(documentDbEndPoint);
DocumentClient client = new DocumentClient(endpointUri, authKey);
var db = await client.CreateDatabaseAsync(new Database { Id = databaseId });
return db;
Where the documentDbEndPoint is in the form of: https://name.documents.azure.com:443/ and name is the name of my DocDB account just created by the ARM template deployment.
I have the code in a library which I can either call from a Console application or from a Powershell script by loading the library with:
Add-Type -Path <path to library dll file>
No matter whether I use the PowerShell script or the console application, I get the same error if I try to create a database right after creating the DocDB account with the ARM template. If I wait an hour or so, both the PowerShell script and the console application work and can create a database in the account.
It seems there is some kind of timing issue where Azure needs to set up DNS records for the newly created DocDB account before it can be accessed using the DocDB API.
Update 2 2016-03-23
Just tried to create a DocDB account directly from the portal and doing this instead of creating it from an ARM template makes it possible to create a database in the account using my powershell script and console application immediately.
This timing issue has been fixed, and you should now be able to use the account right after the ARM template deployment.
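If you ever run into such a propagation delay again, one crude guard is to poll name resolution before creating the client. A sketch in Python (the question's code is C#, but the idea carries over):

import socket
import time

def wait_for_dns(host: str, timeout_s: int = 600, interval_s: int = 10) -> None:
    """Poll until `host` resolves, or raise after `timeout_s` seconds."""
    deadline = time.monotonic() + timeout_s
    while True:
        try:
            socket.getaddrinfo(host, 443)
            return  # name resolves; safe to create the client now
        except socket.gaierror:
            if time.monotonic() > deadline:
                raise TimeoutError(f"DNS for {host} did not appear in time")
            time.sleep(interval_s)

# e.g. for an endpoint of https://name.documents.azure.com:443/
wait_for_dns("name.documents.azure.com")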
