Google Cloud SQL instance fails to create using Terraform provider with error "Per-Product Per-Project Service Account is not found" - terraform

We're trying to deploy a Cloud SQL (MSSQL) instance using the google-beta provider with a private IP, and after roughly four to five minutes it fails with the error "Error waiting for Create Instance: Per-Product Per-Project Service Account is not found".
I am able to create a Cloud SQL instance using the service account via the Cloud Shell CLI and manually in Console.
Has anyone encountered this before? Any insight into what may be going wrong would be appreciated.
If you look at the failed resource in the Console, it appears to have been mostly created, but this error is shown.
resource "google_sql_database_instance" "cloud_sql_instance" {
  provider            = google-beta
  name                = var.cloud_sql_instance_name
  region              = var.gcp_region
  database_version    = var.cloud_sql_version
  root_password       = "wearenothardcodingplaceholdertest"
  deletion_protection = var.delete_protection_enabled
  project             = var.gcp_project

  settings {
    tier              = var.cloud_sql_compute_tier
    availability_type = var.cloud_sql_availibility_type
    collation         = var.cloud_sql_collation
    disk_autoresize   = var.cloud_sql_auto_disk_resize
    disk_type         = var.cloud_sql_disk_type

    active_directory_config {
      domain = var.active_directory_domain
    }

    backup_configuration {
      enabled                        = var.cloud_sql_backup_enabled
      start_time                     = var.cloud_sql_backup_starttime
      point_in_time_recovery_enabled = var.cloud_sql_pitr_enabled
      transaction_log_retention_days = var.cloud_sql_log_retention_days

      backup_retention_settings {
        retained_backups = var.cloud_sql_backup_retention_number
        retention_unit   = var.cloud_sql_backup_retention_unit
      }
    }

    ip_configuration {
      ipv4_enabled       = var.cloud_sql_backup_public_ip
      private_network    = data.google_compute_network.vpc_connection.self_link
      require_ssl        = var.cloud_sql_backup_require_ssl
      allocated_ip_range = var.cloud_sql_ip_range_name
    }

    maintenance_window {
      day          = var.cloud_sql_patch_day
      hour         = var.cloud_sql_patch_hour
      update_track = "stable"
    }
  }
}

I just ran into this issue. You need to create a Service Identity for sqladmin.googleapis.com.
resource "google_project_service_identity" "cloudsql_sa" {
  provider = google-beta
  project  = "cool-project"
  service  = "sqladmin.googleapis.com"
}
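If the identity and the instance live in the same configuration, you can also make the ordering explicit. A minimal sketch, assuming the resource names used above (the instance block is abbreviated here, not a complete definition):

```hcl
resource "google_sql_database_instance" "cloud_sql_instance" {
  provider = google-beta
  # ... all of the arguments shown in the question ...

  # Ensure the sqladmin service identity exists before Terraform attempts
  # to create the instance; otherwise the create call can race the
  # identity provisioning and fail with the error above.
  depends_on = [google_project_service_identity.cloudsql_sa]
}
```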

Related

Login error for admin SQL Server during terraform plan

I'm building an Azure infrastructure with Terraform. I need to create a specific DB user for each DB in the server. To create the users I use the "betr-io/mssql" provider with the following script:
resource "mssql_login" "sql_login" {
  server {
    host = "${var.sql_server_name}.database.windows.net"
    # host = azurerm_mssql_server.sqlserver.fully_qualified_domain_name
    login {
      username = var.sql_admin_user
      password = var.sql_admin_psw
    }
  }
  login_name = var.sql_dbuser_username
  password   = var.sql_dbuser_password
  depends_on = [azurerm_mssql_server.sqlserver, azurerm_mssql_database.sqldb]
}

resource "mssql_user" "sql_user" {
  server {
    host = "${var.sql_server_name}.database.windows.net"
    # host = azurerm_mssql_server.sqlserver.fully_qualified_domain_name
    login {
      username = var.sql_admin_user
      password = var.sql_admin_psw
    }
  }
  username   = var.sql_dbuser_username
  password   = var.sql_dbuser_password
  database   = var.sql_db_name
  roles      = var.sql_dbuser_roles
  depends_on = [azurerm_mssql_server.sqlserver, azurerm_mssql_database.sqldb, mssql_login.sql_login]
}
Running terraform plan gives me this error:
Error: unable to read user [sqldb-dev].[dbuser]: login error: mssql: Login failed for user 'usr-admin'.
  with mssql_user.sql_user,
  on main.tf line 346, in resource "mssql_user" "sql_user":
  346: resource "mssql_user" "sql_user" {
I can't understand where this problem might come from. Has anyone had a similar experience?
For completeness of information, the databases are hosted in an elastic pool instance.
The only solution I have found is to destroy the users and recreate them along with the databases.
Unfortunately, I haven't found a way to add the DevOps agents to the SQL Server firewall whitelist.
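For the whitelist problem, one possible workaround (a sketch, not from the original post; it assumes the azurerm_mssql_server.sqlserver resource referenced in the script above): Azure treats a firewall rule whose start and end address are both 0.0.0.0 as "Allow Azure services and resources to access this server", which also covers Microsoft-hosted DevOps agents:

```hcl
# Sketch: lets Azure-hosted services (including hosted DevOps agents)
# through the server firewall. 0.0.0.0-0.0.0.0 is Azure's convention for
# "Allow Azure services", not a literal single address.
resource "azurerm_mssql_firewall_rule" "allow_azure_services" {
  name             = "AllowAzureServices"
  server_id        = azurerm_mssql_server.sqlserver.id
  start_ip_address = "0.0.0.0"
  end_ip_address   = "0.0.0.0"
}
```

Note this opens the server to traffic from any Azure-hosted service, not just your pipelines, so it trades security for convenience.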

workflow fails before runner becomes idle

I am using philips-labs/terraform-aws-github-runner.
module "runner" {
  source  = "philips-labs/github-runner/aws"
  version = "0.39.0"

  aws_region                  = var.region
  enable_organization_runners = true
  environment                 = var.environment
  ghes_url                    = "<github enterprise server>"

  github_app = {
    id             = var.github_app.id
    key_base64     = var.github_app.key_base64
    webhook_secret = var.github_app.webhook_secret
  }

  lambda_security_group_ids         = var.lambda_security_group_ids
  lambda_subnet_ids                 = var.lambda_subnet_ids
  runner_binaries_syncer_lambda_zip = "${path.module}/resources/runner-binaries-syncer.zip"
  runners_lambda_zip                = "${path.module}/resources/runners.zip"
  subnet_ids                        = var.subnet_ids
  webhook_lambda_zip                = "${path.module}/resources/webhook.zip"
  vpc_id                            = var.vpc_id
}
GitHub Enterprise Server: v3.2.8
Workflow event type: check_run
First, there is no registered runner.
I trigger a GitHub Actions workflow.
The workflow immediately fails with the error: No runner matching the specified labels was found: self-hosted
Then the scale-up lambda says "Job not queued" and no runners launch.
Currently, I have to set enable_job_queued_check = false, and the first workflow run while there are no runners will still fail.
I expect the workflow to wait for the runner to become ready.
Do you have any ideas about this?

Subnet is in use and cannot be deleted issue when using pulumi destroy

Due to some issues with the Pulumi CLI, I am using the "Pulumi Azure Pipelines Task" in an Azure DevOps pipeline as follows:
Pulumi up pipeline (pulumi up -s dev)
Pulumi destroy pipeline (pulumi destroy -s dev)
The Pulumi destroy pipeline was working fine until I created an Application Gateway.
After adding the Application Gateway, the Pulumi destroy pipeline fails while running with the following error:
error: Code="InUseSubnetCannotBeDeleted" Message="Subnet ApplicationGatewaySubnet is in use by
/subscriptions/***/resourceGroups/xxx-rg/providers/Microsoft.Network/applicationGateways/xxx-appgw-agic-dev-japaneast/gatewayIPConfigurations/appgw_gateway_ipconfig
and cannot be deleted. In order to delete the subnet, delete all the resources within the subnet. See aka.ms/deletesubnet." Details=[]
I understand that it takes time to delete the Application Gateway, but shouldn't Pulumi take care of this by deleting resources sequentially,
i.e. deleting the subnet only after the Application Gateway deletion has completed?
How do I destroy the stack without hitting the "Xxx is in use and cannot be deleted" issue?
The only workaround I have is commenting out the whole code and then running the "Pulumi up pipeline". But that is not what I want; I want the
"Pulumi destroy pipeline" to destroy the stack properly without any issues.
Update: 2021.11.18
For debugging purposes, I commented out all code except 3 simple resources:
1 Resource Group
1 Virtual Network
1 Subnet
C# Code:
//
// Added a couple of extension methods to JsonElement
//
var mainRgArgs = config.RequireObject<JsonElement>(MainResourceGroupArgs);
var mainRgName = mainRgArgs.GetName();
var mainRgTags = mainRgArgs.GetTags();

var mainResourceGroup = new ResourceGroup(MainResourceGroup, new ResourceGroupArgs {
    ResourceGroupName = mainRgName,
    Tags = mainRgTags
});

var spokeVnetArgs = config.RequireObject<JsonElement>(SpokeVirtualNetworkArgs);
var spokeVnetName = spokeVnetArgs.GetName();
var spokeVnetAddressPrefixes = spokeVnetArgs.Get<List<string>>(AddressPrefixes);
var spokeVnetTags = spokeVnetArgs.GetTags();

var spokeVnetOutput = mainResourceGroup.Name.Apply(rgName => {
    return new VirtualNetwork(SpokeVirtualNetwork, new VirtualNetworkArgs {
        ResourceGroupName = rgName,
        VirtualNetworkName = spokeVnetName,
        AddressSpace = new AddressSpaceArgs {
            AddressPrefixes = spokeVnetAddressPrefixes
        },
        Tags = spokeVnetTags
    });
});

var spokeAksAppSubnetOutput = spokeVnetOutput.Apply(vnet => {
    return Output.Tuple(mainResourceGroup.Name, vnet.Name, vnet.Urn).Apply(tuple => {
        var (rgName, vnetName, vnetUniqueName) = tuple;
        var subnetName = AksApplicationSubnet;
        var subnets = spokeVnetArgs.GetSubnets() ?? new Dictionary<string, string>(); // GetSubnets() -> extension method
        if (!subnets.ContainsKey(subnetName)) {
            throw new PulumiArgumentException($"Value is not set for {nameof(subnetName)}");
        }
        var subnetCidr = subnets[subnetName];
        return new Subnet($"{vnetUniqueName}.{subnetName}", new AzureNative.Network.SubnetArgs {
            SubnetName = subnetName,
            AddressPrefix = subnetCidr,
            VirtualNetworkName = vnetName,
            ResourceGroupName = rgName,
        });
    });
});
From screenshot below, you can see that the delete sequence is wrong (Virtual Network > Resource Group > Subnet):
Info:
using "Pulumi Azure Pipelines Task" (version: 1.0.13) in Azure DevOps pipeline
Pulumi backend: Azure Blob Container & Azure KeyVault
The problem is that you are creating resources inside Apply callbacks, which is highly discouraged. Not only will you not see them in the original preview, but the dependency information is also not preserved, as you discovered. A better approach would be something like this:
var mainResourceGroup = new ResourceGroup(MainResourceGroup, new ResourceGroupArgs {
    ResourceGroupName = mainRgName,
    Tags = mainRgTags
});
// ...
var spokeVnet = new VirtualNetwork(SpokeVirtualNetwork, new VirtualNetworkArgs {
    ResourceGroupName = mainResourceGroup.Name,
    VirtualNetworkName = spokeVnetName,
    AddressSpace = new AddressSpaceArgs {
        AddressPrefixes = spokeVnetAddressPrefixes
    },
    Tags = spokeVnetTags
});
// ...
var spokeAksAppSubnet = new Subnet($"{vnetUniqueName}.{subnetName}", new AzureNative.Network.SubnetArgs {
    SubnetName = subnetName,
    AddressPrefix = subnetCidr,
    VirtualNetworkName = spokeVnet.Name,
    ResourceGroupName = mainResourceGroup.Name,
});
Note that you can assign outputs like mainResourceGroup.Name directly to inputs like ResourceGroupName - there is an implicit conversion operator for that.
You wouldn't be able to use outputs as the first argument of resource constructors (where you set the logical name) but hopefully you don't need to.
If you need an Apply somewhere, you can declare it separately and then use the result in resource assignment.
var myNewOutput = someResource.Prop1.Apply(v => ...);
new SomeOtherResource("name", new SomeOtherResourceArgs
{
    Prop2 = myNewOutput,
    // ...
});
This way the dependencies are correctly preserved for deletion.
If you want to provision multiple subnets, take a look at this issue to see how to do that sequentially if needed.

How do I connect azure sql database to function app in terraform

I am trying to connect an Azure SQL database to a Function App on Azure.
I tried using the "storage_connection_string" key in Terraform, but it is still not working.
Could someone please help with this issue?
I have a Function App deployed in Azure that uses Azure SQL as well as a storage container. This is how it works for me. My Terraform configuration is module-based, so my modules for the database and storage accounts are separate, and they pass the required connection strings to my function app module:
resource "azurerm_function_app" "functions" {
  name                      = "fcn-${var.environment}"
  resource_group_name       = "${var.resource_group}"
  location                  = "${var.resource_location}"
  app_service_plan_id       = "${var.appservice_id}"
  storage_connection_string = "${var.storage_prim_conn_string}"
  https_only                = true

  connection_string {
    name  = "SqlAzureDbConnectionString"
    type  = "SQLAzure"
    value = "${var.fcn_connection_string}"
  }

  tags {
    environment = "${var.environment}"
  }
}
Just remember to check that you have the module outputs as well as the variables in place.
Hope that helps.
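As a sketch of that wiring (the module structure and names here are illustrative, not from the original answer), the database module would expose the connection string as an output, and the root module would pass it into the function app module:

```hcl
# modules/database/outputs.tf (hypothetical)
output "fcn_connection_string" {
  # Hypothetical: build the ADO.NET connection string from the server and
  # database resources defined inside this module.
  value     = "Server=tcp:${azurerm_sql_server.sql.fully_qualified_domain_name},1433;Initial Catalog=${azurerm_sql_database.db.name};User ID=${var.sql_admin};Password=${var.sql_admin_password};"
  sensitive = true
}

# Root main.tf (hypothetical): pass the output into the function app module.
module "function_app" {
  source                = "./modules/function_app"
  fcn_connection_string = module.database.fcn_connection_string
}
```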

How to deploy multiple virtual machines to single cloud service with the new Resource Manager model programmatically?

Currently I am deploying multiple VMs to a single cloud service using this code:
private async Task CreateVirtualMachine()
{
    DeploymentGetResponse deploymentResponse = await _computeManagementClient.Deployments.GetBySlotAsync("myservicename", DeploymentSlot.Production);
    if (deploymentResponse == null)
    {
        var parameters = new VirtualMachineCreateDeploymentParameters
        {
            DeploymentSlot = DeploymentSlot.Production,
            Name = "myservicename",
            Label = "myservicename"
        };
        parameters.Roles.Add(new Role
        {
            OSVirtualHardDisk = new OSVirtualHardDisk
            {
                HostCaching = VirtualHardDiskHostCaching.ReadWrite,
                SourceImageName = "imagename"
            },
            RoleName = "vmname",
            RoleType = VirtualMachineRoleType.PersistentVMRole.ToString(),
            RoleSize = VirtualMachineRoleSize.Small,
            ProvisionGuestAgent = true
        });
        parameters.Roles[0].ConfigurationSets.Add(new ConfigurationSet
        {
            ComputerName = "vmname",
            ConfigurationSetType = ConfigurationSetTypes.LinuxProvisioningConfiguration,
            HostName = "vmname",
            AdminUserName = "adminusername",
            AdminPassword = "adminpass",
            UserName = "username",
            UserPassword = "userpass",
            DisableSshPasswordAuthentication = false,
        });
        parameters.Roles[0].ConfigurationSets.Add(new ConfigurationSet
        {
            ConfigurationSetType = ConfigurationSetTypes.NetworkConfiguration,
            InputEndpoints = new List<InputEndpoint>()
            {
                new InputEndpoint()
                {
                    Name = "HTTP",
                    Protocol = InputEndpointTransportProtocol.Tcp,
                    LocalPort = 80,
                    Port = 80
                }
            }
        });
        var response = await _computeManagementClient.VirtualMachines.CreateDeploymentAsync("myservicename", parameters);
    }
    else
    {
        var createParameters = new VirtualMachineCreateParameters
        {
            OSVirtualHardDisk = new OSVirtualHardDisk
            {
                HostCaching = VirtualHardDiskHostCaching.ReadWrite,
                SourceImageName = "imagename"
            },
            RoleName = "vmname",
            RoleSize = VirtualMachineRoleSize.Small,
            ProvisionGuestAgent = true,
            ConfigurationSets = new List<ConfigurationSet>
            {
                new ConfigurationSet
                {
                    ComputerName = "vmname",
                    ConfigurationSetType = ConfigurationSetTypes.LinuxProvisioningConfiguration,
                    HostName = "vmname",
                    AdminUserName = "adminusername",
                    AdminPassword = "adminpass",
                    UserName = "username",
                    UserPassword = "userpass",
                    DisableSshPasswordAuthentication = false
                },
                new ConfigurationSet
                {
                    ConfigurationSetType = ConfigurationSetTypes.NetworkConfiguration,
                    InputEndpoints = new List<InputEndpoint>()
                    {
                        new InputEndpoint()
                        {
                            Name = "HTTP",
                            Protocol = InputEndpointTransportProtocol.Tcp,
                            LocalPort = 81,
                            Port = 81
                        }
                    }
                }
            }
        };
        var responseCreate = await _computeManagementClient.VirtualMachines.CreateAsync("myservicename", deploymentResponse.Name, createParameters);
    }
}
How can this be done using the new Resource Manager model? I am working in Visual Studio 2015 on an MVC application.
The problem is that when deploying multiple VMs to a cloud service, all of the VMs have the same domain/IP, but I want every VM to have its own domain. I heard that this can be done with Resource Manager, but I really don't know exactly what Resource Manager is or how to use it.
Also, I am aware that deploying each VM to its own cloud service would give a unique domain name to each VM, but that means I must create a new cloud service for every virtual machine, and what I really need is to deploy multiple VMs to a single cloud service.
Can this be done with Resource Manager?
The new Resource Manager model has no concept of cloud services: you simply deploy a VM into a virtual network and connect it directly to the internet via a public IP address. You can then attach any domain name you want to that public IP.
It is not possible to create a cloud service with Resource Manager.
As per the previous answer, the cloud service concept doesn't exist for the Resource Manager deployment model. This article provides an overview of some of the ways you can deploy multiple virtual machines in the same resource group: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-multiple-vms/
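To make the contrast concrete, here is a Terraform-flavoured sketch of the Resource Manager approach (illustrative names; not from either answer, and not the C# SDK the question uses): each VM gets its own public IP, and a domain_name_label on that IP gives each VM its own domain of the form <label>.<region>.cloudapp.azure.com:

```hcl
# Sketch: one public IP per VM, each with its own DNS label, so each VM
# gets its own domain instead of sharing a cloud service address.
resource "azurerm_public_ip" "vm" {
  count               = 2
  name                = "vm-${count.index}-pip"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  allocation_method   = "Dynamic"
  domain_name_label   = "myapp-vm-${count.index}" # -> myapp-vm-0.<region>.cloudapp.azure.com
}
```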
