Workflow fails before runner becomes idle - Terraform

I am using philips-labs/terraform-aws-github-runner with the following configuration:
module "runner" {
source = "philips-labs/github-runner/aws"
version = "0.39.0"
aws_region = var.region
enable_organization_runners = true
environment = var.environment
ghes_url = "<github enterprise server>"
github_app = {
id = var.github_app.id
key_base64 = var.github_app.key_base64
webhook_secret = var.github_app.webhook_secret
}
lambda_security_group_ids = var.lambda_security_group_ids
lambda_subnet_ids = var.lambda_subnet_ids
runner_binaries_syncer_lambda_zip = "${path.module}/resources/runner-binaries-syncer.zip"
runners_lambda_zip = "${path.module}/resources/runners.zip"
subnet_ids = var.subnet_ids
webhook_lambda_zip = "${path.module}/resources/webhook.zip"
vpc_id = var.vpc_id
}
GitHub Enterprise Server: v3.2.8
Workflow event type: check_run
First, there is no registered runner. I trigger a GitHub Actions workflow, and it immediately fails with the error: No runner matching the specified labels was found: self-hosted. Then the scale-up Lambda logs "Job not queued" and no runners are launched.
Currently, I have to set enable_job_queued_check = false, and the first workflow run while there are no runners still fails.
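For reference, that workaround is just an extra argument on the module block above (a minimal sketch; everything else stays as configured earlier):
module "runner" {
  # ... all other arguments as in the configuration above ...

  # Workaround: skip the "is the job still queued?" check in the scale-up lambda
  enable_job_queued_check = false
}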
I expect the workflows to wait until a runner becomes available. Do you have any idea about that?

Related

Custom JDBC Driver AWS Glue Connection

It seems that specifying a JDBC_DRIVER_JAR_URI connection property when defining an AWS Glue connection in Terraform does nothing. When I test the Glue connection, the CloudWatch logs show that Glue is still using the version 9.4 JDBC driver for Postgres.
resource "aws_glue_connection" "glue_connection_2" {
connection_properties = {
JDBC_DRIVER_JAR_URI = "s3://scripts/postgresql.jar"
JDBC_CONNECTION_URL = var.jdbc_connection_url
JDBC_ENGINE_VERSION = "14"
PASSWORD = var.glue_db_password
USERNAME = var.glue_db_user_name
}
name = "${local.glue_connection_name}-custom"
connection_type = "JDBC"
physical_connection_requirements {
availability_zone = var.database_availability_zone
security_group_id_list = var.security_group_id_list
subnet_id = sort(data.aws_subnets.vpc_subnets.ids)[0]
}
}
Is it possible to specify a custom JAR for an AWS Glue connection other than by creating a custom connector for it?

Google Cloud CloudSQL Instance Fails To Create using Terraform Provider With Error "Per-Product Per-Project Service Account is not found"

We're trying to deploy a Cloud SQL (MSSQL) instance using the google-beta provider with a private IP, and after roughly four to five minutes it fails with the error "Error waiting for Create Instance: Per-Product Per-Project Service Account is not found".
I am able to create a Cloud SQL instance using the service account via the Cloud Shell CLI and manually in Console.
Has anyone encountered this before and can they provide any insights as to what may be going wrong?
If you look at the failed resource in the console, it appears to have been mostly created, but this error is shown.
resource "google_sql_database_instance" "cloud_sql_instance" {
provider = google-beta
name = var.cloud_sql_instance_name
region = var.gcp_region
database_version = var.cloud_sql_version
root_password = "wearenothardcodingplaceholdertest"
deletion_protection = var.delete_protection_enabled
project = var.gcp_project
settings {
tier = var.cloud_sql_compute_tier
availability_type = var.cloud_sql_availibility_type
collation = var.cloud_sql_collation
disk_autoresize = var.cloud_sql_auto_disk_resize
disk_type = var.cloud_sql_disk_type
active_directory_config {
domain = var.active_directory_domain
}
backup_configuration {
enabled = var.cloud_sql_backup_enabled
start_time = var.cloud_sql_backup_starttime
point_in_time_recovery_enabled = var.cloud_sql_pitr_enabled
transaction_log_retention_days = var.cloud_sql_log_retention_days
backup_retention_settings {
retained_backups = var.cloud_sql_backup_retention_number
retention_unit = var.cloud_sql_backup_retention_unit
}
}
ip_configuration {
ipv4_enabled = var.cloud_sql_backup_public_ip
private_network = data.google_compute_network.vpc_connection.self_link
require_ssl = var.cloud_sql_backup_require_ssl
allocated_ip_range = var.cloud_sql_ip_range_name
}
maintenance_window {
day = var.cloud_sql_patch_day
hour = var.cloud_sql_patch_hour
update_track = "stable"
}
}
}
I just ran into this issue. You need to create a Service Identity for sqladmin.googleapis.com.
resource "google_project_service_identity" "cloudsql_sa" {
provider = google-beta
project = "cool-project"
service = "sqladmin.googleapis.com"
}
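To make sure the identity exists before the instance is created, you can add an explicit dependency on it (a hedged sketch, reusing the resource names from the snippets in this thread):
resource "google_sql_database_instance" "cloud_sql_instance" {
  # ... arguments as in the configuration above ...

  # Ensure the sqladmin service identity has been created first
  depends_on = [google_project_service_identity.cloudsql_sa]
}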

Subnet is in use and cannot be deleted issue when using pulumi destroy

Due to an issue with the Pulumi CLI, I am using the "Pulumi Azure Pipelines Task" in an Azure DevOps pipeline as follows:
Pulumi up pipeline (pulumi up -s dev)
Pulumi destroy pipeline (pulumi destroy -s dev)
The Pulumi destroy pipeline was working fine until I created an Application Gateway. After adding the Application Gateway, the destroy pipeline fails with the following error:
error: Code="InUseSubnetCannotBeDeleted" Message="Subnet ApplicationGatewaySubnet is in use by
/subscriptions/***/resourceGroups/xxx-rg/providers/Microsoft.Network/applicationGateways/xxx-appgw-agic-dev-japaneast/gatewayIPConfigurations/appgw_gateway_ipconfig
and cannot be deleted. In order to delete the subnet, delete all the resources within the subnet. See aka.ms/deletesubnet." Details=[]
I understand that it takes time to delete the Application Gateway, but shouldn't Pulumi take care of this by deleting resources sequentially,
i.e. delete the subnet only after the Application Gateway deletion has completed?
How do I destroy the stack without hitting the "Xxx is in use and cannot be deleted" issue?
The workaround is to comment out all the code and then run the "Pulumi up pipeline", but that is not what I want; I want the
"Pulumi destroy pipeline" to destroy the stack properly without any issue.
Update: 2021.11.18
For debugging purposes, I commented out all code except three simple resources:
1 Resource Group
1 Virtual Network
1 Subnet
C# Code:
//
// Added couple of extension methods to JsonElement
//
var mainRgArgs = config.RequireObject<JsonElement>(MainResourceGroupArgs);
var mainRgName = mainRgArgs.GetName();
var mainRgTags = mainRgArgs.GetTags();

var mainResourceGroup = new ResourceGroup(MainResourceGroup, new ResourceGroupArgs {
    ResourceGroupName = mainRgName,
    Tags = mainRgTags
});

var spokeVnetArgs = config.RequireObject<JsonElement>(SpokeVirtualNetworkArgs);
var spokeVnetName = spokeVnetArgs.GetName();
var spokeVnetAddressPrefixes = spokeVnetArgs.Get<List<string>>(AddressPrefixes);
var spokeVnetTags = spokeVnetArgs.GetTags();

var spokeVnetOutput = mainResourceGroup.Name.Apply(rgName => {
    return new VirtualNetwork(SpokeVirtualNetwork, new VirtualNetworkArgs {
        ResourceGroupName = rgName,
        VirtualNetworkName = spokeVnetName,
        AddressSpace = new AddressSpaceArgs {
            AddressPrefixes = spokeVnetAddressPrefixes
        },
        Tags = spokeVnetTags
    });
});

var spokeAksAppSubnetOutput = spokeVnetOutput.Apply(vnet => {
    return Output.Tuple(mainResourceGroup.Name, vnet.Name, vnet.Urn).Apply(tuple => {
        var (rgName, vnetName, vnetUniqueName) = tuple;
        var subnetName = AksApplicationSubnet;
        var subnets = spokeVnetArgs.GetSubnets() ?? new Dictionary<string, string>(); // GetSubnets() -> extension method
        if (!subnets.ContainsKey(subnetName)) {
            throw new PulumiArgumentException($"Value is not set for {nameof(subnetName)}");
        }
        var subnetCidr = subnets[subnetName];
        return new Subnet($"{vnetUniqueName}.{subnetName}", new AzureNative.Network.SubnetArgs {
            SubnetName = subnetName,
            AddressPrefix = subnetCidr,
            VirtualNetworkName = vnetName,
            ResourceGroupName = rgName,
        });
    });
});
The deletion order in the pipeline output is wrong: Virtual Network > Resource Group > Subnet.
Info:
using "Pulumi Azure Pipelines Task" (version: 1.0.13) in Azure DevOps pipeline
Pulumi backend: Azure Blob Container & Azure KeyVault
The problem is that you are creating resources inside Apply callbacks, which is highly discouraged. Not only will you not see them in the original preview, but the dependency information is also not preserved, as you discovered. A better approach would be something like this:
var mainResourceGroup = new ResourceGroup(MainResourceGroup, new ResourceGroupArgs {
    ResourceGroupName = mainRgName,
    Tags = mainRgTags
});

// ...

var spokeVnet = new VirtualNetwork(SpokeVirtualNetwork, new VirtualNetworkArgs {
    ResourceGroupName = mainResourceGroup.Name,
    VirtualNetworkName = spokeVnetName,
    AddressSpace = new AddressSpaceArgs {
        AddressPrefixes = spokeVnetAddressPrefixes
    },
    Tags = spokeVnetTags
});

// ...

var spokeAksAppSubnetOutput = new Subnet($"{vnetUniqueName}.{subnetName}", new AzureNative.Network.SubnetArgs {
    SubnetName = subnetName,
    AddressPrefix = subnetCidr,
    VirtualNetworkName = spokeVnet.Name,
    ResourceGroupName = mainResourceGroup.Name,
});
Note that you can assign outputs like mainResourceGroup.Name directly to inputs like ResourceGroupName; there is an implicit conversion operator for that.
You wouldn't be able to use outputs as the first argument of resource constructors (where you set the logical name), but hopefully you don't need to.
If you need an Apply somewhere, you can declare it separately and then use the result when assigning resource arguments.
var myNewOutput = someResource.Prop1.Apply(v => ...);
new SomeOtherResource("name", new SomeOtherResourceArgs
{
    Prop2 = myNewOutput,
    // ...
});
This way the dependencies are correctly preserved for deletion.
If you want to provision multiple subnets, take a look at this issue to see how to do that sequentially if needed.
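As a rough sketch (not taken from that issue), one way to force sequential creation is to chain DependsOn between the subnets. The subnet map below is hypothetical, and spokeVnet / mainResourceGroup are the variables from the snippet above:
// Hypothetical subnet map; in the original code this would come from spokeVnetArgs.GetSubnets().
var subnetCidrs = new Dictionary<string, string>
{
    ["aks-app"] = "10.1.0.0/24",
    ["aks-ingress"] = "10.1.1.0/24",
};

Resource previous = null;
foreach (var kvp in subnetCidrs)
{
    var options = new CustomResourceOptions();
    if (previous != null)
    {
        // Each subnet waits for the previous one, so they are created
        // (and later deleted) one at a time.
        options.DependsOn = new[] { previous };
    }

    var subnet = new Subnet($"{SpokeVirtualNetwork}.{kvp.Key}", new AzureNative.Network.SubnetArgs
    {
        SubnetName = kvp.Key,
        AddressPrefix = kvp.Value,
        VirtualNetworkName = spokeVnet.Name,
        ResourceGroupName = mainResourceGroup.Name,
    }, options);

    previous = subnet;
}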

How to solve: AWS CodeBuild always sets secondary source repository to public?

I have the following CodeBuild secondary source configuration:
secondary_sources {
  source_identifier   = "AutomationSuite"
  type                = "BITBUCKET"
  location            = "https://<my repo>.git"
  git_clone_depth     = 1
  report_build_status = true
  insecure_ssl        = false

  git_submodules_config {
    fetch_submodules = true
  }
}
After applying, the AWS CodeBuild console always shows the secondary source as a public repository.
How can I change this behaviour so it points to the repository in my Bitbucket account?

GitHub webhook created twice when using Terraform aws_codebuild_webhook

I'm creating an AWS CodeBuild project using the following (partial) Terraform configuration:
resource "aws_codebuild_webhook" "webhook" {
project_name = "${aws_codebuild_project.abc-web-pull-build.name}"
branch_filter = "master"
}
resource "github_repository_webhook" "webhook" {
name = "web"
repository = "${var.github_repo}"
active = true
events = ["pull_request"]
configuration {
url = "${aws_codebuild_webhook.webhook.payload_url}"
content_type = "json"
insecure_ssl = false
secret = "${aws_codebuild_webhook.webhook.secret}"
}
}
For some reason, two webhooks are created on GitHub for that project: one with the events pull_request and push, and a second with only pull_request (the only one I expected).
I've tried removing the first block (aws_codebuild_webhook), even though the Terraform documentation gives an example with both:
https://www.terraform.io/docs/providers/aws/r/codebuild_webhook.html
But then I'm in a pickle, because there isn't a way to acquire the payload_url the webhook requires, which I currently get from aws_codebuild_webhook.webhook.payload_url.
I'm not sure what the right approach is here; I'd appreciate any suggestion.
