I have a connection string in the following format:
jdbc:oracle:thin:@(DESCRIPTION = (LOAD_BALANCE=ON)
(ADDRESS = (PROTOCOL = tcp)(HOST = aaa)(PORT = 1531))
(ADDRESS = (PROTOCOL = tcp)(HOST = bbb)(PORT = 1526))
(ADDRESS = (PROTOCOL = tcp)(HOST = ccc)(PORT = 1526))
(ADDRESS = (PROTOCOL = tcp)(HOST = ddd)(PORT = 1526))
(CONNECT_DATA = (SERVER=dedicated)(SERVICE_NAME=a.b.org))
)
How can I use cx_Oracle.connect,
connection = cx_Oracle.connect( .... )
using a connection string in the format specified above?
It is actually quite simple. You can do the following:
dsn = """(DESCRIPTION = (LOAD_BALANCE=ON)
(ADDRESS = (PROTOCOL = tcp)(HOST = aaa)(PORT = 1531))
(ADDRESS = (PROTOCOL = tcp)(HOST = bbb)(PORT = 1526))
(ADDRESS = (PROTOCOL = tcp)(HOST = ccc)(PORT = 1526))
(ADDRESS = (PROTOCOL = tcp)(HOST = ddd)(PORT = 1526))
(CONNECT_DATA = (SERVER=dedicated)(SERVICE_NAME=a.b.org))
)
"""
cx_Oracle.connect("user", "password", dsn)
Effectively, any connect string that you would find in a tnsnames.ora file can also be passed directly as the dsn parameter to cx_Oracle.connect.
I'm having some issues coding a dynamic block for Front Door in Terraform. I have found a good working example of one here: https://github.com/spy86/terraform-azure-front-door/blob/main/front_door.tf
However, my Front Door setup is not as complex as that person's, and I do not need everything they have done.
What I am trying to achieve is to put two backend_pools on my Front Door to enable multiple regions. The only way to do this is to bring in dynamic blocks, yet when I do this I am getting an error:
│ Error: Unsupported attribute
│
│   on frontdoor.tf line 96, in resource "azurerm_frontdoor" "jctestingfrontdoor":
│   96: for_each = var.backend_pool_settings.value.backend[*]
│    ├────────────────
│    │ var.backend_pool_settings is a list of object, known only after apply
│
│ Can't access attributes on a list of objects. Did you mean to access an attribute for a specific element of the list, or across all elements of the list?
Here is my Frontdoor code:
Main.tf
resource "azurerm_frontdoor" "jctestingfrontdoor" {
depends_on = [
azurerm_key_vault.jctestingenv_keyvault,
]
name = "testingfrontdoor"
resource_group_name = azurerm_resource_group.Terraform.name
routing_rule {
name = "projroutingrule"
accepted_protocols = ["Http", "Https"]
patterns_to_match = ["/*"]
frontend_endpoints = ["projfrontendendpoint", "${local.frontendendpoint2}"]
forwarding_configuration {
forwarding_protocol = "MatchRequest"
backend_pool_name = "projbackendpool"
}
}
backend_pool_load_balancing {
name = "projloadbalancesettings"
sample_size = 255
successful_samples_required = 1
}
backend_pool_health_probe {
name = "projhealthprobesettings"
path = "/health/probe"
protocol = "Https"
interval_in_seconds = 240
}
dynamic "backend_pool" {
for_each = var.backend_pool_settings[*]
content {
name = var.backend_pool_settings.name
load_balancing_name = var.backend_pool_settings.load_balancing_name
health_probe_name = var.backend_pool_settings.health_probe_name
dynamic "backend" {
for_each = var.backend_pool_settings.backend
content {
address = var.backend_pool_settings.address
host_header = var.backend_pool_settings.host_header
http_port = var.backend_pool_settings.http_port
https_port = var.backend_pool_settings.https_port
priority = var.backend_pool_settings.priority
weight = var.backend_pool_settings.weight
enabled = var.backend_pool_settings.enabled
}
}
}
}
frontend_endpoint {
name = "projfrontendendpoint"
host_name = format("testingfrontdoor.azurefd.net")
}
frontend_endpoint {
name = local.frontendendpoint2
host_name = format("portal-staging.terraform.example")
}
}
resource "azurerm_frontdoor_custom_https_configuration" "portal_staging_https_config" {
depends_on = [
azurerm_frontdoor.jctestingfrontdoor
]
frontend_endpoint_id = "${azurerm_frontdoor.jctestingfrontdoor.id}/frontendEndpoints/${local.frontendendpoint2}"
custom_https_provisioning_enabled = true
custom_https_configuration {
certificate_source = "AzureKeyVault"
azure_key_vault_certificate_secret_name = "imported-cert"
azure_key_vault_certificate_vault_id = azurerm_key_vault.jctestingenv_keyvault.id
}
}
variables.tf
variable "backend_pool_settings" {
description = "backend pool stettings for frontdoor"
type = object({
name = string
backend = list(object({
address = string
host_header = string
http_port = number
https_port = number
weight = number
priority = number
enabled = bool
}))
load_balancing_name = string
health_probe_name = string
})
}
locals.tf
locals {
frontendendpoint2 = "projfrondoordnsname"
backendpool1 = "uksouth"
backendpool2 = "westeurope"
}
inputvariables.tfvars
backend_pool_settings = (
{
name = "uksouth"
backend = {
address = "portal-staging-testing1.terraform.example"
host_header = "portal-staging-testing1.terraform.example"
http_port = 80
https_port = 443
priority = 1
weight = 50
enabled = true
}
load_balancing_name = "projloadbalancesettings"
health_probe_name = "projloadbalancesettings"
},
{
name = "westeurope"
backend = {
address = "portal-staging-testing2.terraform.example"
host_header = "portal-staging-testing2.terraform.example"
http_port = 80
https_port = 443
priority = 1
weight = 50
enabled = true
}
load_balancing_name = "projloadbalancesettings"
health_probe_name = "projloadbalancesettings"
})
I have coded the variables as a list of objects, but I'm not sure if that's the right thing to do, and I'm not sure if I should be splitting the backend_pool into two dynamic blocks like in the example.
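For reference, one thing to note about the dynamic blocks above: inside a dynamic block the current element is exposed through an iterator named after the block label, so the content arguments should read from backend_pool.value and backend.value rather than from the variable itself. A minimal sketch of that pattern, assuming backend_pool_settings were declared as a list(object):
dynamic "backend_pool" {
  for_each = var.backend_pool_settings
  content {
    name                = backend_pool.value.name
    load_balancing_name = backend_pool.value.load_balancing_name
    health_probe_name   = backend_pool.value.health_probe_name
    dynamic "backend" {
      # iterate the nested list of backends for this pool
      for_each = backend_pool.value.backend
      content {
        address     = backend.value.address
        host_header = backend.value.host_header
        http_port   = backend.value.http_port
        https_port  = backend.value.https_port
        priority    = backend.value.priority
        weight      = backend.value.weight
        enabled     = backend.value.enabled
      }
    }
  }
}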
UPDATE:
After working through my code, I have simplified it a bit more:
resource "azurerm_frontdoor" "jctestingfrontdoor" {
depends_on = [
azurerm_key_vault.jctestingenv_keyvault,
]
name = "testingfrontdoor"
resource_group_name = azurerm_resource_group.Terraform.name
routing_rule {
name = "projroutingrule"
accepted_protocols = ["Http", "Https"]
patterns_to_match = ["/*"]
frontend_endpoints = ["projfrontendendpoint", "${local.frontendendpoint2}"]
forwarding_configuration {
forwarding_protocol = "MatchRequest"
backend_pool_name = "projbackendpool"
}
}
backend_pool_load_balancing {
name = "projloadbalancesettings"
sample_size = 255
successful_samples_required = 1
}
backend_pool_health_probe {
name = "projhealthprobesettings"
path = "/health/probe"
protocol = "Https"
interval_in_seconds = 240
}
backend_pool {
name = "projbackendpool"
dynamic "backend" {
for_each = var.backend_pool_settings.value.backend[*]
content {
address = backend.address
host_header = backend.host_header
http_port = backend.http_port
https_port = backend.https_port
priority = backend.priority
weight = backend.weight
enabled = backend.enabled
}
}
load_balancing_name = "projloadbalancesettings"
health_probe_name = "projhealthprobesettings"
}
frontend_endpoint {
name = "projfrontendendpoint"
host_name = format("testingfrontdoor.azurefd.net")
}
frontend_endpoint {
name = local.frontendendpoint2
host_name = format("portal-staging.terraform.example")
}
}
Now the error I'm getting is:
│ Error: Unsupported attribute
│
│   on frontdoor.tf line 96, in resource "azurerm_frontdoor" "jctestingfrontdoor":
│   96: for_each = var.backend_pool_settings.value.backend[*]
│    ├────────────────
│    │ var.backend_pool_settings is a list of object, known only after apply
│
│ Can't access attributes on a list of objects. Did you mean to access an attribute for a specific element of the list, or across all elements of the list?
I have managed to fix this by reworking the map variable. Basically, Front Door does not require the backend object to be spelled out in the variable, as it already knows it is building a backend. I also played around with a few other bits of code and got it working; see my code below:
main.tf
resource "azurerm_frontdoor" "jctestingfrontdoor" {
depends_on = [
azurerm_key_vault.jctestingenv_keyvault,
]
name = "testingfrontdoor"
resource_group_name = azurerm_resource_group.terraform.name
routing_rule {
name = "projroutingrule"
accepted_protocols = ["Http", "Https"]
patterns_to_match = ["/*"]
frontend_endpoints = ["projfrontendendpoint", "${local.frontendendpoint2}"]
forwarding_configuration {
forwarding_protocol = "MatchRequest"
backend_pool_name = "projbackendpool"
}
}
backend_pool_load_balancing {
name = "projloadbalancesettings"
sample_size = 255
successful_samples_required = 1
}
backend_pool_health_probe {
name = "projhealthprobesettings"
path = "/health/probe"
protocol = "Https"
interval_in_seconds = 240
}
backend_pool {
name = "projbackendpool"
dynamic "backend" {
for_each = var.backend_pool_settings
content {
address = backend.value.address
host_header = backend.value.host_header
http_port = backend.value.http_port
https_port = backend.value.https_port
priority = backend.value.priority
weight = backend.value.weight
enabled = backend.value.enabled
}
}
load_balancing_name = "projloadbalancesettings"
health_probe_name = "projhealthprobesettings"
}
frontend_endpoint {
name = "projfrontendendpoint"
host_name = format("testingfrontdoor.azurefd.net")
}
frontend_endpoint {
name = local.frontendendpoint2
host_name = format("portal-staging.terraform.example")
}
}
resource "azurerm_frontdoor_custom_https_configuration" "portal_staging_https_config" {
depends_on = [
azurerm_frontdoor.jctestingfrontdoor
]
frontend_endpoint_id = "${azurerm_frontdoor.jctestingfrontdoor.id}/frontendEndpoints/${local.frontendendpoint2}"
custom_https_provisioning_enabled = true
custom_https_configuration {
certificate_source = "AzureKeyVault"
azure_key_vault_certificate_secret_name = "imported-cert"
azure_key_vault_certificate_vault_id = azurerm_key_vault.jctestingenv_keyvault.id
}
}
variables.tf
variable "backend_pool_settings" {
description = "backend pool stettings for frontdoor"
type = map(object({
address = string
host_header = string
http_port = number
https_port = number
weight = number
priority = number
enabled = bool
}))
}
inputvariables.tfvars
backend_pool_settings = {
backendone = {
address = "portal-staging-testing1.terraform.example"
host_header = "portal-staging-testing1.terraform.example"
http_port = 80
https_port = 443
priority = 1
weight = 50
enabled = true
},
backendtwo = {
address = "portal-staging-testing2.terraform.example"
host_header = "portal-staging-testing2.terraform.example"
http_port = 80
https_port = 443
priority = 1
weight = 50
enabled = true
}
}
This post also helped me figure out how to work with map objects in Terraform: https://serverfault.com/questions/1063395/terraform-values-from-tfvars-are-not-loading-when-using-multi-level-maps
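As a follow-up, if two separate backend pools (one per region) are still wanted, as in the original question, one option is a two-level map and nested dynamic blocks. This is only a sketch built around a hypothetical backend_pools variable that is not part of the code above:
variable "backend_pools" {
  type = map(object({
    load_balancing_name = string
    health_probe_name   = string
    backends = map(object({
      address     = string
      host_header = string
      http_port   = number
      https_port  = number
      priority    = number
      weight      = number
      enabled     = bool
    }))
  }))
}

dynamic "backend_pool" {
  for_each = var.backend_pools
  content {
    # the map key (e.g. "uksouth") becomes the pool name
    name                = backend_pool.key
    load_balancing_name = backend_pool.value.load_balancing_name
    health_probe_name   = backend_pool.value.health_probe_name
    dynamic "backend" {
      for_each = backend_pool.value.backends
      content {
        address     = backend.value.address
        host_header = backend.value.host_header
        # remaining arguments follow the same backend.value.* pattern
      }
    }
  }
}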
I am trying to use for_each over a map(object) type to create vSphere VMs using Terraform. Below is the code I have written.
instances.tf
resource "vsphere_virtual_machine" "vm" {
for_each = var.virtual_machines
# vm-name
name = each.key
resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
tags = [data.vsphere_tag.tag[each.key].id]
guest_id = data.vsphere_virtual_machine.template.guest_id
scsi_type = data.vsphere_virtual_machine.template.scsi_type
# guest_id = data.vsphere_virtual_machine.template[each.key].guest_id
# scsi_type = data.vsphere_virtual_machine.template[each.key].scsi_type
num_cpus = each.value.system_cores
memory = each.value.system_memory
wait_for_guest_ip_timeout = 0
wait_for_guest_net_timeout = 0
#Network
network_interface {
network_id = data.vsphere_network.network.id
adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0]
# adapter_type = data.vsphere_virtual_machine.template[each.key].network_interface_types[0]
}
#Storage
disk {
label = each.value.disk_label[0]
size = each.value.system_disk_size
thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
# thin_provisioned = data.vsphere_virtual_machine.template[each.key].disks.0.thin_provisioned
}
disk {
label = each.value.disk_label[1]
size = each.value.system_disk_size
unit_number = 1
thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
# thin_provisioned = data.vsphere_virtual_machine.template[each.key].disks.0.thin_provisioned
}
#cloning from template
clone {
template_uuid = data.vsphere_virtual_machine.template.id
# template_uuid = data.vsphere_virtual_machine.template[each.key].id
customize {
linux_options {
host_name = each.value.system_name
domain = each.value.system_domain
}
network_interface {
ipv4_address = each.value.system_ipv4_address
ipv4_netmask = each.value.system_ipv4_netmask
}
ipv4_gateway = each.value.system_ipv4_gateway
}
}
}
I have declared the other values, and this is the map(object) I have in variables.tf:
variable "virtual_machines" {
type = map(object({
system_disk_size = number
system_cores = number
system_memory = number
system_ipv4_address = string
system_name = string
system_domain = string
vsphere_tag_category = string
vsphere_tag = string
disk_label = list(string)
system_ipv4_address = list(string)
}))
}
terraform.tfvars
virtual_machines = {
server-1 = {
system_cores = 2
system_memory = 2048
system_ipv4_address = ""
system_disk_size = 140
system_name = "terraformvm"
system_domain = "example.com"
vsphere_tag_category = "test_category"
vsphere_tag = "test_tag"
disk_label = ["disk0", "disk1"]
system_ipv4_address = ["ip1", "1p2"]
}
}
But I am getting the below error.
│ Error: Incorrect attribute value type
│
│ on Instances.tf line 57, in resource "vsphere_virtual_machine" "vm":
│ 57: ipv4_address = each.value.system_ipv4_address
│ ├────────────────
│ │ each.value.system_ipv4_address is list of string with 2 elements
│
│ Inappropriate value for attribute "ipv4_address": string required.
Can anyone tell me how to access each value in system_ipv4_address? Thanks in advance.
If you need the whole list:
each.value.system_ipv4_address[*]
If you need the first element, for example:
each.value.system_ipv4_address[0]
The ipv4_address attribute requires a string, so if you need to use both IP addresses you need to define multiple network interfaces and apply one element to each one.
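A minimal sketch of what that could look like inside the clone/customize block, assuming system_ipv4_address stays a list(string) with one address per interface (each customize network_interface also needs a matching top-level network_interface block on the VM):
customize {
  linux_options {
    host_name = each.value.system_name
    domain    = each.value.system_domain
  }
  network_interface {
    # first address in the list goes to the first interface
    ipv4_address = each.value.system_ipv4_address[0]
    ipv4_netmask = each.value.system_ipv4_netmask
  }
  network_interface {
    # second address goes to the second interface
    ipv4_address = each.value.system_ipv4_address[1]
    ipv4_netmask = each.value.system_ipv4_netmask
  }
  ipv4_gateway = each.value.system_ipv4_gateway
}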
The error is in terraform.tfvars; the correct way is to write each VM as its own entry in the map. This works for me:
virtual_machines = {
server-1 = {
system_cores = 2
system_memory = 2048
system_ipv4_address = "10.10.10.1"
system_disk_size = 140
system_name = "terraformvm-1"
system_domain = "example.com"
vsphere_tag_category = "test_category"
vsphere_tag = "test_tag"
disk_label = ["disk0", "disk1"]
}
server-2 = {
system_cores = 2
system_memory = 2048
system_ipv4_address = "10.10.10.2"
system_disk_size = 140
system_name = "terraformvm-2"
system_domain = "example.com"
vsphere_tag_category = "test_category"
vsphere_tag = "test_tag"
disk_label = ["disk0", "disk1"]
}
}
Is there a way to use the variables below in a for loop to build target_groups? I am trying to combine the prefix with the target_groups variable in a for loop. I have also tested for_each. target_groups expects a list, but for_each does not give the expected result.
variable "prefix" {
description = "NLB Prefix"
type = any
default = "test-target"
}
variable "target_groups" {
description = "NLB"
type = any
default = {
tg1 = {
name_prefix = "test"
backend_protocol = "TCP"
backend_port = 443
target_type = "ip"
deregistration_delay = 10
preserve_client_ip = true
stickiness = {
enabled = true
type = "source_ip"
}
targets = {
appl1 = {
target_id = "191.11.11.11"
port = 443
}
}
},
}
}
I tried the following with for_each:
module "g-appl_nlb" {
source = "../../modules/compute/lb"
name = format("%s-g-appl-nlb", var.name_prefix)
load_balancer_type = "network"
vpc_id = data.aws_vpc.target_vpc.id
...
target_groups = [
for_each = var.target_groups
name_prefix = var.prefix
backend_protocol = each.value["backend_protocol"]
backend_port = each.value["backend_port"]
target_type = each.value["target_type"]
deregistration_delay = each.value["deregistration_delay"]
preserve_client_ip = each.value["preserve_client_ip"]
stickiness = each.value["stickiness"]
]
....
Basically, I managed to solve this with the approach below.
locals {
target_groups = flatten([
for tg_data in var.target_groups: {
name_prefix = var.name_prefix
backend_protocol = tg_data.backend_protocol
backend_port = tg_data.backend_port
target_type = tg_data.target_type
deregistration_delay = tg_data.deregistration_delay
preserve_client_ip = tg_data.preserve_client_ip
....
])
}
module "g-appl_nlb" {
source = "../../modules/compute/lb"
name = format("%s-g-appl-nlb", var.name_prefix)
load_balancer_type = "network"
vpc_id = data.aws_vpc.target_vpc.id
...
target_groups = local.target_groups
I am deploying a number of AWS application load balancers by feeding a nested map from locals.tf to a module configuring the load-balancers.
locals {
lb_vars = {
alb1 = {
load_balancer_type = "application"
listener_port = 443
listener_protocol = "https"
internal = false
subnets = var.subnet1
backends = {
backend1 = {
port = "8080"
path = ["/endpoint1/backend1*"]
protocol = "http"
protocol_version = "http1"
health_check_enabled = true
health_check_interval = 10
health_check_port = 19808
health_check_path = "/health"
health_check_protocol = "http"
},
backend2 = {
port = "8081"
path = ["/endpoint1/backend2*"]
protocol = "http"
protocol_version = "http1"
health_check_enabled = true
health_check_interval = 10
health_check_port = 19809
health_check_path = "/health"
health_check_protocol = "http"
},
}
},
alb2 = {
load_balancer_type = "application"
listener_port = 443
listener_protocol = "https"
internal = false
subnets = var.subnet1
backends = {
backend1 = {
port = "8082"
path = ["/endpoint2/backend1*"]
protocol = "http"
protocol_version = "http1"
health_check_enabled = true
health_check_interval = 10
health_check_port = 19810
health_check_path = "/health"
health_check_protocol = "http"
},
backend2 = {
port = "8083"
path = ["/endpoint2/backend2*"]
protocol = "http"
protocol_version = "http1"
health_check_enabled = true
health_check_interval = 10
health_check_port = 19811
health_check_path = "/health"
health_check_protocol = "http"
},
}
}
}
}
Resource in load-balancer module:
resource "aws_lb" "lb" {
for_each = var.lb_vars
name = "${each.key}-${var.env_name}"
internal = try(each.value.internal, "false")
load_balancer_type = try(each.value.load_balancer_type, "application")
security_groups = aws_security_group.lb_security_group[each.key]
subnets = each.value.subnets
enable_deletion_protection = false
tags = "Name" = "${each.key}-${var.env_name}"
}
As one can see, there are a number of parameters which I would prefer not to define for each AWS LB because they are typically the defaults, but if I remove one of the parameters I get the following error:
Error: Invalid value for module argument
The given value is not suitable for child module variable "lb_vars" defined
at lb/variables.tf:41,1-21: all map elements must have the same type.
Load-balancer Module
module "lb" {
source = "./lb"
env_name = var.env_name
full_env_name = local.full_env_name
subnet_ids = local.subnet_ids
vpc_id = data.aws_vpc.vpc.id
external_zone_id = data.aws_route53_zone.external.zone_id
common_tags = local.common_tags
env_cert_arn = data.aws_acm_certificate.wildcard_cert.arn
lb_params = local.lb_params
}
Variables.tf in load-balancer modules (line 41 as per error)
variable "lb_params" {
type = map
description = "LB parameters"
}
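One way to let individual elements of the map omit parameters that usually take the defaults is to declare the module variable with an object type that has optional attributes. This is only a sketch, assuming Terraform 1.3 or later (where optional() attributes with defaults are available) and reusing the attribute names from the locals above:
variable "lb_params" {
  description = "LB parameters"
  type = map(object({
    listener_port      = number
    listener_protocol  = string
    subnets            = any
    backends           = any
    # attributes with defaults can be omitted per element
    internal           = optional(bool, false)
    load_balancer_type = optional(string, "application")
  }))
}
With a type like this, elements that leave out internal or load_balancer_type still type-check and pick up the defaults, so the try() calls in the resource become unnecessary. On older Terraform versions, declaring the variable as type = any and keeping the try() calls is a looser workaround, at the cost of type checking.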
#Get existing subnet properties
module "subnet" {
#source = "git::git#bitbucket.org:exium-c2/azure-registery.git/az-sub"
source = "C:\\Users\\harip\\azure-registery\\az-sub"
subnet_prefix4 = var.subnet_prefix4
subnet_prefix6 = var.subnet_prefix6
rg = var.rg-name
location = var.rg-location
vnet-name = data.azurerm_virtual_network.vnet.name
routeipv4 = data.azurerm_route_table.routeipv4.id
routeipv6 = data.azurerm_route_table.routeipv6.id
mgmt-sg-id = data.azurerm_network_security_group.sg.id
nwu-sg-id = data.azurerm_network_security_group.sg1.id
}
module "nics" {
#source = "git::git#bitbucket.org:exium-c2/azure-registery.git/az-nic"
source = "C:\\Users\\harip\\azure-registery\\az-nic"
#nic-name = var.nic-name
rg-name = var.rg-name
rg-location = var.rg-location
#count= "${length(var.subnetwork-subnetid)}"
subnetwork-subnetid = module.subnet.subnetwork-subnetid
subnetwork6-subnetid = module.subnet.subnetwork6-subnetid
depends_on = [ module.subnet.subnetwork,module.subnet.subnetwork6 ]
}
module "fpm" {
#source = "git::git#bitbucket.org:exium-c2/azure-registery.git/az-compute-fpm"
source = "C:\\Users\\harip\\azure-registery\\az-compute-fpm"
#count = "${length(var.nic1-id)}"
vm-name = var.vm-name
size = var.size
user-name = var.user-name
rg-name = var.rg-name
rg-location = var.rg-location
nic1-id = module.nics.nic1-id
nic2-id = module.nics.nic2-id
}
I plan to create multiple instances along with their subnets and NICs, so I plan to use modules with the depends_on meta-argument.
error:
Error: Module does not support depends on
Please help with this. Thanks in advance.
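For reference, module-level depends_on is only supported in Terraform 0.13 and later, and its entries must reference whole modules or resources rather than individual module outputs. A minimal sketch of the 0.13+ form, reusing the module call from above:
module "nics" {
  source = "C:\\Users\\harip\\azure-registery\\az-nic"

  rg-name              = var.rg-name
  rg-location          = var.rg-location
  subnetwork-subnetid  = module.subnet.subnetwork-subnetid
  subnetwork6-subnetid = module.subnet.subnetwork6-subnetid

  # reference the child module as a whole, not its individual outputs
  depends_on = [module.subnet]
}
In many cases the implicit dependency created by passing module.subnet outputs as arguments already enforces the right ordering, so the explicit depends_on may not be needed at all.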