Not getting any external outbound traffic from nodes in an Azure VM Scale Set behind a load balancer - azure

I'm experiencing difficulties accessing external resources from nodes (RHEL) configured in a VM Scale Set.
To sketch the environment I'm describing with Azure Resource Manager templates, I'm looking to create:
1 common virtualNetwork
1 frontend VM (running RHEL, working as intended)
1 cluster (VMSS) running 2 nodes (RHEL), spawned in the same private subnet as the frontend VM
1 load balancer that should work as a NAT gateway (but it's not working this way); it has an external IP, an inboundNatPool (which works), and a backendAddressPool (in which the nodes are successfully registered)
1 Network Security Group managing access to ports (set to allow all outbound connections)
As a footnote: I'm comfortable writing AWS CloudFormation files in YAML, so I'm handling Azure Resource Manager templates in a similar way, for the sake of readability and the added ability to put comments in my templates.
An example of my VMSS config (short snippet):
... # (the YAML template is first converted to JSON and then deployed using the Azure CLI)
# Cluster
# -------
# Scale Set
# ---------
# | VM Scale Set can not connect to external sources
# |
- type: Microsoft.Compute/virtualMachineScaleSets
  name: '[variables(''vmssName'')]'
  location: '[resourceGroup().location]'
  apiVersion: '2017-12-01'
  dependsOn:
    - '[variables(''vnetName'')]'
    - '[variables(''loadBalancerName'')]'
    - '[variables(''networkSecurityGroupName'')]'
  sku:
    capacity: '[variables(''instanceCount'')]' # Amount of nodes to be spawned
    name: Standard_A2_v2
    tier: Standard
  # zones: # If zones are specified, no sku can be chosen
  #   - '1'
  properties:
    overprovision: 'true'
    upgradePolicy:
      mode: Manual
    virtualMachineProfile:
      networkProfile:
        networkInterfaceConfigurations:
          - name: '[variables(''vmssNicName'')]'
            properties:
              ipConfigurations:
                - name: '[variables(''ipConfigName'')]'
                  properties:
                    loadBalancerBackendAddressPools:
                      - id: '[variables(''lbBackendAddressPoolsId'')]'
                    loadBalancerInboundNatPools:
                      - id: '[variables(''lbInboundNatPoolsId'')]'
                    subnet:
                      id: '[variables(''subnetId'')]'
              primary: true
              networkSecurityGroup:
                id: '[variables(''networkSecurityGroupId'')]'
      osProfile:
        computerNamePrefix: '[variables(''vmssName'')]'
        adminUsername: '[parameters(''sshUserName'')]'
        # adminPassword: '[parameters(''adminPassword'')]'
        linuxConfiguration:
          disablePasswordAuthentication: True
          ssh:
            publicKeys:
              - keyData: '[parameters(''sshPublicKey'')]'
                path: '[concat(''/home/'',parameters(''sshUserName''),''/.ssh/authorized_keys'')]'
      storageProfile:
        imageReference: '[variables(''clusterImageReference'')]'
        osDisk:
          caching: ReadWrite
          createOption: FromImage
...
The Network Security Group referenced from the template above is:
# NetworkSecurityGroup
# --------------------
- type: Microsoft.Network/networkSecurityGroups
  name: '[variables(''networkSecurityGroupName'')]'
  apiVersion: '2017-10-01'
  location: '[resourceGroup().location]'
  properties:
    securityRules:
      - name: remoteConnection
        properties:
          priority: 101
          access: Allow
          direction: Inbound
          protocol: Tcp
          description: Allow SSH traffic
          sourceAddressPrefix: '*'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          destinationPortRange: '22'
      - name: allow_outbound_connections
        properties:
          description: This rule allows outbound connections
          priority: 200
          access: Allow
          direction: Outbound
          protocol: '*'
          sourceAddressPrefix: 'VirtualNetwork'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          destinationPortRange: '*'
And the load balancer, where I assume the error lies, is described as:
# Loadbalancer as NatGateway
# --------------------------
- type: Microsoft.Network/loadBalancers
  name: '[variables(''loadBalancerName'')]'
  apiVersion: '2017-10-01'
  location: '[resourceGroup().location]'
  sku:
    name: Standard
  dependsOn:
    - '[variables(''natIPAddressName'')]'
  properties:
    backendAddressPools:
      - name: '[variables(''lbBackendPoolName'')]'
    frontendIPConfigurations:
      - name: LoadBalancerFrontEnd
        properties:
          publicIPAddress:
            id: '[variables(''natIPAddressId'')]'
    inboundNatPools:
      - name: '[variables(''lbNatPoolName'')]'
        properties:
          backendPort: '22'
          frontendIPConfiguration:
            id: '[variables(''frontEndIPConfigID'')]'
          frontendPortRangeStart: '50000'
          frontendPortRangeEnd: '50099'
          protocol: tcp
I keep reading articles about configuring SNAT with port masquerading, but I'm missing relevant examples of such a setup.
Any help is greatly appreciated.

It took a lot of searching, but the Azure article about Azure Load Balancer outbound connections (Scenario #2) states that a load-balancing rule (and a complementary health probe) is necessary for SNAT to function.
The new code for the load balancer became:
...
- type: Microsoft.Network/loadBalancers
  name: '[variables(''loadBalancerName'')]'
  apiVersion: '2017-10-01'
  location: '[resourceGroup().location]'
  sku:
    name: Standard
  dependsOn:
    - '[variables(''natIPAddressName'')]'
  properties:
    backendAddressPools:
      - name: '[variables(''lbBackendPoolName'')]'
    frontendIPConfigurations:
      - name: LoadBalancerFrontEnd
        properties:
          publicIPAddress:
            id: '[variables(''natIPAddressId'')]'
    probes: # Needed for the loadBalancingRule to work
      - name: '[variables(''lbProbeName'')]'
        properties:
          protocol: Tcp
          port: 22
          intervalInSeconds: 5
          numberOfProbes: 2
    loadBalancingRules: # Needed for SNAT to work
      - name: '[concat(variables(''loadBalancerName''),''NatRule'')]'
        properties:
          disableOutboundSnat: false
          frontendIPConfiguration:
            id: '[variables(''frontEndIPConfigID'')]'
          backendAddressPool:
            id: '[variables(''lbBackendAddressPoolsId'')]'
          probe:
            id: '[variables(''lbProbeId'')]'
          protocol: tcp
          frontendPort: 80
          backendPort: 80
...

Related

Can't connect externally to Neo4j server in AKS cluster using Neo4j browser

I have an AKS cluster with a Node.js server connecting to a Neo4j standalone instance, all deployed with Helm.
I installed an ingress-nginx controller, referenced a default Let's Encrypt certificate, and enabled TCP ports with Terraform as follows:
resource "helm_release" "nginx" {
name = "ingress-nginx"
repository = "ingress-nginx"
# repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx/ingress-nginx"
namespace = "default"
set {
name = "tcp.7687"
value = "default/cluster:7687"
}
set {
name = "tcp.7474"
value = "default/cluster:7474"
}
set {
name = "tcp.7473"
value = "default/cluster:7473"
}
set {
name = "tcp.6362"
value = "default/cluster-admin:6362"
}
set {
name = "tcp.7687"
value = "default/cluster-admin:7687"
}
set {
name = "tcp.7474"
value = "default/cluster-admin:7474"
}
set {
name = "tcp.7473"
value = "default/cluster-admin:7473"
}
set {
name = "controller.extraArgs.default-ssl-certificate"
value = "default/tls-secret"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-dns-label-name"
value = "xxx.westeurope.cloudapp.azure.com"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
I then have an Ingress with paths pointing to the Neo4j services, so at https://xxx.westeurope.cloudapp.azure.com/neo4j-tcp-http/browser/ I can get to the browser.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
    # nginx.ingress.kubernetes.io/rewrite-target: /
    # certmanager.k8s.io/acme-challenge-type: http01
    nginx.ingress.kubernetes/cluster-issuer: letsencrypt-issuer
    ingress.kubernetes.io/ssl-redirect: "true"
    # kubernetes.io/tls-acme: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - xxxx.westeurope.cloudapp.azure.com
      secretName: tls-secret
  rules:
    # - host: xxx.westeurope.cloud.app.azure.com # dns from Azure PublicIP
    ### Node.js server
    - http:
        paths:
          - path: /(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
    - http:
        paths:
          - path: /server(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
    ##### Neo4j
    - http:
        paths:
          # 502 bad gateway
          # /any character 502 bad gateway
          - path: /neo4j-tcp-bolt(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                # neo4j chart
                # name: cluster
                # neo4j-standalone chart
                name: neo4j
                port:
                  # name: tcp-bolt
                  number: 7687
    - http:
        paths:
          # /browser/ shows the browser
          # /any character shows login to xxx.westeurope.cloudapp.azure.com:443 from https, :80 from http
          - path: /neo4j-tcp-http(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                # neo4j chart
                # name: cluster
                # neo4j-standalone chart
                name: neo4j
                port:
                  # name: tcp-http
                  number: 7474
    - http:
        paths:
          # 502 bad gateway
          # /any character 502 bad gateway
          - path: /neo4j-tcp-https(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                # neo4j chart
                # name: cluster
                # neo4j-standalone chart
                name: neo4j
                port:
                  # name: tcp-https
                  number: 7473
I can get to the Neo4j Browser at https://xxx.westeurope.cloudapp.azure.com/neo4j-tcp-http/browser/, but using the connect URL bolt+s//server.bolt it won't connect to the server, giving the error ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver.
Now I'm guessing that is because the Neo4j bolt connector is not using the certificate used by the ingress-nginx controller:
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe secret tls-secret
Name:         tls-secret
Namespace:    default
Labels:       controller.cert-manager.io/fao=true
Annotations:  cert-manager.io/alt-names: xxx.westeurope.cloudapp.azure.com
              cert-manager.io/certificate-name: tls-certificate
              cert-manager.io/common-name: xxx.westeurope.cloudapp.azure.com
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-group:
              cert-manager.io/issuer-kind: ClusterIssuer
              cert-manager.io/issuer-name: letsencrypt-issuer
              cert-manager.io/uri-sans:
Type:         kubernetes.io/tls

Data
====
tls.crt:  5648 bytes
tls.key:  1679 bytes
I tried to use it by overriding the chart values, but then the Neo4j driver from the Node.js server won't connect to the server either:
ssl:
  # setting per "connector" matching neo4j config
  bolt:
    privateKey:
      secretName: tls-secret # we set up the template to grab `private.key` from this secret
      subPath: tls.key # we specify the privateKey value name to get from the secret
    publicCertificate:
      secretName: tls-secret # we set up the template to grab `public.crt` from this secret
      subPath: tls.crt # we specify the publicCertificate value name to get from the secret
    trustedCerts:
      sources: [ ] # a sources array for a projected volume - this allows someone to (relatively) easily mount multiple public certs from multiple secrets for example.
    revokedCerts:
      sources: [ ] # a sources array for a projected volume
  https:
    privateKey:
      secretName: tls-secret
      subPath: tls.key
    publicCertificate:
      secretName: tls-secret
      subPath: tls.crt
    trustedCerts:
      sources: [ ]
    revokedCerts:
      sources: [ ]
Is there a way to use it, or should I set up another certificate just for Neo4j? If so, what would be the dnsNames to set on it?
Is there something else I'm doing wrong?
Thank you very much.
From what I can gather from your information, the problem seems to be that you're trying to expose the bolt port behind an ingress. Ingresses are implemented as an L7 (protocol-aware) reverse proxy and manage load balancing, etc. The bolt protocol has its own load balancing and routing for cluster applications, so you will need to expose the network service directly for every Neo4j instance you are running.
Check out this part of the documentation for more information:
https://neo4j.com/docs/operations-manual/current/kubernetes/accessing-neo4j/#access-outside-k8s
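For illustration, a minimal sketch of that direct exposure: one LoadBalancer Service per Neo4j instance, publishing bolt without the L7 ingress in between. The Service name here is made up, and the selector label is an assumption; it has to match your Neo4j pod labels (the chart uses the neo4j.name value as the app label):
apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt-external # hypothetical name
  namespace: default
spec:
  type: LoadBalancer # exposes bolt directly, no ingress in between
  selector:
    app: neo4j # assumption: must match your Neo4j pod labels
  ports:
    - name: tcp-bolt
      port: 7687
      targetPort: 7687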
Finally, after a few days of going in circles, I found what the problems were.
First, using a Staging certificate will cause the Neo4j bolt connection to fail, as it's not trusted, with the error:
ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver. Please use your browsers development console to determine the root cause of the failure. Common reasons include the database being unavailable, using the wrong connection URL or temporary network problems. If you have enabled encryption, ensure your browser is configured to trust the certificate Neo4j is configured to use. WebSocket readyState is: 3
I found this explained here: https://grishagin.com/neo4j/2022/03/29/neo4j-websocket-issue.html
Then, I had failed to assign a general listening address to the bolt connector, which by default listens only on 127.0.0.1:7687 (https://neo4j.com/docs/operations-manual/current/configuration/connectors/).
To listen for bolt connections on all network interfaces (0.0.0.0), I added server.bolt.listen_address: "0.0.0.0:7687" to the Neo4j chart values config.
Next, since I'm connecting the default neo4j ClusterIP service TCP ports to the ingress controller's exposed TCP connections through the Ingress, as described at https://neo4j.com/labs/neo4j-helm/1.0.0/externalexposure/ (an alternative to using a LoadBalancer), the Neo4j LoadBalancer service is not needed, so services.neo4j.enabled gets set to "false". In my tests I actually found that if you leave it enabled, bolt won't connect despite everything being set correctly.
Other missing Neo4j config was server.bolt.enabled: "true", server.bolt.tls_level: "REQUIRED", dbms.ssl.policy.bolt.client_auth: "NONE", and dbms.ssl.policy.bolt.enabled: "true"; the complete list of config options is here: https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/
Neo4j chart's values for ssl config were fine.
So now I can use the (renamed for brevity) path /neo4j/browser/ to serve the Neo4j Browser app, and either the /bolt path as the browser's connect URL, or the public IP's <DSN>:<bolt port>.
You are connected as user neo4j
to bolt+s://xxxx.westeurope.cloudapp.azure.com/bolt
Connection credentials are stored in your web browser.
Hope this explanation and the code recap below will help others.
Cheers.
ingress controller
resource "helm_release" "nginx" {
name = "ingress-nginx"
namespace = "default"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
set {
name = "version"
value = "4.4.2"
}
### expose tcp connections for neo4j service
### bolt url connection port
set {
name = "tcp.7687"
value = "default/neo4j:7687"
}
### http browser app port
set {
name = "tcp.7474"
value = "default/neo4j:7474"
}
set {
name = "controller.extraArgs.default-ssl-certificate"
value = "default/tls-secret"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-dns-label-name"
value = "xxx.westeurope.cloudapp.azure.com"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes/cluster-issuer: letsencrypt-issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - xxx.westeurope.cloudapp.azure.com
      secretName: tls-secret
  rules:
    ### Node.js server
    - http:
        paths:
          - path: /(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
    - http:
        paths:
          - path: /server(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
    ##### Neo4j
    - http:
        paths:
          - path: /bolt(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: neo4j
                port:
                  # name: tcp-bolt
                  number: 7687
    - http:
        paths:
          - path: /neo4j(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: neo4j
                port:
                  # name: tcp-http
                  number: 7474
Values.yaml (Umbrella chart)
neo4j-db: # chart dependency alias
  nameOverride: "neo4j"
  fullnameOverride: 'neo4j'
  neo4j:
    # Name of your cluster
    name: "xxxx" # this will be the label: app: value for the service selector
    password: "xxxxx"
    ##
    passwordFromSecret: ""
    passwordFromSecretLookup: false
    edition: "community"
    acceptLicenseAgreement: "yes"
    offlineMaintenanceModeEnabled: false
    resources:
      cpu: "1000m"
      memory: "2Gi"
  volumes:
    data:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-data
        resources:
          requests:
            storage: 4Gi
    backups:
      mode: 'share' # share an existing volume (e.g. the data volume)
      share:
        name: 'logs'
    logs:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-logs
        resources:
          requests:
            storage: 4Gi
  services:
    # A LoadBalancer Service for external Neo4j driver applications and Neo4j Browser; this would create the "cluster-neo4j" svc
    neo4j:
      enabled: false
  config:
    server.bolt.enabled: "true"
    server.bolt.tls_level: "REQUIRED"
    server.bolt.listen_address: "0.0.0.0:7687"
    dbms.ssl.policy.bolt.client_auth: "NONE"
    dbms.ssl.policy.bolt.enabled: "true"
  startupProbe:
    failureThreshold: 1000
    periodSeconds: 50
  ssl:
    bolt:
      privateKey:
        secretName: tls-secret
        subPath: tls.key
      publicCertificate:
        secretName: tls-secret
        subPath: tls.crt
      trustedCerts:
        sources: [ ]
      revokedCerts:
        sources: [ ] # a sources array for a projected volume

keda: Azure Service Bus ScaledObject not scaling deployment - Scaling is not performed because triggers are not active

Trying to autoscale a pod based on inbound messages from Azure Service Bus using KEDA. The ScaledObject defined is:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: main-router-scaledobject
  namespace: rehmannazar-camel-dev
spec:
  minReplicaCount: 0
  maxReplicaCount: 10
  scaleTargetRef:
    name: mainrouter
    kind: Deployment
  triggers:
    - type: azure-servicebus
      metadata:
        topicName: topic4test
        subscriptionName: sub3
        messageCount: "10"
        activationMessageCount: "0"
      authenticationRef:
        name: trigger-auth-service
with trigger-auth-service defined as
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: trigger-auth-service
spec:
  secretTargetRef:
    - parameter: connection
      name: connectionsecret
      key: connection
and connectionsecret defines the connection string to Azure Service Bus.
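For reference, the secret is shaped like this (a sketch; the metadata name and key match the TriggerAuthentication above, and the connection string value is a placeholder for a Service Bus connection string with Manage rights, which the azure-servicebus scaler calls for):
apiVersion: v1
kind: Secret
metadata:
  name: connectionsecret
  namespace: rehmannazar-camel-dev
type: Opaque
stringData: # stringData lets you skip manual base64 encoding
  connection: Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<policy>;SharedAccessKey=<key>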
kubectl describe scaledobject main-router-scaledobject
shows the status:
Status:
  Conditions:
    Message:  ScaledObject is defined correctly and is ready for scaling
    Reason:   ScaledObjectReady
    Status:   True
    Type:     Ready
    Message:  Scaling is not performed because triggers are not active
    Reason:   ScalerNotActive
    Status:   False
    Type:     Active
    Message:  No fallbacks are active on this scaled object
    Reason:   NoFallbackFound
    Status:   False
    Type:     Fallback
  External Metric Names:
    s0-azure-servicebus-topic4test
  Health:
    s0-azure-servicebus-topic4test:
      Number Of Failures:  0
      Status:              Happy
  Hpa Name:                keda-hpa-main-router-scaledobject
  Original Replica Count:  1
  Scale Target GVKR:
    Group:     apps
    Kind:      Deployment
    Resource:  deployments
    Version:   v1
  Scale Target Kind:       apps/v1.Deployment
Events:
  Type    Reason                      Age                    From           Message
  ----    ------                      ----                   ----           -------
  Normal  KEDAScaleTargetDeactivated  3m53s (x191 over 98m)  keda-operator  Deactivated apps/v1.Deployment rehmannazar-camel-dev/mainrouter from 1 to 0
kubectl get ScaledObject main-router-scaledobject
NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE
main-router-scaledobject apps/v1.Deployment mainrouter 0 10 azure-servicebus trigger-auth-service True False False 101m
Yet the pods are not scaled to zero, and when posting messages on subscription sub3 the pods are not scaled up either. The pods are also not scaled down to zero when sub3 has no messages; there is always a single pod in the Running state. The only activity I'm observing is pods getting terminated and new pods getting started, but the replica count always remains 1. Is there something I missed in the KEDA configuration?
The KEDA configuration is working. The issue was KEDA's integration with Camel K.

Files in AIRFLOW_HOME (which is an Azure File Share MOUNT) are created as root

I have set up Airflow in Azure Cloud (Azure Container Apps) and attached an Azure File Share as an external mount/volume.
1. I ran the **airflow init** service; it created the airflow.cfg and `webserver_config.py` files in **AIRFLOW_HOME (/opt/airflow)**, which is actually the Azure mounted file system.
2. I ran the **airflow webserver** service; it created the `airflow-webserver.pid` file in **AIRFLOW_HOME (/opt/airflow)**, again on the Azure mounted file system.
Now the problem is that all the files created above are owned by root (user and group), not by the airflow user (50000), even though I set the env variable AIRFLOW_UID to 50000 when creating the container app. Because of this my webservers are not starting, throwing the error below:
PermissionError: [Errno 1] Operation not permitted: '/opt/airflow/airflow-webserver.pid'
Note: Azure Container Apps does not allow running root/sudo commands, otherwise I could have solved this problem with a simple chown command.
Another problem is that the Airflow configurations passed through environment variables are never picked up by Docker, e.g.:
- name: AIRFLOW__API__AUTH_BACKENDS
  value: 'airflow.api.auth.backend.basic_auth'
Your help is much appreciated!
YAML file that I use to create my container app:
id: /subscriptions/1234/resourceGroups/<my-res-group>/providers/Microsoft.App/containerApps/<app-name>
identity:
  type: None
location: eastus2
name: webservice
properties:
  configuration:
    activeRevisionsMode: Single
    registries: []
  managedEnvironmentId: /subscriptions/1234/resourceGroups/<my-res-group>/providers/Microsoft.App/managedEnvironments/container-app-env
  template:
    containers:
      - command:
          - /bin/bash
          - -c
          - exec /entrypoint airflow webserver
        env:
          - name: AIRFLOW__API__AUTH_BACKENDS
            value: 'airflow.api.auth.backend.basic_auth'
          - name: AIRFLOW__CELERY__BROKER_URL
            value: redis://:#myredis.redis.cache.windows.net:6379/0
          - name: AIRFLOW__CELERY__RESULT_BACKEND
            value: db+postgresql://user:pass#postres-db-servconn.postgres.database.azure.com/airflow?sslmode=require
          - name: AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION
            value: 'true'
          - name: AIRFLOW__CORE__EXECUTOR
            value: CeleryExecutor
          - name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
            value: postgresql+psycopg2://user:pass#postres-db-servconn.postgres.database.azure.com/airflow?sslmode=require
          - name: AIRFLOW__DATABASE__SQL_ALCHEMY_CONN
            value: postgresql+psycopg2://user:pass#postres-db-servconn.postgres.database.azure.com/airflow?sslmode=require
          - name: AIRFLOW__CORE__LOAD_EXAMPLES
            value: 'false'
          - name: AIRFLOW_UID
            value: 50000
        image: docker.io/apache/airflow:latest
        name: wsr
        volumeMounts:
          - volumeName: randaf-azure-files-volume
            mountPath: /opt/airflow
        probes: []
        resources:
          cpu: 0.25
          memory: 0.5Gi
    scale:
      maxReplicas: 3
      minReplicas: 1
    volumes:
      - name: randaf-azure-files-volume
        storageName: randafstorage
        storageType: AzureFile
resourceGroup: RAND
tags:
  tagname: ws-only
type: Microsoft.App/containerApps

Error in code to create a Google Virtual Machine that supports Android Studio

Google Labs provided me the following code as a .yaml file, to create a virtual machine that allows me to run Android Studio and get some info from the Android Studio emulator API into Logstash/Elasticsearch.
To be honest, I'm new to this and not sure why I'm getting the error. I think it's related to the text in '' and/or {}, but I'm not sure.
Any help is more than appreciated!
resources:
  - type: gcp-types/compute-v1:instances
    name: dev-vm
    properties:
      zone: {{ properties["zone"] }}
      machineType: zones/{{ properties["zone"] }}/machineTypes/n1-highmem-4
      disks:
        - boot: true
          autoDelete: true
          initializeParams:
            sourceImage: projects/qwiklabs-resources/global/images/vnc-android-pie-tensorflow
      networkInterfaces:
        - network: global/networks/default
          accessConfigs:
            - name: External NAT
              type: ONE_TO_ONE_NAT
      serviceAccounts:
        - email: default
          scopes:
            - 'https://www.googleapis.com/auth/cloud.useraccounts.readonly'
            - 'https://www.googleapis.com/auth/devstorage.read_only'
            - 'https://www.googleapis.com/auth/logging.write'
            - 'https://www.googleapis.com/auth/monitoring.write'
      tags:
        items:
          - "vnc"
      metadata:
        items:
          - key: startup-script-url
            value: https://storage.googleapis.com/cloud-training/gsp296/setup.sh
  - name: vnc-firewall-rule
    type: compute.v1.firewall
    properties:
      sourceRanges: [0.0.0.0/0]
      allowed:
        - IPProtocol: tcp
          ports:
            - 5901
            - 6901

How to redirect HTTP to HTTPS using GCP load balancer

I'm setting up my load balancer in GCP with 2 nodes (Apache httpd), with the domain lblb.tonegroup.net.
Currently my load balancer is working fine and traffic is switching over between the 2 nodes, but how do I configure it to redirect http://lblb.tonegroup.net to https://lblb.tonegroup.net?
Is it possible to configure this at the load balancer level, or do I need to configure it at the Apache level? I have a Google-managed SSL certificate installed, FYI.
Right now the redirection from HTTP to HTTPS is possible with the load balancer's Traffic Management.
Below is an example of how to set it up in the documentation:
https://cloud.google.com/load-balancing/docs/https/setting-up-traffic-management#console
Basically you will create two of each: forwarding rules, target proxies, and URL maps.
2 URL maps
In the 1st URL map you just set a redirect. The redirect settings are below; no backend service needs to be defined here:
httpsRedirect: true
redirectResponseCode: FOUND
In the 2nd URL map you have to define your backend services.
2 forwarding rules
The 1st forwarding rule serves HTTP requests, so port 80.
The 2nd forwarding rule serves HTTPS requests, so port 443.
2 target proxies
The 1st target proxy is a targetHttpProxy; the 1st forwarding rule forwards to it, and it is mapped to the 1st URL map.
The 2nd target proxy is a targetHttpsProxy; the 2nd forwarding rule forwards to it, and it is mapped to the 2nd URL map.
========================================================================
Below is a Cloud Deployment Manager example with Managed Certificates and Storage Buckets as the backend
storagebuckets-template.jinja
resources:
  - name: {{ properties["bucketExample"] }}
    type: storage.v1.bucket
    properties:
      storageClass: REGIONAL
      location: asia-east2
      cors:
        - origin: ["*"]
          method: [GET]
          responseHeader: [Content-Type]
          maxAgeSeconds: 3600
      defaultObjectAcl:
        - bucket: {{ properties["bucketExample"] }}
          entity: allUsers
          role: READER
      website:
        mainPageSuffix: index.html
backendbuckets-template.jinja
resources:
  - name: {{ properties["bucketExample"] }}-backend
    type: compute.beta.backendBucket
    properties:
      bucketName: $(ref.{{ properties["bucketExample"] }}.name)
      enableCdn: true
ipaddresses-template.jinja
resources:
  - name: lb-ipaddress
    type: compute.v1.globalAddress
sslcertificates-template.jinja
resources:
  - name: example
    type: compute.v1.sslCertificate
    properties:
      type: MANAGED
      managed:
        domains:
          - example1.com
          - example2.com
          - example3.com
loadbalancer-template.jinja
resources:
  - name: centralized-lb-http
    type: compute.v1.urlMap
    properties:
      defaultUrlRedirect:
        httpsRedirect: true
        redirectResponseCode: FOUND
  - name: centralized-lb-https
    type: compute.v1.urlMap
    properties:
      defaultService: {{ properties["bucketExample"] }}
      pathMatchers:
        - name: example
          defaultService: {{ properties["bucketExample"] }}
          pathRules:
            - service: {{ properties["bucketExample"] }}
              paths:
                - /*
      hostRules:
        - hosts:
            - example1.com
          pathMatcher: example
        - hosts:
            - example2.com
          pathMatcher: example
        - hosts:
            - example3.com
          pathMatcher: example
httpproxies-template.jinja
resources:
  - name: lb-http-proxy
    type: compute.v1.targetHttpProxy
    properties:
      urlMap: $(ref.centralized-lb-http.selfLink)
  - name: lb-https-proxy
    type: compute.v1.targetHttpsProxy
    properties:
      urlMap: $(ref.centralized-lb-https.selfLink)
      sslCertificates: [$(ref.example.selfLink)]
  - name: lb-http-forwardingrule
    type: compute.v1.globalForwardingRule
    properties:
      target: $(ref.lb-http-proxy.selfLink)
      IPAddress: $(ref.lb-ipaddress.address)
      IPProtocol: TCP
      portRange: 80-80
  - name: lb-https-forwardingrule
    type: compute.v1.globalForwardingRule
    properties:
      target: $(ref.lb-https-proxy.selfLink)
      IPAddress: $(ref.lb-ipaddress.address)
      IPProtocol: TCP
      portRange: 443-443
templates-bundle.yaml
imports:
  - path: backendbuckets-template.jinja
  - path: httpproxies-template.jinja
  - path: ipaddresses-template.jinja
  - path: loadbalancer-template.jinja
  - path: storagebuckets-template.jinja
  - path: sslcertificates-template.jinja
resources:
  - name: storagebuckets
    type: storagebuckets-template.jinja
    properties:
      bucketExample: example-sb
  - name: backendbuckets
    type: backendbuckets-template.jinja
    properties:
      bucketExample: example-sb
  - name: loadbalancer
    type: loadbalancer-template.jinja
    properties:
      bucketExample: $(ref.example-sb-backend.selfLink)
  - name: ipaddresses
    type: ipaddresses-template.jinja
  - name: httpproxies
    type: httpproxies-template.jinja
  - name: sslcertificates
    type: sslcertificates-template.jinja
$ gcloud deployment-manager deployments create infrastructure --config=templates-bundle.yaml > output
command output
NAME                     TYPE                             STATE      ERRORS  INTENT
centralized-lb-http      compute.v1.urlMap                COMPLETED  []
centralized-lb-https     compute.v1.urlMap                COMPLETED  []
example                  compute.v1.sslCertificate        COMPLETED  []
example-sb               storage.v1.bucket                COMPLETED  []
example-sb-backend       compute.beta.backendBucket       COMPLETED  []
lb-http-forwardingrule   compute.v1.globalForwardingRule  COMPLETED  []
lb-http-proxy            compute.v1.targetHttpProxy       COMPLETED  []
lb-https-forwardingrule  compute.v1.globalForwardingRule  COMPLETED  []
lb-https-proxy           compute.v1.targetHttpsProxy      COMPLETED  []
lb-ipaddress             compute.v1.globalAddress         COMPLETED  []
It is not possible to do that directly on the GCP load balancer.
One possibility is to perform the redirection in your backend service. The GCP load balancer adds an x-forwarded-proto header to requests, which is equal to http or https, so you can add a condition based on this header to perform the redirection.
I believe the previous answer provided by Alexandre is correct; currently it's not possible to redirect all HTTP traffic to HTTPS when using the HTTP(S) load balancer. I have found a feature request already submitted for this feature; you can access it and add your comment using this link.
You also mentioned that you are using a Google-managed SSL certificate, but the only workaround I found is to redirect at the server level, in which scenario you would have to use a self-managed SSL certificate.
To redirect HTTP URLs to HTTPS, do the following in the Apache server:
<VirtualHost *:80>
    ServerName www.example.com
    Redirect "/" "https://www.example.com/"
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.com
    # ... SSL configuration goes here
</VirtualHost>
You would have to configure this in the Apache server configuration file. Refer to the apache.org documentation on simple redirection for more details.
Maybe it's too late, but I had the same problem and here is my solution:
1. Configure two frontends on the GCP load balancer (HTTP and HTTPS).
2. Set port 80 (HTTP protocol) for communication to the backend service and the final VMs.
3. On the backend service, add the Google variable {tls_version} as an X-SSL-Protocol custom header.
4. On the final servers, perform the redirection based on the X-SSL-Protocol value: if it is empty (no HTTPS), redirect (301); otherwise do nothing.
You can check the header value on your web server or from an intermediate load balancer VM instance. In my case, with HAProxy:
frontend fe_http
    bind *:80
    mode http
    # check if the value is empty
    acl is_http res.hdr(X-SSL-Protocol) -m len 0
    # perform the redirection only if no value is found in the custom header
    redirect scheme https code 301 if is_http
    # when the redirect is performed, subsequent instructions are not reached
    default_backend bk_http1
If you use Terraform (highly recommended for GCP configuration), here's a sample config. This code creates two IP addresses (v4 and v6), which you would use in your HTTPS forwarding rules as well.
// HTTP -> HTTPS redirector
resource "google_compute_url_map" "http-to-https" {
  name = "my-http-to-https"

  default_url_redirect {
    https_redirect         = true
    strip_query            = false
    redirect_response_code = "PERMANENT_REDIRECT"
  }
}

resource "google_compute_target_http_proxy" "proxy" {
  name    = "my-http-proxy"
  url_map = google_compute_url_map.http-to-https.self_link
}

resource "google_compute_global_forwarding_rule" "http-v4" {
  name       = "my-fwrule-http-v4"
  target     = google_compute_target_http_proxy.proxy.self_link
  ip_address = google_compute_global_address.IPv4.address
  port_range = "80"
}

resource "google_compute_global_forwarding_rule" "http-v6" {
  name       = "my-fwrule-http-v6"
  target     = google_compute_target_http_proxy.proxy.self_link
  ip_address = google_compute_global_address.IPv6.address
  port_range = "80"
}

resource "google_compute_global_address" "IPv4" {
  name = "my-ip-v4-address"
}

resource "google_compute_global_address" "IPv6" {
  name       = "my-ip-v6-address"
  ip_version = "IPV6"
}
At a high level, to redirect HTTP traffic to HTTPS, you must do the following:
Create HTTPS load balancer LB1 (called web-map-https here).
Create HTTP load balancer LB2 (no backend; called web-map-http here) with the same IP address used in LB1 and a redirect configured in the URL map, as sketched below.
Please check:
https://cloud.google.com/load-balancing/docs/https/setting-up-http-https-redirect
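For orientation, the redirect-only URL map of LB2 can be written as a YAML import file. This is a sketch following the linked guide (the name web-map-http comes from that guide); it would be loaded with gcloud compute url-maps import:
kind: compute#urlMap
name: web-map-http
defaultUrlRedirect:
  # redirect every request to HTTPS with a 301
  redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
  httpsRedirect: true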
Perhaps I'm late to the game but I use the following:
[ingress.yaml]:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-external-ip
    networking.gke.io/managed-certificates: my-google-managed-certs
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/v1beta1.FrontendConfig: redirect-frontend-config
spec:
  defaultBackend:
    service:
      name: online-service
      port:
        number: 80
[redirect-frontend-config.yaml]
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: redirect-frontend-config
spec:
  redirectToHttps:
    enabled: true
I'm using the default "301 Moved Permanently", but if you'd like to use something else, just add a row under redirectToHttps containing responseCodeName: <CHOSEN REDIRECT RESPONSE CODE> (a sketch follows this list), where the options are:
MOVED_PERMANENTLY_DEFAULT to return a 301 redirect response code (the default).
FOUND to return a 302 redirect response code.
SEE_OTHER to return a 303 redirect response code.
TEMPORARY_REDIRECT to return a 307 redirect response code.
PERMANENT_REDIRECT to return a 308 redirect response code.
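As an illustration, a sketch of the same FrontendConfig switched to a 302 (FOUND) redirect:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: redirect-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: FOUND # returns a 302 instead of the default 301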
Further reading at
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
