Ingress-controller forcing SSL on port 80 (HTTP) - azure

I'm running a Node.js application in Kubernetes and I'm using an ingress controller to terminate SSL on port 443. It's working; however, it also forces SSL on port 80, which I don't want.
I've tried adding nginx.ingress.kubernetes.io/ssl-redirect: "false" to my ingress, but it still forces SSL on port 80. This is what the ingress config looks like:
{
"kind": "Ingress",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "http-rest-api-ingress",
"namespace": "default",
"selfLink": "/apis/extensions/v1beta1/namespaces/default/ingresses/http-rest-api-ingress",
"uid": "f89ad4cd-9d9d-11e9-9d4d-1aa06e634f15",
"resourceVersion": "1990981",
"generation": 2,
"creationTimestamp": "2019-07-03T14:22:21Z",
"annotations": {
"nginx.ingress.kubernetes.io/rewrite-target": "/$1",
"nginx.ingress.kubernetes.io/ssl-redirect": "false"
}
},
"spec": {
"rules": [
{
"host": "x.x.com",
"http": {
"paths": [
{
"backend": {
"path": "/(.*)",
"serviceName": "http-rest-api",
"servicePort": 4500
}
}
]
}
}
]
},
"status": {
"loadBalancer": {
"ingress": [
{}
]
}
}
}
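For reference, here are the same rules expressed as YAML with the redirect-related annotations spelled out. This is a sketch only; nginx.ingress.kubernetes.io/force-ssl-redirect is shown explicitly because it takes precedence over ssl-redirect when it is set to "true" anywhere, so it is worth confirming it is false or absent:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-rest-api-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # force-ssl-redirect overrides ssl-redirect if set to "true"; keep it false or remove it
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
spec:
  rules:
  - host: x.x.com
    http:
      paths:
      - path: /(.*)
        backend:
          serviceName: http-rest-api
          servicePort: 4500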


Mongo client in Node doesn't receive connection response

I currently have a set of Node.js modules (containers) that connect to a local MongoDB (also a container).
All modules work perfectly, but with one of them I have the following problem:
https://i.stack.imgur.com/xuFqw.png
The connection seems to be accepted by Mongo, but the client never receives the response and keeps waiting until it times out after 30 seconds (see the timestamps in the log).
The code of the client is the following:
async function connectToMongo() {
try {
const mongoUrl = 'mongodb://mongo:27017/LocalDB'
let result = await MongoClient.connect(mongoUrl, { useUnifiedTopology: true })
logger.info("Connected to Mongo DB")
return result
} catch (err) {
logger.error("Cannot connect to Mongo - " + err)
throw err
}
}
It is exactly the same as in the other modules (which work).
Mongo is started without any settings or authentication.
I have tried using the following hostnames:
mongo: the name of the container in which Mongo is running
host.docker.internal: to address the host, since port 27017 of the container is bound to the host port
I honestly don't know what to try anymore... does anybody have an idea of what the problem could be?
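One quick sanity check, run from the device, is to verify name resolution and TCP reachability from inside the client container. The container name placeholder and the availability of getent/nc inside the image are assumptions:
# resolve the hostname and probe the port from inside the client container
docker exec -it <client-container> sh -c "getent hosts mongo && nc -zv mongo 27017"
# check what mongo itself logged around the connection attempt
docker logs --tail 50 mongo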
UPDATE: as requested, I'm adding info about how I'm starting the containers. I originally omitted this part because they are started as Azure IoT Edge modules. Here is the relevant part of the deployment template:
"modules": {
"mongo": {
"settings": {
"image": "mongodb:0.0.1-replset-noauth",
"createOptions": "{\"HostConfig\":
{\"PortBindings\":{\"27017/tcp\":[{\"HostPort\":\"27017\"}]}}}"
},
"type": "docker",
"version": "1.0",
"status": "running",
"restartPolicy": "always"
},
"healthcheck": {
"settings": {
"image": "${MODULES.HealthCheck.debug}",
"createOptions": {
"HostConfig": {
"PortBindings": {
"9229/tcp": [
{
"HostPort": "9233"
}
]
}
}
}
},
"type": "docker",
"version": "1.0",
"env": {
"MONGO_CONTAINER_NAME": {
"value": "mongo"
},
"MONGO_SERVER_PORT": {
"value": "27017"
},
"DB_NAME": {
"value": "LocalDB"
}
},
"status": "running",
"restartPolicy": "always"
}
},
And this is the configuration of another module that works perfectly:
"shovel": {
"settings": {
"image": "${MODULES.Shovel.debug}",
"createOptions": {
"HostConfig": {
"PortBindings": {
"9229/tcp": [
{
"HostPort": "9229"
}
]
}
}
}
},
"type": "docker",
"version": "1.0",
"env": {
"MONGO_CONTAINER_NAME": {
"value": "mongo"
},
"MONGO_SERVER_PORT": {
"value": "27017"
},
"DB_NAME": {
"value": "LocalDB"
}
},
"status": "running",
"restartPolicy": "always"
},
(it's basically the same)
The modules are currently launched either with the IoT Edge Simulator or on an IoT Edge device as single-device deployments.
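Since the modules talk to each other over the Docker network created by the IoT Edge runtime, another thing worth checking on the device is whether both containers are attached to the same network. The default network name azure-iot-edge is an assumption and can differ per setup:
# list the containers attached to the IoT Edge network
docker network inspect azure-iot-edge --format '{{range .Containers}}{{.Name}} {{end}}'
# confirm the 27017 host binding from the createOptions was applied
docker port mongo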

Azure Application Gateway DNS returning 307 to backend pool

I am trying to configure an Azure Application Gateway with a basic rule. For my frontend IP, I have set the DNS name to whatever.canadacentral.cloudapp.azure.com and uploaded a self-signed certificate. When I hit https:// everything works correctly; however, when I go to https://whatever.canadacentral.cloudapp.azure.com it returns a 307, redirecting me to my backend pool https://whatever.azurewebsites.net/.
Is this something to do with canadacentral.cloudapp.azure.com, and do I need to provide a custom DNS name?
Here's my template for Application Gateway:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"applicationGateways_ExampleDev_name": {
"defaultValue": "ExampleDev",
"type": "String"
},
"virtualNetworks_Ex_DEV_externalid": {
"defaultValue": "/subscriptions/xxx/resourceGroups/Example-Ex-DEV/providers/Microsoft.Network/virtualNetworks/Ex-DEV",
"type": "String"
},
"publicIPAddresses_ExampleDevIP_externalid": {
"defaultValue": "/subscriptions/xxx/resourceGroups/Example-Ex-DEV/providers/Microsoft.Network/publicIPAddresses/ExampleDevIP",
"type": "String"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.Network/applicationGateways",
"apiVersion": "2019-09-01",
"name": "[parameters('applicationGateways_ExampleDev_name')]",
"location": "canadacentral",
"properties": {
"sku": {
"name": "WAF_v2",
"tier": "WAF_v2"
},
"gatewayIPConfigurations": [
{
"name": "appGatewayIpConfig",
"properties": {
"subnet": {
"id": "[concat(parameters('virtualNetworks_Ex_DEV_externalid'), '/subnets/default')]"
}
}
}
],
"sslCertificates": [
{
"name": "ApplicationGateway",
"properties": {}
}
],
"trustedRootCertificates": [],
"frontendIPConfigurations": [
{
"name": "appGwPublicFrontendIp",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[parameters('publicIPAddresses_ExampleDevIP_externalid')]"
}
}
}
],
"frontendPorts": [
{
"name": "port_80",
"properties": {
"port": 80
}
},
{
"name": "port_443",
"properties": {
"port": 443
}
}
],
"backendAddressPools": [
{
"name": "ExampleApiDev",
"properties": {
"backendAddresses": [
{
"fqdn": "Exampleapi-dev.azurewebsites.net"
}
]
}
},
{
"name": "ExampleAuthDev",
"properties": {
"backendAddresses": [
{
"fqdn": "Exampleauth-dev.azurewebsites.net"
}
]
}
},
{
"name": "ExampleAppDev",
"properties": {
"backendAddresses": [
{
"fqdn": "Exampleapp-dev.azurewebsites.net"
}
]
}
}
],
"backendHttpSettingsCollection": [
{
"name": "default",
"properties": {
"port": 80,
"protocol": "Http",
"cookieBasedAffinity": "Disabled",
"pickHostNameFromBackendAddress": true,
"affinityCookieName": "ApplicationGatewayAffinity",
"requestTimeout": 20,
"probe": {
"id": "[concat(resourceId('Microsoft.Network/applicationGateways', parameters('applicationGateways_ExampleDev_name')), '/probes/defaultxxx')]"
}
}
}
],
"httpListeners": [
{
"name": "public-https",
"properties": {
"frontendIPConfiguration": {
"id": "[concat(resourceId('Microsoft.Network/applicationGateways', parameters('applicationGateways_ExampleDev_name')), '/frontendIPConfigurations/appGwPublicFrontendIp')]"
},
"frontendPort": {
"id": "[concat(resourceId('Microsoft.Network/applicationGateways', parameters('applicationGateways_ExampleDev_name')), '/frontendPorts/port_443')]"
},
"protocol": "Https",
"sslCertificate": {
"id": "[concat(resourceId('Microsoft.Network/applicationGateways', parameters('applicationGateways_ExampleDev_name')), '/sslCertificates/ApplicationGateway')]"
},
"hostNames": [],
"requireServerNameIndication": false
}
}
],
"urlPathMaps": [],
"requestRoutingRules": [
{
"name": "basic",
"properties": {
"ruleType": "Basic",
"httpListener": {
"id": "[concat(resourceId('Microsoft.Network/applicationGateways', parameters('applicationGateways_ExampleDev_name')), '/httpListeners/public-https')]"
},
"backendAddressPool": {
"id": "[concat(resourceId('Microsoft.Network/applicationGateways', parameters('applicationGateways_ExampleDev_name')), '/backendAddressPools/ExampleApiDev')]"
},
"backendHttpSettings": {
"id": "[concat(resourceId('Microsoft.Network/applicationGateways', parameters('applicationGateways_ExampleDev_name')), '/backendHttpSettingsCollection/default')]"
}
}
}
],
"probes": [
{
"name": "default07a3e3ac-3c07-40f6-ad80-837f4cdd1009",
"properties": {
"protocol": "Http",
"path": "/swagger/index.html",
"interval": 30,
"timeout": 30,
"unhealthyThreshold": 3,
"pickHostNameFromBackendHttpSettings": true,
"minServers": 0,
"match": {
"statusCodes": [
"200-399"
]
}
}
}
],
"rewriteRuleSets": [],
"redirectConfigurations": [],
"webApplicationFirewallConfiguration": {
"enabled": true,
"firewallMode": "Prevention",
"ruleSetType": "OWASP",
"ruleSetVersion": "3.0",
"disabledRuleGroups": [],
"exclusions": [],
"requestBodyCheck": true,
"maxRequestBodySizeInKb": 128,
"fileUploadLimitInMb": 50
},
"enableHttp2": false,
"autoscaleConfiguration": {
"minCapacity": 0,
"maxCapacity": 2
}
}
}
]
}
In this case, for Application Gateway v2, you have two solutions from this document.
Rewrite the location header
Set the host name in the location header to the application gateway's domain name. To do this, create a rewrite rule with a condition that evaluates whether the location header in the response contains azurewebsites.net, and an action that rewrites the location header to carry the application gateway's host name.
Use a custom domain name
For this, you must own a custom domain and add it as a custom domain on the App Service; see Map an existing custom DNS name to Azure App Service and follow the process described there.
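For the first option, here is a sketch of what the rewrite rule set could look like in the ARM template above. It is adapted from the documented App Service rewrite example; the names, rule sequence, and exact regex are illustrative and should be treated as assumptions:
"rewriteRuleSets": [
  {
    "name": "rewrite-location-header",
    "properties": {
      "rewriteRules": [
        {
          "ruleSequence": 100,
          "name": "rewrite-azurewebsites-location",
          "conditions": [
            {
              "variable": "http_resp_Location",
              "pattern": "(https?):\\/\\/.*azurewebsites\\.net(.*)$",
              "ignoreCase": true,
              "negate": false
            }
          ],
          "actionSet": {
            "responseHeaderConfigurations": [
              {
                "headerName": "Location",
                "headerValue": "{http_resp_Location_1}://whatever.canadacentral.cloudapp.azure.com{http_resp_Location_2}"
              }
            ]
          }
        }
      ]
    }
  }
]
The requestRoutingRule also needs a rewriteRuleSet reference pointing at this set, otherwise the rule is never applied.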

Log rotation on IoT Edge Agent

I am following the Microsoft documentation and trying to set log rotation for the edgeAgent through the container create options. My deployment.template.json file is as follows:
{
"$schema-template": "2.0.0",
"modulesContent": {
"$edgeAgent": {
"properties.desired": {
"schemaVersion": "1.0",
"runtime": {
"type": "docker",
"settings": {
"minDockerVersion": "v1.25",
"loggingOptions": "",
"registryCredentials": {
"myRegistryName": {
"username": "$CONTAINER_REGISTRY_USERNAME",
"password": "$CONTAINER_REGISTRY_PASSWORD",
"address": "myRegistryAddress.azurecr.io"
}
}
}
},
"systemModules": {
"edgeAgent": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"createOptions": {
"HostConfig": {
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "10m",
"max-file": "3"
}
}
}
}
}
},
"edgeHub": {
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"createOptions": {
"HostConfig": {
"PortBindings": {
"5671/tcp": [
{
"HostPort": "5671"
}
],
"8883/tcp": [
{
"HostPort": "8883"
}
],
"443/tcp": [
{
"HostPort": "443"
}
]
},
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "10m",
"max-file": "3"
}
}
}
}
}
}
},
"modules": {
"Module_Name": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "${MODULES.Module_Name}",
"createOptions": {
"HostConfig": {
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "10m",
"max-file": "3"
}
}
}
}
}
}
}
}
},
"$edgeHub": {
"properties.desired": {
"schemaVersion": "1.0",
"routes": {
"route": "FROM /messages/* INTO $upstream"
},
"storeAndForwardConfiguration": {
"timeToLiveSecs": 7200
}
}
}
}
}
When I build and deploy it on my device, the log rotation settings are applied to edgeHub and to my module, but not to edgeAgent. I check the log rotation settings in the
/var/lib/docker/containers/{container_id}/hostconfig.json file.
What I have done so far:
I removed the image with sudo docker rmi mcr.microsoft.com/azureiotedge-agent:1.0, removed all the containers including the edgeAgent container, and restarted the edge environment with sudo systemctl restart iotedge. The log rotation is still not applied to the new container created by the edge runtime. I am not sure what I am missing; any help is appreciated. Please note that I don't want to apply log rotation by creating a daemon.json file and placing it in the edge runtime folder. I need to do it through the container create options specified in the deployment.template.json file.
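As an aside, a quicker way to read the effective log options than opening hostconfig.json is docker inspect, assuming the Docker CLI is available on the device and that the runtime names the container edgeAgent as usual:
# show the log driver and options actually applied to the running edgeAgent container
docker inspect --format '{{json .HostConfig.LogConfig}}' edgeAgent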
This is a known bug where the edgeAgent deployment does not apply if the version number is identical to the one in config.yaml; please consider creating a GitHub issue for it. As a workaround, set the options in config.yaml.

Configure Ingress controller to forward custom http headers

We are setting up an AKS cluster on Azure, following this guide.
We are running five .NET Core APIs behind an ingress controller; everything works fine and requests are routed nicely.
However, our SPA frontend sends a custom HTTP header to our APIs, and this header never seems to make it to them: when we inspect the logging in AKS, we see the desired HTTP header is empty.
In development everything works fine and we see the header filled; in our test environment in AKS it is not, so I'm guessing the ingress is blocking these custom headers.
Is there any configuration required to make the ingress pass through custom HTTP headers?
EDIT:
{
"kind": "Ingress",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "myappp-ingress",
"namespace": "myapp",
"selfLink": "/apis/extensions/v1beta1/namespaces/myapp/ingresses/myapp-ingress",
"uid": "...",
"resourceVersion": "6395683",
"generation": 4,
"creationTimestamp": "2018-11-23T13:07:47Z",
"annotations": {
"kubernetes.io/ingress.class": "nginx",
"nginx.ingress.kubernetes.io/allow-headers": "My_Custom_Header", //this doesn't work
"nginx.ingress.kubernetes.io/proxy-body-size": "8m",
"nginx.ingress.kubernetes.io/rewrite-target": "/"
}
},
"spec": {
"tls": [
{
"hosts": [
"myapp.com"
],
"secretName": "..."
}
],
"rules": [
{
"host": "myapp.com",
"http": {
"paths": [
{
"path": "/api/tenantconfig",
"backend": {
"serviceName": "tenantconfig-api",
"servicePort": 80
}
},
{
"path": "/api/identity",
"backend": {
"serviceName": "identity-api",
"servicePort": 80
}
},
{
"path": "/api/media",
"backend": {
"serviceName": "media-api",
"servicePort": 80
}
},
{
"path": "/api/myapp",
"backend": {
"serviceName": "myapp-api",
"servicePort": 80
}
},
{
"path": "/app",
"backend": {
"serviceName": "client",
"servicePort": 80
}
}
]
}
}
]
},
"status": {
"loadBalancer": {
"ingress": [
{}
]
}
}
}
I ended up using the following configuration snippet:
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header My-Custom-Header $http_my_custom_header;
nginx makes all custom HTTP headers available as embedded variables via the $http_ prefix, see this.
If I want my ingress controller to pass a custom header to my backend service, I can use this annotation in my ingress rule:
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "Request-Id: $req_id";
By default, the ingress doesn't pass through headers with underscores.
You could set
enable-underscores-in-headers: true
See https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#enable-underscores-in-headers
Also, the ingress doesn't pass through the Authorization header.
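For the enable-underscores-in-headers route, the setting goes into the ingress-nginx ConfigMap rather than the Ingress resource. A minimal sketch follows; the ConfigMap name and namespace depend on how the controller was installed, so treat them as assumptions:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # allow header names such as My_Custom_Header to reach the backends
  enable-underscores-in-headers: "true"
Alternatively, renaming the header to use dashes (My-Custom-Header) avoids nginx's underscore handling altogether.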

SNAT Ports for Outgoing Connection from Azure (Stack)

I can't get a working outgoing connection from my load-balanced VMs created in Azure Stack. I have scenario 2 of the documentation: "Public Load Balancer associated with a VM (no Instance Level Public IP address on the instance)". Only port 80 works as an outgoing connection out of the box. I am behind an additional firewall and I wonder whether I have to open any other specific ports to allow communication with the internet. Three questions:
Is the problem that the port I try to access from inside the VMs is translated to a different SNAT port by the load balancer?
The documentation says something about the number of SNAT ports used, but it does not say which SNAT ports are used. Which ports do I have to open in the outer firewall?
Why does port 80 work out of the box? By default I can access the web from within the VMs, which means it is possible to reach the public internet, yet I did not add any rule for port 80 myself.
In the incoming connection section of the Azure security group I found ephemeral ports between 49152 and 65534 explicitly mentioned. Unfortunately, opening these ports for outgoing connections on our outer firewall didn't do the trick either. All VM-internal firewalls are open on all ports.
I created the cluster using the following template. The SKU should be the default "Standard", since I did not specify anything else for the load balancer.
{
"apiVersion": "[variables('lbApiVersion')]",
"type": "Microsoft.Network/loadBalancers",
"name": "[concat('LB','-', parameters('clusterName'),'-',variables('vmNodeType0Name'))]",
"location": "[variables('location')]",
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/',concat(variables('lbIPName'),'-','0'))]"
],
"properties": {
"frontendIPConfigurations": [
{
"name": "LoadBalancerIPConfig",
"properties": {
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses',concat(variables('lbIPName'),'-','0'))]"
}
}
}
],
"backendAddressPools": [
{
"name": "LoadBalancerBEAddressPool",
"properties": {}
}
],
"loadBalancingRules": [
{
"name": "LBRule",
"properties": {
"backendAddressPool": {
"id": "[variables('lbPoolID0')]"
},
"backendPort": "[variables('nt0fabricTcpGatewayPort')]",
"enableFloatingIP": "false",
"frontendIPConfiguration": {
"id": "[variables('lbIPConfig0')]"
},
"frontendPort": "[variables('nt0fabricTcpGatewayPort')]",
"idleTimeoutInMinutes": "5",
"probe": {
"id": "[variables('lbProbeID0')]"
},
"protocol": "tcp"
}
},
{
"name": "LBHttpRule",
"properties": {
"backendAddressPool": {
"id": "[variables('lbPoolID0')]"
},
"backendPort": "[variables('nt0fabricHttpGatewayPort')]",
"enableFloatingIP": "false",
"frontendIPConfiguration": {
"id": "[variables('lbIPConfig0')]"
},
"frontendPort": "[variables('nt0fabricHttpGatewayPort')]",
"idleTimeoutInMinutes": "5",
"probe": {
"id": "[variables('lbHttpProbeID0')]"
},
"protocol": "tcp"
}
},
{
"name": "AppPortLBRule1",
"properties": {
"backendAddressPool": {
"id": "[variables('lbPoolID0')]"
},
"backendPort": "[parameters('loadBalancedAppPort1')]",
"enableFloatingIP": "false",
"frontendIPConfiguration": {
"id": "[variables('lbIPConfig0')]"
},
"frontendPort": "[parameters('loadBalancedAppPort1')]",
"idleTimeoutInMinutes": "5",
"probe": {
"id": "[concat(variables('lbID0'),'/probes/AppPortProbe1')]"
},
"protocol": "tcp"
}
},
{
"name": "AppPortLBRule2",
"properties": {
"backendAddressPool": {
"id": "[variables('lbPoolID0')]"
},
"backendPort": "[parameters('loadBalancedAppPort2')]",
"enableFloatingIP": "false",
"frontendIPConfiguration": {
"id": "[variables('lbIPConfig0')]"
},
"frontendPort": "[parameters('loadBalancedAppPort2')]",
"idleTimeoutInMinutes": "5",
"probe": {
"id": "[concat(variables('lbID0'),'/probes/AppPortProbe2')]"
},
"protocol": "tcp"
}
}
],
"probes": [
{
"name": "FabricGatewayProbe",
"properties": {
"intervalInSeconds": 5,
"numberOfProbes": 2,
"port": "[variables('nt0fabricTcpGatewayPort')]",
"protocol": "tcp"
}
},
{
"name": "FabricHttpGatewayProbe",
"properties": {
"intervalInSeconds": 5,
"numberOfProbes": 2,
"port": "[variables('nt0fabricHttpGatewayPort')]",
"protocol": "tcp"
}
},
{
"name": "AppPortProbe1",
"properties": {
"intervalInSeconds": 5,
"numberOfProbes": 2,
"port": "[parameters('loadBalancedAppPort1')]",
"protocol": "tcp"
}
},
{
"name": "AppPortProbe2",
"properties": {
"intervalInSeconds": 5,
"numberOfProbes": 2,
"port": "[parameters('loadBalancedAppPort2')]",
"protocol": "tcp"
}
}
],
"inboundNatPools": [
{
"name": "LoadBalancerBEAddressNatPool",
"properties": {
"backendPort": "3389",
"frontendIPConfiguration": {
"id": "[variables('lbIPConfig0')]"
},
"frontendPortRangeEnd": "4500",
"frontendPortRangeStart": "3389",
"protocol": "tcp"
}
}
]
},
"tags": {
"resourceType": "Service Fabric",
"clusterName": "[parameters('clusterName')]"
}
},
To make it short: how do I get outgoing connections from Azure VMs working?
For your issue, I will tell you all I know; I hope it helps.
Is the problem that the port I try to access from inside the VMs is translated to a different SNAT port by the load balancer?
No. With load balancer NAT rules you can translate traffic from the Internet to a different port, or keep the same port, as you wish: a NAT rule lets you reach port A inside the VM from the Internet via port B, and A and B can be the same or different.
The documentation says something about the number of SNAT ports used, but it does not say which SNAT ports are used. Which ports do I have to open in the outer firewall?
In my tests you can even use port 1 in load balancer NAT rules, so I assume the document is describing how many ports can be used per IP configuration. I suggest reading the document again carefully.
Why does port 80 work out of the box? By default I can access the web from within the VMs, which means it is possible to reach the public internet, yet I did not add any rule for port 80 myself.
For this you should check a few things. First, whether you have a public IP associated with your VM besides the load balancer. Second, whether there are any other NAT rules; you can look in the Azure portal or use the CLI command az network lb inbound-nat-rule list.
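Two quick checks that go with the points above; the resource names are placeholders, and the what-is-my-ip call assumes the VM can reach that service at all:
# list any inbound NAT rules defined on the load balancer
az network lb inbound-nat-rule list --lb-name <lb-name> --resource-group <resource-group> --output table
# from inside a VM, see which public IP outbound traffic is being SNATed to
curl -s https://ifconfig.me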
