We are setting up an AKS cluster on Azure, following this guide.
We are running five .NET Core APIs behind an ingress controller; everything works fine and requests are routed correctly.
However, our SPA frontend sends a custom HTTP header to our APIs, and this header never seems to reach them: when we inspect the logging in AKS, the header is empty.
In development everything works fine, and we also see the header populated in our test environment in AKS, so I'm guessing ingress is blocking these custom headers.
Is there any configuration required to make ingress pass custom HTTP headers through?
EDIT:
{
  "kind": "Ingress",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "myapp-ingress",
    "namespace": "myapp",
    "selfLink": "/apis/extensions/v1beta1/namespaces/myapp/ingresses/myapp-ingress",
    "uid": "...",
    "resourceVersion": "6395683",
    "generation": 4,
    "creationTimestamp": "2018-11-23T13:07:47Z",
    "annotations": {
      "kubernetes.io/ingress.class": "nginx",
      "nginx.ingress.kubernetes.io/allow-headers": "My_Custom_Header", // this doesn't work
      "nginx.ingress.kubernetes.io/proxy-body-size": "8m",
      "nginx.ingress.kubernetes.io/rewrite-target": "/"
    }
  },
  "spec": {
    "tls": [
      {
        "hosts": [
          "myapp.com"
        ],
        "secretName": "..."
      }
    ],
    "rules": [
      {
        "host": "myapp.com",
        "http": {
          "paths": [
            {
              "path": "/api/tenantconfig",
              "backend": {
                "serviceName": "tenantconfig-api",
                "servicePort": 80
              }
            },
            {
              "path": "/api/identity",
              "backend": {
                "serviceName": "identity-api",
                "servicePort": 80
              }
            },
            {
              "path": "/api/media",
              "backend": {
                "serviceName": "media-api",
                "servicePort": 80
              }
            },
            {
              "path": "/api/myapp",
              "backend": {
                "serviceName": "myapp-api",
                "servicePort": 80
              }
            },
            {
              "path": "/app",
              "backend": {
                "serviceName": "client",
                "servicePort": 80
              }
            }
          ]
        }
      }
    ]
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {}
      ]
    }
  }
}
I ended up using the following configuration snippet:
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header My-Custom-Header $http_my_custom_header;
nginx makes all custom HTTP headers available as embedded variables via the $http_ prefix; see the nginx documentation on embedded variables.
If I want my ingress controller to pass a custom header to my backend service, I can use this annotation in my ingress rule:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Request-Id: $req_id";
By default, nginx (and therefore the ingress controller) drops request headers that contain underscores, so they never reach your backend.
You can enable them with the ConfigMap setting
enable-underscores-in-headers: "true"
See https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#enable-underscores-in-headers
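Note that this is a setting on the controller's global ConfigMap, not an Ingress annotation. A minimal sketch, assuming the controller was installed with the common defaults (the name nginx-configuration and the namespace ingress-nginx are assumptions; they must match whatever --configmap the controller was started with):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Assumed names; check your controller's --configmap argument.
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Stop nginx from silently dropping headers like My_Custom_Header.
  enable-underscores-in-headers: "true"
```

After applying it, the controller picks up the change and reloads nginx; alternatively, renaming the header to use dashes (My-Custom-Header) avoids the issue entirely.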
Also, the ingress doesn't pass through the Authorization header.
The project has two authorization systems: Basic auth and Bearer. After clicking the "Try it out" and "Execute" buttons, each request needs to carry both an Authorization header with a Basic credential and a header with a Bearer JWT. The problem is that I can attach these headers individually, but not together. It feels like both security schemes want to write to the Authorization header and one overwrites the other, even though I explicitly specified the header names in the schema.
My schemas:
{
  "securitySchemes": {
    "Bearer": {
      "in": "header",
      "name": "jwt",
      "type": "http",
      "scheme": "bearer"
    },
    "basicAuth": {
      "type": "http",
      "scheme": "basic"
    }
  }
}
and how I use it:
{
  "/channel/base-list": {
    "get": {
      "tags": [
        "CMS Channel"
      ],
      "security": [
        {
          "Bearer": [],
          "basicAuth": []
        }
      ],
      "summary": "Get _id and title of all channels",
      "produces": [
        "application/json"
      ],
      "parameters": [
        {
          "in": "query",
          "name": "count",
          "required": false,
          "schema": {
            "type": "integer"
          },
          "default": 25,
          "example": 10
        },
        {
          "in": "query",
          "name": "search",
          "required": false,
          "schema": {
            "type": "string"
          },
          "description": "Channel name"
        }
      ],
      "responses": {
        "200": {
          "description": "A list of channels",
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/definitions/get-channel-base-list"
              }
            }
          }
        }
      }
    }
  }
}
I use swagger-ui-express for Node.js and OpenAPI 3.0.
A request can contain only one Authorization header, and the Authorization header can only contain a single set of credentials (i.e. either Basic or Bearer, but not both). Your use case is not supported by the HTTP protocol.
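Note also that for a scheme of "type": "http" with "scheme": "bearer", the "in" and "name" fields are ignored; OpenAPI always sends that token in the Authorization header, which is why the two schemes collide. If you can change how the API reads the JWT, one workaround (a sketch, not from the question; the header name X-JWT is hypothetical) is to declare the token as an apiKey scheme in a custom header, so only Basic auth uses Authorization:

```json
{
  "securitySchemes": {
    "basicAuth": {
      "type": "http",
      "scheme": "basic"
    },
    "jwtHeader": {
      "type": "apiKey",
      "in": "header",
      "name": "X-JWT"
    }
  }
}
```

With both schemes listed in the same security requirement object, Swagger UI will then send Authorization: Basic ... and X-JWT: ... side by side.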
In Manifest V3, Chrome's team introduced declarativeNetRequest. We can't seem to make sure those rules are applied only to sites of a certain domain:
[
  {
    "id": 1,
    "priority": 1,
    "action": {
      "type": "block"
    },
    "condition": {
      "urlFilter": "main.js",
      "resourceTypes": ["script"]
    }
  }
]
With the rules we defined, they fire on every web page you visit. Can we restrict a rule by the host of the page it runs on, rather than by the destination of the script? We couldn't find any indication of this in the docs or the examples.
Needless to say, any off-docs improvisation in the manifest.json failed to leave a mark. For instance:
{
  ...
  "declarative_net_request": {
    "matches": ["https://SUB_DOMAIN_NAME.domain.com/*"], <====== this
    "rule_resources": [
      {
        "id": "ruleset_1",
        "enabled": true,
        "path": "rules.json"
      }
    ]
  }
}
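For what it's worth, since Chrome 101 a rule's condition supports an initiatorDomains key (earlier versions used the now-deprecated domains key), which matches against the domain of the page that initiated the request rather than the request URL. A sketch of rules.json under that assumption (sub.domain.com is a placeholder):

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": {
      "type": "block"
    },
    "condition": {
      "urlFilter": "main.js",
      "resourceTypes": ["script"],
      "initiatorDomains": ["sub.domain.com"]
    }
  }
]
```

This goes in the ruleset file itself; the manifest's declarative_net_request block only lists rule_resources and has no matches key.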
I am working on an Azure IoT Edge module: a custom module that receives messages from another module within the Edge runtime.
My requirement is to send data to an external gateway (for example ThingsBoard) over HTTP POST. In the Dockerfile I expose port 80 and bind it to port 80 on the local machine.
The problem is that whenever I try to send a message from the custom module to the external gateway (ThingsBoard) using HTTP POST, I get an HTTP 500 error.
If I run the container directly (not under the Edge runtime), it works fine.
The code in the container needs to send messages to the outside world (an external device or gateway) over the internet. I am sending messages to ThingsBoard (http://demo.thingsboard.io/api/v1/camera/telemetry).
Edge deployment manifest:
{
"modulesContent": {
"$edgeAgent": {
"properties.desired": {
"schemaVersion": "1.0",
"runtime": {
"type": "docker",
"settings": {
"minDockerVersion": "v1.25",
"loggingOptions": "",
"registryCredentials": {
"cameraedgedev": {
"username": "username",
"password": "password",
"address": "cameraedgedev.azurecr.io"
}
}
}
},
"systemModules": {
"edgeAgent": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"createOptions": "{}"
}
},
"edgeHub": {
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}],\"8082/tcp\":[{\"HostPort\":\"8082\"}]}}}"
}
}
},
"modules": {
"lvaEdge": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/media/live-video-analytics:2",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"8082/tcp\":[{\"HostPort\":\"8082\"}]},\"LogConfig\":{\"Type\":\"\",\"Config\":{\"max-size\":\"10m\",\"max-file\":\"10\"}},\"Binds\":[\"/var/media:/var/media/\",\"/var/lib/azuremediaservices:/var/lib/azuremediaservices\"]}}"
}
},
"rtspsim": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/lva-utilities/rtspsim-live555:1.2",
"createOptions": "{\"HostConfig\":{\"Binds\":[\"/home/cameraedgevm_user/samples/input:/live/mediaServer/media\"]}}"
}
},
"lvaSupport": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "cameraedgedev.azurecr.io/lvasupport:1.2",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"80/tcp\":[{\"HostPort\":\"80\"}]}}}"
}
},
"ai_module": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"env": {
"PYTHONUNBUFFERED": {
"value": "1"
},
"NVIDIA_VISIBLE_DEVICES": {
"value": "all"
}
},
"settings": {
"image": "cameraedgedev.azurecr.io/ai_module:1.2",
"createOptions": "{\"HostConfig\":{\"Runtime\":\"nvidia\",\"PortBindings\":{\"8082/tcp\":[{\"HostPort\":\"8082\"}]},\"Binds\":[\"/home/cameraedgevm_user/ai_module/data:/ai_module/data/\"]}}"
}
}
}
}
},
"$edgeHub": {
"properties.desired": {
"schemaVersion": "1.0",
"routes": {
"LVAToHub": "FROM /messages/modules/lvaEdge/outputs/* INTO $upstream",
"LVATolvaSupportModule": "FROM /messages/modules/lvaEdge/outputs/* INTO BrokeredEndpoint(\"/modules/lvaSupport/inputs/input1\")"
},
"storeAndForwardConfiguration": {
"timeToLiveSecs": 7200
}
}
},
"lvaEdge": {
"properties.desired": {
some properties
}
},
"lvaSupport": {
"properties.desired": {
"Gateway": "http://demo.thingsboard.io/api/v1/camera/telemetry",
"gatewayMethod": "POST"
}
}
}
}
How should I resolve this problem? How can I send messages to an external gateway?
Thanks.
By default, modules in Azure IoT Edge can access the internet, unless that configuration was changed or a firewall prevents you from reaching thingsboard.io. To verify that your module can reach that URL, you can try the following:
docker run -i --rm --network="azure-iot-edge" alpine ping -c 4 thingsboard.io
This runs a container on the same Docker network as your modules and checks whether the hostname resolves and responds. (Binding port 80 is not needed for this test, and would conflict with your module, which already holds host port 80.)
You mentioned HTTP status code 500, which indicates that the server (ThingsBoard) had a problem with your request. Since this works when you run the container manually, the payload may differ when the module runs under the Azure IoT Edge runtime. An easy way to find out is to log exactly what you send to ThingsBoard in each scenario.
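A minimal sketch of that logging idea in Python (the field names and token segment of the URL are illustrative; ThingsBoard's device HTTP API expects a flat JSON object POSTed to /api/v1/&lt;token&gt;/telemetry):

```python
import json
import logging
import urllib.request

logging.basicConfig(level=logging.INFO)

def build_telemetry(values: dict) -> bytes:
    # ThingsBoard's HTTP device API expects a flat JSON object.
    return json.dumps(values).encode("utf-8")

def post_telemetry(url: str, payload: bytes) -> int:
    # Log exactly what goes over the wire, so the in-runtime and
    # standalone runs can be compared byte for byte.
    logging.info("POST %s body=%r", url, payload)
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

# Example call (values are placeholders):
# post_telemetry("http://demo.thingsboard.io/api/v1/camera/telemetry",
#                build_telemetry({"temperature": 21.5}))
```

Running this both inside the Edge runtime and standalone, then diffing the two logged bodies, should show whether the 500 is caused by a payload difference.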
Note: in your question you said
In docker file, expose 80 port and bind with local machine 80 port
This is only needed if you want your module to accept incoming requests on port 80. For outgoing requests (such as messages to ThingsBoard) it is not needed.
I'm running a Node.js application in Kubernetes, using an ingress controller for SSL on port 443. It works, but it also forces SSL on port 80, which I don't want.
I've tried adding nginx.ingress.kubernetes.io/ssl-redirect: "false" to my ingress, but it still forces SSL on port 80. This is what the ingress config looks like:
{
  "kind": "Ingress",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "http-rest-api-ingress",
    "namespace": "default",
    "selfLink": "/apis/extensions/v1beta1/namespaces/default/ingresses/http-rest-api-ingress",
    "uid": "f89ad4cd-9d9d-11e9-9d4d-1aa06e634f15",
    "resourceVersion": "1990981",
    "generation": 2,
    "creationTimestamp": "2019-07-03T14:22:21Z",
    "annotations": {
      "nginx.ingress.kubernetes.io/rewrite-target": "/$1",
      "nginx.ingress.kubernetes.io/ssl-redirect": "false"
    }
  },
  "spec": {
    "rules": [
      {
        "host": "x.x.com",
        "http": {
          "paths": [
            {
              "path": "/(.*)",
              "backend": {
                "serviceName": "http-rest-api",
                "servicePort": 4500
              }
            }
          ]
        }
      }
    ]
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {}
      ]
    }
  }
}
We configured a Functions Proxy for our website approximately two months ago and everything worked as expected. Last night, around 8:00-8:30pm EST, the proxy stopped working: we receive "Internal server error" 500 responses when accessing its endpoints. We haven't changed anything on our end, so I don't know why this suddenly started.
We proxy our domain to various endpoints. The endpoints that stopped working are proxies to pages we host on GitHub Pages. Proxies to other services, such as other Azure App Service instances, are still working.
I sent a request with Proxy-Trace-Enabled: true to the proxy and found the following error in the trace log:
"backend": [
{
"source": "forward-request",
"timestamp": "2018-01-31T01:45:36.4810022Z",
"elapsed": "00:00:00.0037370",
"data": {
"message": "Request is being forwarded to the backend service.",
"request": {
"method": "GET",
"url": "https://xxxxxxxxxx.github.io/xxxxxxxxxx/",
"headers": [
{
"name": "Cache-Control",
"value": "no-cache"
},
{
"name": "Accept",
"value": "*/*"
},
{
"name": "Accept-Encoding",
"value": "gzip"
},
{
"name": "Cookie",
"value": "__cfduid=xxxxxxxxxx"
},
{
"name": "Max-Forwards",
"value": "10"
},
{
"name": "User-Agent",
"value": "PostmanRuntime/7.1.1"
},
{
"name": "CF-IPCountry",
"value": "US"
},
{
"name": "X-Forwarded-For",
"value": "xxxxxxxxxx, xxxxxxxxxx, xxxxxxxxxx"
},
{
"name": "CF-RAY",
"value": "xxxxxxxxxx-MIA"
},
{
"name": "X-Forwarded-Proto",
"value": "https"
},
{
"name": "CF-Visitor",
"value": "{\"scheme\":\"https\"}"
},
{
"name": "Postman-Token",
"value": "xxxxxxxxxx"
},
{
"name": "CF-Connecting-IP",
"value": "xxxxxxxxxx"
},
{
"name": "X-WAWS-Unencoded-URL",
"value": "/"
},
{
"name": "X-Original-URL",
"value": "/"
},
{
"name": "X-ARR-LOG-ID",
"value": "xxxxxxxxxx"
},
{
"name": "DISGUISED-HOST",
"value": "xxxxxxxxxx.com"
},
{
"name": "X-SITE-DEPLOYMENT-ID",
"value": "xxxxxxxxxx"
},
{
"name": "WAS-DEFAULT-HOSTNAME",
"value": "xxxxxxxxxx.azurewebsites.net"
},
{
"name": "Content-Length",
"value": "0"
}
]
}
}
},
{
"source": "forward-request",
"timestamp": "2018-01-31T01:45:36.5122512Z",
"elapsed": "00:00:00.0363283",
"data": {
"messages": [
"Error occured while calling backend service.",
"The request was aborted: Could not create SSL/TLS secure channel."
]
}
}
],
I am not sure why there are "The request was aborted: Could not create SSL/TLS secure channel." errors, since I can access the GitHub Pages version of the proxied website without issue (no SSL problems). For now we've had to disable the proxy and point our DNS directly at the GitHub page until we can resolve this.
What's going on here?
Why did this break all of a sudden with no changes on our end?
It looks like GitHub suddenly switched to TLS 1.2-only:
$ curl --tlsv1.0 -vki https://microsoft.github.io
...
* gnutls_handshake() failed: Error in protocol version
$ curl --tlsv1.1 -vki https://microsoft.github.io
...
* gnutls_handshake() failed: Error in protocol version
$ curl --tlsv1.2 -vki https://microsoft.github.io
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
...
* subject: C=US,ST=California,L=San Francisco,O=GitHub, Inc.,
CN=www.github.com
* issuer: C=US,O=DigiCert Inc,OU=www.digicert.com,
CN=DigiCert SHA2 High Assurance Server CA
...
HTTP/1.1 200 OK
I don't know whether you can tell Functions Proxies to use a particular TLS version for outbound connections, i.e. the equivalent of:
System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls12;
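For comparison, a client that must speak to a TLS 1.2-only host like GitHub Pages can pin the protocol floor the same way in other stacks; a sketch in Python (not related to Functions Proxies themselves, just illustrating the .NET one-liner's effect):

```python
import ssl

def make_tls12_context() -> ssl.SSLContext:
    # Refuse anything below TLS 1.2, mirroring what setting
    # ServicePointManager.SecurityProtocol = Tls12 does in .NET.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

A context like this, passed to an HTTPS client, would succeed against microsoft.github.io exactly as the --tlsv1.2 curl run above does, while the 1.0/1.1 handshakes fail.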