EKS ALB Ingress address is assigned, but loading the address does not work - dns

Hi, I've successfully set up an ALB controller and an Ingress to one of my containers.
But when I browse to my address, it gives me an error.
Is there something more that I need to do to connect to this? I was reading some AWS guides, and I think I should be able to reach this address without doing anything in Route53?
Below are the values for my Helm chart:
datahub-frontend:
  enabled: true
  image:
    repository: linkedin/datahub-frontend-react
    tag: "v0.8.31"
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: instance
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-southeast-2:601628467906:certificate/4a862e82-d098-4e27-9eb7-c8221df9e0cd
      alb.ingress.kubernetes.io/inbound-cidrs: 0.0.0.0/0
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
      alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    hosts:
      - host: datahub.hughnguyen.link
        redirectPaths:
          - path: /*
            name: ssl-redirect
            port: use-annotation
        paths:
          - /*
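Note that the custom host in the Ingress (datahub.hughnguyen.link) only resolves once a DNS record exists that points it at the ALB's generated DNS name. A minimal sketch of letting ExternalDNS create that record automatically, assuming ExternalDNS is deployed in the cluster with access to the Route53 zone (that deployment is not shown in the question), would be an extra annotation alongside the ones above:

ingress:
  annotations:
    # assumption: ExternalDNS runs in the cluster with permissions on the hughnguyen.link zone;
    # it will create datahub.hughnguyen.link pointing at the ALB's DNS name
    external-dns.alpha.kubernetes.io/hostname: datahub.hughnguyen.link

Alternatively, the record can be created by hand in Route53, pointing the host at the ADDRESS shown by kubectl get ingress.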

Related

Can't connect externally to Neo4j server in AKS cluster using Neo4j browser

I have an AKS cluster with a Node.js server connecting to a Neo4j standalone instance, all deployed with Helm.
I installed an ingress-nginx controller, referenced a default Let's Encrypt certificate, and enabled TCP ports with Terraform as follows:
resource "helm_release" "nginx" {
name = "ingress-nginx"
repository = "ingress-nginx"
# repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx/ingress-nginx"
namespace = "default"
set {
name = "tcp.7687"
value = "default/cluster:7687"
}
set {
name = "tcp.7474"
value = "default/cluster:7474"
}
set {
name = "tcp.7473"
value = "default/cluster:7473"
}
set {
name = "tcp.6362"
value = "default/cluster-admin:6362"
}
set {
name = "tcp.7687"
value = "default/cluster-admin:7687"
}
set {
name = "tcp.7474"
value = "default/cluster-admin:7474"
}
set {
name = "tcp.7473"
value = "default/cluster-admin:7473"
}
set {
name = "controller.extraArgs.default-ssl-certificate"
value = "default/tls-secret"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-dns-label-name"
value = "xxx.westeurope.cloudapp.azure.com"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
I then have an Ingress with paths pointing to the Neo4j services, so at https://xxx.westeurope.cloudapp.azure.com/neo4j-tcp-http/browser/ I can get to the Browser.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
namespace: default
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
# nginx.ingress.kubernetes.io/rewrite-target: /
# certmanager.k8s.io/acme-challenge-type: http01
nginx.ingress.kubernetes/cluster-issuer: letsencrypt-issuer
ingress.kubernetes.io/ssl-redirect: "true"
# kubernetes.io/tls-acme: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- xxxx.westeurope.cloudapp.azure.com
secretName: tls-secret
rules:
# - host: xxx.westeurope.cloud.app.azure.com #dns from Azure PublicIP
### Node.js server
- http:
paths:
- path: /(/|$)(.*)
pathType: Prefix
backend:
service:
name: server-clusterip-service
port:
number: 80
- http:
paths:
- path: /server(/|$)(.*)
pathType: Prefix
backend:
service:
name: server-clusterip-service
port:
number: 80
##### Neo4j
- http:
paths:
# 502 bad gateway
# /any character 502 bad gatway
- path: /neo4j-tcp-bolt(/|$)(.*)
pathType: Prefix
backend:
service:
# neo4j chart
# name: cluster
# neo4j-standalone chart
name: neo4j
port:
# name: tcp-bolt
number: 7687
- http:
paths:
# /browser/ show browser
#/any character shows login to xxx.westeurope.cloudapp.azure.com:443 from https, :80 from http
- path: /neo4j-tcp-http(/|$)(.*)
pathType: Prefix
backend:
service:
# neo4j chart
# name: cluster
# neo4j-standalone chart
name: neo4j
port:
# name: tcp-http
number: 7474
- http:
paths:
- path: /neo4j-tcp-https(/|$)(.*)
# 502 bad gateway
# /any character 502 bad gatway
pathType: Prefix
backend:
service:
# neo4j chart
# name: cluster
# neo4j-standalone chart
name: neo4j
port:
# name: tcp-https
number: 7473
I can get to the Neo4j Browser at https://xxx.westeurope.cloudapp.azure.com/neo4j-tcp-http/browser/, but using a bolt+s:// Connect URL it won't connect to the server, failing with the error ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver.
Now I'm guessing that is because the Neo4j Bolt connector is not using the certificate used by the ingress-nginx controller.
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe secret tls-secret
Name: tls-secret
Namespace: default
Labels: controller.cert-manager.io/fao=true
Annotations: cert-manager.io/alt-names: xxx.westeurope.cloudapp.azure.com
cert-manager.io/certificate-name: tls-certificate
cert-manager.io/common-name: xxx.westeurope.cloudapp.azure.com
cert-manager.io/ip-sans:
cert-manager.io/issuer-group:
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-issuer
cert-manager.io/uri-sans:
Type: kubernetes.io/tls
Data
====
tls.crt: 5648 bytes
tls.key: 1679 bytes
I tried to use it by overriding the chart values, but then the Neo4j driver from the Node.js server won't connect to the server:
ssl:
# setting per "connector" matching neo4j config
bolt:
privateKey:
secretName: tls-secret # we set up the template to grab `private.key` from this secret
subPath: tls.key # we specify the privateKey value name to get from the secret
publicCertificate:
secretName: tls-secret # we set up the template to grab `public.crt` from this secret
subPath: tls.crt # we specify the publicCertificate value name to get from the secret
trustedCerts:
sources: [ ] # a sources array for a projected volume - this allows someone to (relatively) easily mount multiple public certs from multiple secrets for example.
revokedCerts:
sources: [ ] # a sources array for a projected volume
https:
privateKey:
secretName: tls-secret
subPath: tls.key
publicCertificate:
secretName: tls-secret
subPath: tls.crt
trustedCerts:
sources: [ ]
revokedCerts:
sources: [ ]
Is there a way to use it, or should I set up another certificate just for Neo4j? If so, what dnsNames should I set on it?
Is there something else I'm doing wrong?
Thank you very much.
From what I can gather from your information, the problem seems to be that you're trying to expose the Bolt port behind an ingress. Ingresses are implemented as an L7 (protocol-aware) reverse proxy and manage load balancing etc. The Bolt protocol has its own load balancing and routing for cluster applications, so you will need to expose the network service directly for every instance of Neo4j you are running.
Check out this part of the documentation for more information:
https://neo4j.com/docs/operations-manual/current/kubernetes/accessing-neo4j/#access-outside-k8s
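For reference, a minimal sketch of what exposing Bolt directly (outside the L7 ingress) could look like — the service name is made up for illustration, and the selector is an assumption based on the chart values shown later in this thread:

apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt-external   # illustrative name
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: neo4j                # assumption: the app label the chart derives from neo4j.name
  ports:
    - name: tcp-bolt
      port: 7687
      targetPort: 7687

As the answer notes, a clustered setup needs one such exposure per Neo4j instance.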
Finally, after a few days of going in circles, I found what the problems were.
First, using a staging certificate will cause the Neo4j Bolt connection to fail, as it's not trusted, with the error:
ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver. Please use your browsers development console to determine the root cause of the failure. Common reasons include the database being unavailable, using the wrong connection URL or temporary network problems. If you have enabled encryption, ensure your browser is configured to trust the certificate Neo4j is configured to use. WebSocket readyState is: 3
found here: https://grishagin.com/neo4j/2022/03/29/neo4j-websocket-issue.html
Then, I had not assigned a general listen address to the Bolt connector, which by default listens only on 127.0.0.1:7687: https://neo4j.com/docs/operations-manual/current/configuration/connectors/
To listen for Bolt connections on all network interfaces (0.0.0.0)
so I added server.bolt.listen_address: "0.0.0.0:7687" to the Neo4j chart's values config.
Next, since I'm connecting the default neo4j ClusterIP service's TCP ports to the ingress controller's exposed TCP connections through the Ingress, as described here https://neo4j.com/labs/neo4j-helm/1.0.0/externalexposure/ (as an alternative to using a LoadBalancer), the Neo4j LoadBalancer service is not needed, so services:neo4j:enabled gets set to "false". In my tests I actually found that if you leave it enabled, Bolt won't connect despite everything else being set correctly.
Other missing Neo4j config was server.bolt.enabled: "true", server.bolt.tls_level: "REQUIRED", dbms.ssl.policy.bolt.client_auth: "NONE" and dbms.ssl.policy.bolt.enabled: "true"; the complete list of config options is here: https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/
The Neo4j chart's values for the ssl config were fine.
So now I can use the (renamed for brevity) path /neo4j/browser/ to serve the Neo4j Browser app, and use either the /bolt path as the Browser Connect URL, or the public IP's <DNS>:<bolt port>.
You are connected as user neo4j
to bolt+s://xxxx.westeurope.cloudapp.azure.com/bolt
Connection credentials are stored in your web browser.
Hope this explanation and the code recap below will help others.
Cheers.
ingress controller
resource "helm_release" "nginx" {
name = "ingress-nginx"
namespace = "default"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
set {
name = "version"
value = "4.4.2"
}
### expose tcp connections for neo4j service
### bolt url connection port
set {
name = "tcp.7687"
value = "default/neo4j:7687"
}
### http browser app port
set {
name = "tcp.7474"
value = "default/neo4j:7474"
}
set {
name = "controller.extraArgs.default-ssl-certificate"
value = "default/tls-secret"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-dns-label-name"
value = "xxx.westeurope.cloudapp.azure.com"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
namespace: default
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes/cluster-issuer: letsencrypt-issuer
spec:
ingressClassName: nginx
tls:
- hosts:
- xxx.westeurope.cloudapp.azure.com
secretName: tls-secret
rules:
### Node.js server
- http:
paths:
- path: /(/|$)(.*)
pathType: Prefix
backend:
service:
name: server-clusterip-service
port:
number: 80
- http:
paths:
- path: /server(/|$)(.*)
pathType: Prefix
backend:
service:
name: server-clusterip-service
port:
number: 80
##### Neo4j
- http:
paths:
- path: /bolt(/|$)(.*)
pathType: Prefix
backend:
service:
name: neo4j
port:
# name: tcp-bolt
number: 7687
- http:
paths:
- path: /neo4j(/|$)(.*)
pathType: Prefix
backend:
service:
name: neo4j
port:
# name: tcp-http
number: 7474
Values.yaml (Umbrella chart)
neo4j-db: #chart dependency alias
nameOverride: "neo4j"
fullnameOverride: 'neo4j'
neo4j:
# Name of your cluster
name: "xxxx" # this will be the label: app: value for the service selector
password: "xxxxx"
##
passwordFromSecret: ""
passwordFromSecretLookup: false
edition: "community"
acceptLicenseAgreement: "yes"
offlineMaintenanceModeEnabled: false
resources:
cpu: "1000m"
memory: "2Gi"
volumes:
data:
mode: 'volumeClaimTemplate'
volumeClaimTemplate:
accessModes:
- ReadWriteOnce
storageClassName: neo4j-sc-data
resources:
requests:
storage: 4Gi
backups:
mode: 'share' # share an existing volume (e.g. the data volume)
share:
name: 'logs'
logs:
mode: 'volumeClaimTemplate'
volumeClaimTemplate:
accessModes:
- ReadWriteOnce
storageClassName: neo4j-sc-logs
resources:
requests:
storage: 4Gi
services:
# A LoadBalancer Service for external Neo4j driver applications and Neo4j Browser, this will create "cluster-neo4j" svc
neo4j:
enabled: false
config:
server.bolt.enabled : "true"
server.bolt.tls_level: "REQUIRED"
server.bolt.listen_address: "0.0.0.0:7687"
dbms.ssl.policy.bolt.client_auth: "NONE"
dbms.ssl.policy.bolt.enabled: "true"
startupProbe:
failureThreshold: 1000
periodSeconds: 50
ssl:
bolt:
privateKey:
secretName: tls-secret
subPath: tls.key
publicCertificate:
secretName: tls-secret
subPath: tls.crt
trustedCerts:
sources: [ ]
revokedCerts:
sources: [ ] # a sources array for a projected volume

Socket.IO Vue app not able to connect to backend, getting status 400

I have a Vue.js application that uses Socket.IO and am able to run it locally, but not in a prod setup with S3 and a public socket server. The Vue.js dist/ build is on AWS S3, set up as a public static website. I have DNS and SSL provided by Cloudflare for the S3 bucket.
My Socket.IO server is running in a Kubernetes cluster created using kOps on AWS. I have a network load balancer in front of it, with ingress-nginx as the ingress controller. I have added a few annotations while debugging; those are at the bottom of the annotations section below.
Error message:
WebSocket connection to '<URL>' failed: WebSocket is closed before the connection is established.
Issue: I am trying to get my front end to connect to the Socket.IO server to send messages back and forth. However, I can't, due to the above error. I am trying to figure out what is causing it.
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
# add an annotation indicating the issuer to use.
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt"
# needed to allow the front end to talk to the back end
nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.domain.ca"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, PATCH, OPTIONS"
# needed for monitoring - maybe
prometheus.io/scrape: "true"
prometheus.io/port: "10254"
#for nginx ingress controller
ad.datadoghq.com/nginx-ingress-controller.check_names: '["nginx","nginx_ingress_controller"]'
ad.datadoghq.com/nginx-ingress-controller.init_configs: '[{},{}]'
ad.datadoghq.com/nginx-ingress-controller.instances: '[{"nginx_status_url": "http://%%host%%:18080/nginx_status"},{"prometheus_url": "http://%%host%%:10254/metrics"}]'
ad.datadoghq.com/nginx-ingress-controller.logs: '[{"service": "controller", "source":"nginx-ingress-controller"}]'
# Allow websockets to work
nginx.ingress.kubernetes.io/websocket-services: socketio
nginx.org/websocket-services: socketio
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
name: socketio-ingress
namespace: domain
spec:
rules:
- host: socket.domain.ca
http:
paths:
- backend:
serviceName: socketio
servicePort: 9000
path: /
tls:
- hosts:
- socket.domain.ca
secretName: socket-ingress-cert
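One thing worth checking that isn't shown in the question: if the socketio Deployment runs more than one replica, Socket.IO's HTTP long-polling handshake usually also needs sticky sessions on the nginx ingress. A minimal sketch of the extra annotations, assuming the same socketio-ingress resource (the cookie name is arbitrary):

metadata:
  annotations:
    # assumption: more than one socketio replica; pin each client to one pod
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "io-affinity"   # arbitrary cookie name
    nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"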
Socket.IO part of server.js
const server = http.createServer();
const io = require("socket.io")(server, {
cors: {
origin: config.CORS_SOCKET, // confirmed this is -- https://app.domain.ca -- via a console.log
},
adapter: require("socket.io-redis")({
pubClient: redisClient,
subClient: redisClient.duplicate(),
}),
});
vue.js main
const socket = io(process.env.VUE_APP_SOCKET_URL)
Vue.use(new VueSocketIO({
debug: true,
connection: socket,
vuex: {
store,
actionPrefix: "SOCKET_",
mutationPrefix: "SOCKET_"
}
})
);

How to redirect HTTP to HTTPS using GCP load balancer

I'm setting up my load balancer in GCP with 2 nodes (Apache httpd), with the domain lblb.tonegroup.net.
Currently my load balancer is working fine and traffic is distributed between the 2 nodes, but how do I configure it to redirect http://lblb.tonegroup.net to https://lblb.tonegroup.net?
Is it possible to configure this at the load balancer level, or do I need to configure it at the Apache level? I have a Google-managed SSL certificate installed, FYI.
Right now, redirection from HTTP to HTTPS is possible with the load balancer's traffic management.
Below is an example of how to set it up on their documentation:
https://cloud.google.com/load-balancing/docs/https/setting-up-traffic-management#console
Basically you will create two of each: forwarding rules, target proxies, and URL maps.
2 URL maps
In the 1st URL map you will just set a redirect. The redirect rules are defined below; no backend service needs to be defined here:
httpsRedirect: true
redirectResponseCode: FOUND
In the 2nd URL map you will have to define your backend services.
2 forwarding rules
The 1st forwarding rule serves HTTP requests, so basically port 80.
The 2nd forwarding rule serves HTTPS requests, so port 443.
2 target proxies
The 1st target proxy is a targetHttpProxy; this is where the 1st forwarding rule forwards to, and it is mapped to the 1st URL map.
The 2nd target proxy is a targetHttpsProxy; this is where the 2nd forwarding rule forwards to, and it is mapped to the 2nd URL map.
========================================================================
Below is a Cloud Deployment Manager example with Managed Certificates and Storage Buckets as the backend
storagebuckets-template.jinja
resources:
- name: {{ properties["bucketExample"] }}
type: storage.v1.bucket
properties:
storageClass: REGIONAL
location: asia-east2
cors:
- origin: ["*"]
method: [GET]
responseHeader: [Content-Type]
maxAgeSeconds: 3600
defaultObjectAcl:
- bucket: {{ properties["bucketExample"] }}
entity: allUsers
role: READER
website:
mainPageSuffix: index.html
backendbuckets-template.jinja
resources:
- name: {{ properties["bucketExample"] }}-backend
type: compute.beta.backendBucket
properties:
bucketName: $(ref.{{ properties["bucketExample"] }}.name)
enableCdn: true
ipaddresses-template.jinja
resources:
- name: lb-ipaddress
type: compute.v1.globalAddress
sslcertificates-template.jinja
resources:
- name: example
type: compute.v1.sslCertificate
properties:
type: MANAGED
managed:
domains:
- example1.com
- example2.com
- example3.com
loadbalancer-template.jinja
resources:
- name: centralized-lb-http
type: compute.v1.urlMap
properties:
defaultUrlRedirect:
httpsRedirect: true
redirectResponseCode: FOUND
- name: centralized-lb-https
type: compute.v1.urlMap
properties:
defaultService: {{ properties["bucketExample"] }}
pathMatchers:
- name: example
defaultService: {{ properties["bucketExample"] }}
pathRules:
- service: {{ properties["bucketExample"] }}
paths:
- /*
hostRules:
- hosts:
- example1.com
pathMatcher: example
- hosts:
- example2.com
pathMatcher: example
- hosts:
- example3.com
pathMatcher: example
httpproxies-template.jinja
resources:
- name: lb-http-proxy
type: compute.v1.targetHttpProxy
properties:
urlMap: $(ref.centralized-lb-http.selfLink)
- name: lb-https-proxy
type: compute.v1.targetHttpsProxy
properties:
urlMap: $(ref.centralized-lb-https.selfLink)
sslCertificates: [$(ref.example.selfLink)]
- name: lb-http-forwardingrule
type: compute.v1.globalForwardingRule
properties:
target: $(ref.lb-http-proxy.selfLink)
IPAddress: $(ref.lb-ipaddress.address)
IPProtocol: TCP
portRange: 80-80
- name: lb-https-forwardingrule
type: compute.v1.globalForwardingRule
properties:
target: $(ref.lb-https-proxy.selfLink)
IPAddress: $(ref.lb-ipaddress.address)
IPProtocol: TCP
portRange: 443-443
templates-bundle.yaml
imports:
- path: backendbuckets-template.jinja
- path: httpproxies-template.jinja
- path: ipaddresses-template.jinja
- path: loadbalancer-template.jinja
- path: storagebuckets-template.jinja
- path: sslcertificates-template.jinja
resources:
- name: storagebuckets
type: storagebuckets-template.jinja
properties:
bucketExample: example-sb
- name: backendbuckets
type: backendbuckets-template.jinja
properties:
bucketExample: example-sb
- name: loadbalancer
type: loadbalancer-template.jinja
properties:
bucketExample: $(ref.example-sb-backend.selfLink)
- name: ipaddresses
type: ipaddresses-template.jinja
- name: httpproxies
type: httpproxies-template.jinja
- name: sslcertificates
type: sslcertificates-template.jinja
$ gcloud deployment-manager deployments create infrastructure --config=templates-bundle.yaml > output
command output
NAME TYPE STATE ERRORS INTENT
centralized-lb-http compute.v1.urlMap COMPLETED []
centralized-lb-https compute.v1.urlMap COMPLETED []
example compute.v1.sslCertificate COMPLETED []
example-sb storage.v1.bucket COMPLETED []
example-sb-backend compute.beta.backendBucket COMPLETED []
lb-http-forwardingrule compute.v1.globalForwardingRule COMPLETED []
lb-http-proxy compute.v1.targetHttpProxy COMPLETED []
lb-https-forwardingrule compute.v1.globalForwardingRule COMPLETED []
lb-https-proxy compute.v1.targetHttpsProxy COMPLETED []
lb-ipaddress compute.v1.globalAddress COMPLETED []
It is not possible to do that directly on the GCP load balancer.
One possibility is to do the redirection on your backend service. The GCP load balancer adds an x-forwarded-proto header to requests, which is equal to http or https. You could add a condition based on this header to perform the redirect.
I believe the previous answer provided by Alexandre is correct; currently, it's not possible to redirect all HTTP traffic to HTTPS when using the HTTP(S) Load Balancer. I have found a feature request already submitted for this feature; you can access it and add your comment using this link.
You also mentioned you are using a Google-managed SSL certificate, but the only workaround I found is to do the redirect at the server level. In that scenario, you would have to use a self-managed SSL certificate.
To redirect HTTP URLs to HTTPS, do the following in Apache server:
<VirtualHost *:80>
ServerName www.example.com
Redirect "/" "https://www.example.com/"
</VirtualHost>
<VirtualHost *:443>
ServerName www.example.com
# ... SSL configuration goes here
</VirtualHost>
You would have to add this to an Apache server configuration file. Refer to the apache.org documentation on Simple Redirection for more details.
Maybe it's too late, but I had the same problem and here is my solution:
Configure two frontends on the GCP load balancer (HTTP and HTTPS).
Use port 80 (HTTP protocol) for communication to the backend service and the final VMs.
On the backend service, add the Google variable {tls_version} as an X-SSL-Protocol custom header.
On the final servers, perform the redirection based on the X-SSL-Protocol value:
if it is empty (no HTTPS), redirect (301); otherwise do nothing.
You can check the header value on your web server or from an intermediate load balancer VM instance. In my case, with HAProxy:
frontend fe_http
bind *:80
mode http
# check if the custom request header is empty (i.e. the request did not arrive over HTTPS)
acl is_http req.hdr(X-SSL-Protocol) -m len 0
#perform redirection only if no value found in custom header
redirect scheme https code 301 if is_http
#when redirect is performed, subsequent instructions are not reached
default_backend bk_http1
If you use Terraform (highly recommended for GCP configuration), here's a sample config. This code creates two IP addresses (v4 and v6), which you would use in your HTTPS forwarding rules as well.
// HTTP -> HTTPS redirector
resource "google_compute_url_map" "http-to-https" {
name = "my-http-to-https"
default_url_redirect {
https_redirect = true
strip_query = false
redirect_response_code = "PERMANENT_REDIRECT"
}
}
resource "google_compute_target_http_proxy" "proxy" {
name = "my-http-proxy"
url_map = google_compute_url_map.http-to-https.self_link
}
resource "google_compute_global_forwarding_rule" "http-v4" {
name = "my-fwrule-http-v4"
target = google_compute_target_http_proxy.proxy.self_link
ip_address = google_compute_global_address.IPv4.address
port_range = "80"
}
resource "google_compute_global_forwarding_rule" "http-v6" {
name = "my-fwrule-http-v6"
target = google_compute_target_http_proxy.proxy.self_link
ip_address = google_compute_global_address.IPv6.address
port_range = "80"
}
resource "google_compute_global_address" "IPv4" {
name = "my-ip-v4-address"
}
resource "google_compute_global_address" "IPv6" {
name = "my-ip-v6-address"
ip_version = "IPV6"
}
At a high level, to redirect HTTP traffic to HTTPS, you must do the following:
Create HTTPS LB1 (called here web-map-https).
Create HTTP LB2 (no backend) (called here web-map-http) with the same IP address used in LB1 and a redirect configured in the URL map.
Please check:
https://cloud.google.com/load-balancing/docs/https/setting-up-http-https-redirect
Perhaps I'm late to the game, but I use the following:
[ingress.yaml]:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: managed-cert-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: my-external-ip
networking.gke.io/managed-certificates: my-google-managed-certs
kubernetes.io/ingress.class: "gce"
networking.gke.io/v1beta1.FrontendConfig: redirect-frontend-config
spec:
defaultBackend:
service:
name: online-service
port:
number: 80
[redirect-frontend-config.yaml]
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: redirect-frontend-config
spec:
redirectToHttps:
enabled: true
I'm using the default "301 Moved Permanently", but if you'd like to use something else, just add a line under redirectToHttps containing the following, where the value is one of the options listed below (a short sketch follows the list):
responseCodeName: <CHOSEN REDIRECT RESPONSE CODE>
MOVED_PERMANENTLY_DEFAULT to return a 301 redirect response code (default).
FOUND to return a 302 redirect response code.
SEE_OTHER to return a 303 redirect response code.
TEMPORARY_REDIRECT to return a 307 redirect response code.
PERMANENT_REDIRECT to return a 308 redirect response code.
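For instance, a minimal sketch of the same FrontendConfig with an explicit response code (PERMANENT_REDIRECT here is just an example choice):

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: redirect-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: PERMANENT_REDIRECT   # returns a 308 instead of the default 301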
Further reading at
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress

Kubernetes node is not accessible on port 80 and 443

I deployed a bunch of services, and with all of them I have the same problem: the defined port (e.g. 80 or 443) is not accessible, but the automatically assigned node port is.
The following service definition is exported from the first service:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "traefik",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/traefik",
"uid": "70df3a55-422c-11e8-b7c0-b827eb28c626",
"resourceVersion": "1531399",
"creationTimestamp": "2018-04-17T10:45:27Z",
"labels": {
"app": "traefik",
"chart": "traefik-1.28.1",
"heritage": "Tiller",
"release": "traefik"
}
},
"spec": {
"ports": [
{
"name": "http",
"protocol": "TCP",
"port": 80,
"targetPort": "http",
"nodePort": 31822
},
{
"name": "https",
"protocol": "TCP",
"port": 443,
"targetPort": "httpn",
"nodePort": 32638
}
],
"selector": {
"app": "traefik",
"release": "traefik"
},
"clusterIP": "10.109.80.108",
"type": "LoadBalancer",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {}
}
}
Any idea how I can reach this service with http://node-ip-addr:80 and the other service with http://node-ip-addr:443?
The ports that you defined for your services (in this case 443 and 80) are only reachable from within the cluster. You can try to call your service from another pod (one running BusyBox, for example) with curl http://traefik.kube-system.svc.cluster.local or via the service's cluster IP; a sketch of such a test pod follows.
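A minimal sketch of that test pod, assuming the stock BusyBox image (which ships wget rather than curl):

apiVersion: v1
kind: Pod
metadata:
  name: debug-busybox          # throwaway pod, name is illustrative
  namespace: kube-system
spec:
  restartPolicy: Never
  containers:
    - name: busybox
      image: busybox:1.36
      # hit the service's in-cluster DNS name on the service port
      command: ["wget", "-qO-", "http://traefik.kube-system.svc.cluster.local:80"]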
If you want to access your services from outside the cluster (which is your use case), you need to expose your service as one of the following:
NodePort
LoadBalancer
ExternalName
You chose NodePort, which means that every node of the cluster listens for requests on a specific port (in your case 31822 for HTTP and 32638 for HTTPS), which are then delegated to your service. This is why http://node-ip-addr:31822 should work for your provided service config.
To adapt your configuration to your requirements, you would have to set "nodePort": 80, which in turn reserves port 80 on every cluster node to delegate to your service (and requires the API server's node port range to include 80). This is generally not the best idea. You would rather keep the port as currently defined and add a proxy server or a load balancer in front of your cluster, which would then listen on port 80 and forward to one of the nodes on port 31822 for your service.
For more information on publishing services, please refer to the Kubernetes docs.
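As a middle ground, a minimal sketch of pinning the node port to a fixed value inside the default range (30080 is just an example), so a front proxy has a stable target:

apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    app: traefik
    release: traefik
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 30080   # pinned, but still inside the default 30000-32767 range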
Check the following working example.
Note:
The container listens on port 4000, which is specified as containerPort in the Deployment
The Service maps the container port 4000 (targetPort) to port 80
The Ingress is now pointing to servicePort 80
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: testui-deploy
spec:
replicas: 1
revisionHistoryLimit: 1
selector:
matchLabels:
app: testui
template:
metadata:
labels:
app: testui
spec:
containers:
- name: testui
image: gcr.io/test2018/testui:latest
ports:
- containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
name: testui-svc
labels:
app: testui-svc
spec:
type: NodePort
selector:
app: testui
ports:
- protocol: TCP
port: 80
targetPort: 4000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ing
annotations:
kubernetes.io/ingress.global-static-ip-name: test-ip
spec:
backend:
serviceName: testui-svc
servicePort: 80

How to start a Kubernetes service on a NodePort outside the default service-node-port-range?

I've been trying to start kubernetes-dashboard (and eventually other services) on a NodePort outside the default port range, with little success.
Here is my setup:
Cloud provider: Azure (Not azure container service)
OS: CentOS 7
here is what I have tried:
Update the host
$ yum update
Install kubeadm
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
$ setenforce 0
$ yum install -y docker kubelet kubeadm kubectl kubernetes-cni
$ systemctl enable docker && systemctl start docker
$ systemctl enable kubelet && systemctl start kubelet
Start the cluster with kubeadm
$ kubeadm init
Allow running containers on the master node, because we have a single-node cluster
$ kubectl taint nodes --all dedicated-
Install a pod network
$ kubectl apply -f https://git.io/weave-kube
Our kubernetes-dashboard Deployment:
# ~/kubernetes-dashboard.yaml
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: kubernetes-dashboard
template:
metadata:
labels:
app: kubernetes-dashboard
# Comment the following annotation if Dashboard must not be deployed on master
annotations:
scheduler.alpha.kubernetes.io/tolerations: |
[
{
"key": "dedicated",
"operator": "Equal",
"value": "master",
"effect": "NoSchedule"
}
]
spec:
containers:
- name: kubernetes-dashboard
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
imagePullPolicy: Always
ports:
- containerPort: 9090
protocol: TCP
args:
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
labels:
app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
type: NodePort
ports:
- port: 8880
targetPort: 9090
nodePort: 8880
selector:
app: kubernetes-dashboard
Create our Deployment
$ kubectl create -f ~/kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
The Service "kubernetes-dashboard" is invalid: spec.ports[0].nodePort: Invalid value: 8880: provided port is not in the valid range. The range of valid ports is 30000-32767
I found out that to change the range of valid ports I could set the service-node-port-range option on kube-apiserver to allow a different port range,
so I tried this:
$ kubectl get po --namespace=kube-system
NAME READY STATUS RESTARTS AGE
dummy-2088944543-lr2zb 1/1 Running 0 31m
etcd-test2-highr 1/1 Running 0 31m
kube-apiserver-test2-highr 1/1 Running 0 31m
kube-controller-manager-test2-highr 1/1 Running 2 31m
kube-discovery-1769846148-wmbhb 1/1 Running 0 31m
kube-dns-2924299975-8vwjm 4/4 Running 0 31m
kube-proxy-0ls9c 1/1 Running 0 31m
kube-scheduler-test2-highr 1/1 Running 2 31m
kubernetes-dashboard-3203831700-qrvdn 1/1 Running 0 22s
weave-net-m9rxh 2/2 Running 0 31m
Add "--service-node-port-range=8880-8880" to kube-apiserver-test2-highr
$ kubectl edit po kube-apiserver-test2-highr --namespace=kube-system
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-apiserver",
"namespace": "kube-system",
"creationTimestamp": null,
"labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
},
"spec": {
"volumes": [
{
"name": "k8s",
"hostPath": {
"path": "/etc/kubernetes"
}
},
{
"name": "certs",
"hostPath": {
"path": "/etc/ssl/certs"
}
},
{
"name": "pki",
"hostPath": {
"path": "/etc/pki"
}
}
],
"containers": [
{
"name": "kube-apiserver",
"image": "gcr.io/google_containers/kube-apiserver-amd64:v1.5.3",
"command": [
"kube-apiserver",
"--insecure-bind-address=127.0.0.1",
"--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
"--service-cluster-ip-range=10.96.0.0/12",
"--service-node-port-range=8880-8880",
"--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--client-ca-file=/etc/kubernetes/pki/ca.pem",
"--tls-cert-file=/etc/kubernetes/pki/apiserver.pem",
"--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--token-auth-file=/etc/kubernetes/pki/tokens.csv",
"--secure-port=6443",
"--allow-privileged",
"--advertise-address=100.112.226.5",
"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
"--anonymous-auth=false",
"--etcd-servers=http://127.0.0.1:2379"
],
"resources": {
"requests": {
"cpu": "250m"
}
},
"volumeMounts": [
{
"name": "k8s",
"readOnly": true,
"mountPath": "/etc/kubernetes/"
},
{
"name": "certs",
"mountPath": "/etc/ssl/certs"
},
{
"name": "pki",
"mountPath": "/etc/pki"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthz",
"port": 8080,
"host": "127.0.0.1"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15,
"failureThreshold": 8
}
}
],
"hostNetwork": true
},
"status": {}
$ :wq
The following is the truncated response
# pods "kube-apiserver-test2-highr" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
So I tried a different approach: I edited the manifest file for kube-apiserver with the same change described above
and ran the following:
$ kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.json --namespace=kube-system
And got this response:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
So now I'm stuck. How can I change the range of valid ports?
You are specifying --service-node-port-range=8880-8880 incorrectly: you set it to a single port only; set it to a range.
Second problem: you are setting the service to use 9090, and that's not in the range.
ports:
- port: 80
targetPort: 9090
nodePort: 9090
The API server should have a deployment too. Try editing the port range in that deployment itself and delete the API server pod so it gets recreated with the new config.
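For illustration, a minimal sketch of where that flag lives in a kube-apiserver static pod manifest (newer kubeadm clusters keep a YAML manifest under /etc/kubernetes/manifests/; this cluster uses a .json file, but the flag is the same, and the range below is only an example):

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-apiserver
      image: gcr.io/google_containers/kube-apiserver-amd64:v1.5.3
      command:
        - kube-apiserver
        # ...keep the existing flags from the manifest...
        - --service-node-port-range=8000-9000   # example range wide enough to include the node ports you want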
The Service node ports range is set to infrequently-used ports for a reason. Why do you want to publish this on every node? Do you really want that?
An alternative is to expose it on a semi-random nodeport, then use a proxy pod on a known node or set of nodes to access it via hostport.
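A minimal sketch of that proxy-pod alternative, assuming the dashboard Service is reachable in-cluster on port 8880 as in the question's manifest (the pod name, node hostname, and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: dashboard-hostport-proxy        # illustrative name
  namespace: kube-system
spec:
  nodeSelector:
    kubernetes.io/hostname: test2-highr # assumption: pin to a known node
  containers:
    - name: proxy
      image: alpine/socat               # assumption: any image whose entrypoint is socat works
      args:
        - TCP-LISTEN:8880,fork,reuseaddr
        - TCP:kubernetes-dashboard.kube-system.svc.cluster.local:8880
      ports:
        - containerPort: 8880
          hostPort: 8880                # then reachable as http://<node-ip>:8880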
This issue:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
was caused by my port range excluding 8080, which kube-apiserver was serving on, so kubectl could not send any updates to the API server.
I fixed it by changing the port range to 8080-8881 and restarting the kubelet service like so:
$ service kubelet restart
Everything works as expected now.
