I've managed to deploy a .NET Core API to Azure Kubernetes Service (AKS) and it's working as expected. The image is hosted in an Azure Container Registry.
I'm now trying to make the service accessible via HTTPS, and I'd like a very simple setup.
Firstly, do I have to create an OpenSSL cert or register with Let's Encrypt? I'd ideally like to avoid managing SSL certs separately, but the documentation doesn't make clear whether this is required.
Secondly, I've got a manifest file below. I can still access port 80 using this manifest, but I am not able to access port 443. I don't see any errors, so it's not clear what the problem is. Any ideas?
Thanks
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: someappservice-deployment
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "false"
    loadbalancer.openstack.org/floating-network-id: "9be23551-38e2-4d27-b5ea-ea2ea1321bd6"
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: someappservices
    spec:
      containers:
      - name: someappservices
        image: myimage.azurecr.io/someappservices
        ports:
        - containerPort: 80
        - containerPort: 443
---
kind: Service
apiVersion: v1
metadata:
  name: external-http-someappservice
spec:
  selector:
    app: someappservices
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
From what I understand, you will need something like an NGINX ingress controller to handle the SSL termination, and you will also need to manage certificates. cert-manager is a nice package that can help with the certs.
Here is a write-up on how to do both in an AKS cluster:
Deploy an HTTPS enabled ingress controller on AKS
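For illustration, a minimal sketch of what the resulting Ingress could look like once an NGINX ingress controller and cert-manager are installed; the host name, issuer name, and secret name are placeholders, not values from the question:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someappservice-ingress
  annotations:
    # Ask cert-manager to obtain a certificate via a pre-created ClusterIssuer
    # (the issuer name "letsencrypt-prod" is an assumption).
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - someapp.example.com          # placeholder host
    secretName: someapp-tls        # cert-manager stores the issued cert here
  rules:
  - host: someapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-http-someappservice
            port:
              number: 80           # TLS terminates at the ingress; plain HTTP to the pod

With this pattern the pod no longer needs to listen on 443 at all; the ingress controller terminates TLS and cert-manager renews the certificate automatically.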
If I understand correctly, you want to access your service via HTTPS with simple steps. Yes, if you don't have particularly strict security requirements such as SSL certs, you can just expose the ports on the load balancer and access your service from the Internet; it's simple to configure.
The YAML file you posted looks all right. You can verify the result from the Kubernetes dashboard and the Azure portal.
You can also check with the command kubectl get svc; the output will look something like the example below.
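An illustration of the kind of output to expect (the IPs and age here are made up, not taken from the question):

NAME                           TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
external-http-someappservice   LoadBalancer   10.0.121.40   40.114.23.91   80:31679/TCP,443:30411/TCP   5m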
But if you have particularly strict security requirements, you need an NGINX ingress controller, as in the other answer. HTTPS is a network security protocol, and terminating it properly does require configuring an ingress controller and certificates.
I have created an AKS cluster and deployed a simple web server on it with the following YAML.
The Azure load balancer gives it a public IP address automatically and it works fine.
Now I would like to limit the source IP address so I can access it from one specific IP address only.
I've tried adding an Azure Firewall to the virtual network of AKS (aks-vnet-XXXXXXX) with some network rules, but it doesn't work.
Creating a NAT rule in the firewall that redirects packets to the load balancer works, but I can still access the pod via the public IP address of the load balancer.
Any suggestions?
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
(skipped something not important)
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.6
        ports:
        - containerPort: 80
What you're trying to achieve can be done with an NSG (Network Security Group) applied to the subnet where your AKS cluster resides: https://learn.microsoft.com/en-us/azure/aks/concepts-security#network-security
A more generic approach with fine-grained control requires creating an ingress controller, creating an Ingress object for your service, and applying the ingress.kubernetes.io/whitelist-source-range annotation to it, as sketched below.
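A minimal sketch of such an Ingress, assuming an NGINX ingress controller is already installed; the host and CIDR are hypothetical, and the exact annotation key varies by controller (recent NGINX ingress controllers use the nginx.ingress.kubernetes.io/ prefix):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-whitelisted
  annotations:
    # Only clients from this CIDR may reach the service; others get 403.
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.5/32"
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

Note that for the client IP to survive to the ingress controller, its LoadBalancer Service usually needs externalTrafficPolicy: Local; otherwise the whitelist sees the node's IP instead.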
I have created a Kubernetes cluster on Azure. I have deployed some pods with no frontend (microservices).
I have performed tests locally using Postman and VS Code: these microservices return either 200 OK or 500.
The problem is that in Kubernetes I get the external IP correctly, but it is impossible for me to access it from outside.
I have another Mongo container that I can access without problems. (Screenshots omitted.)
Can you help me? Thanks!!
Kubernetes is a little more complex than plain Docker containers, so it can be confusing to get things running at first. I will explain the points at which you need to configure exposure of a service.
Each container has its own IP address space, so every container can use the same port for its application. In your case you might want to use port 6060. This is the port the application needs to bind to, on all network interfaces (IP 0.0.0.0), to be reachable from the outside. This is the port you would declare as EXPOSE in your Dockerfile.
When testing locally you can map each container to a different local port: docker run -p external-port:internal-port
The port you use for EXPOSE is the port you configure as containerPort in a Pod or Deployment.
One or many pods are exposed as a load-balanced service inside Kubernetes using a Service. There you might want to map a request port - for HTTP usually 80 - to the container port, in your case 6060.
The Service can then be exposed externally using a LoadBalancer. The external IP of the load balancer is mapped to the (virtual) IP of your Service, the Service maps the request port to the container port and selects an appropriate pod using the selector. The pod contains a container listening on the container port, which then replies to your request.
The whole chain must be configured correctly to get it working. Keeping it simple (not using different ports for each application) makes it easier to get right. The sketch below puts the chain together.
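As an illustration of the whole chain for one of the microservices (the name and the 6060 port come from the question; the selector, labels, and request port 80 are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: permissions
spec:
  replicas: 1
  selector:
    matchLabels:
      app: permissions
  template:
    metadata:
      labels:
        app: permissions
    spec:
      containers:
      - name: permissions
        image: URL IMAGE              # the app inside must bind to 0.0.0.0:6060
        ports:
        - containerPort: 6060         # must match the EXPOSE/listen port of the image
---
apiVersion: v1
kind: Service
metadata:
  name: permissions
spec:
  type: LoadBalancer                  # gives the Service an external IP on Azure
  selector:
    app: permissions                  # must match the pod labels above
  ports:
  - port: 80                          # request port on the external IP
    targetPort: 6060                  # container port the traffic is forwarded to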
Did you try to hit the REST API URIs, like ExternalIP:Port/uri? They should be accessible; I also use this approach with AKS.
From your question and the YAML file in your comment, the likely reason is that you set a command on your deployment's container; this command overwrites the default command of the image. So I suspect your application does not start - you should check that.
I also suggest you check that the port you expose to the outside is the same as the port the application listens on in the image.
I have been able to bring up one of the 4 microservices.
I have tried to bring up the three remaining microservices with the same YAML (changing the image URL and the port) and these do not work.
The YAML used is this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: permissions
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: permissions
    spec:
      containers:
      - name: permissions
        image: URL IMAGE
        ports:
        - containerPort: 6060
      imagePullSecrets:
      - name: nameimage
---
apiVersion: v1
kind: Service
metadata:
  name: permissions
spec:
  type: LoadBalancer
  ports:
  - port: 6060
  selector:
    app: permissions
I added this to set resource limits for the other microservices:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: users
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: users
    spec:
      containers:
      - name: users
        image: URL IMAGE
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 6061
---
apiVersion: v1
kind: Service
metadata:
  name: users
spec:
  type: LoadBalancer
  ports:
  - port: 6061
  selector:
    app: users
As I said, I could only bring up the first one.
Any help?
Thanks!
I'd like to expose the default port (1883) and the WS port (9001) of an MQTT server on an Azure Kubernetes cluster.
Here is the deployment I currently wrote:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-server
  template:
    metadata:
      labels:
        app: mqtt-server
        type: backend
    spec:
      containers:
      - name: mqtt-server
        image: eclipse-mosquitto:1.5.4
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
        ports:
        - name: mqtt-dflt-port
          containerPort: 1883
        - name: mqtt-ws-port
          containerPort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt-server-service
spec:
  selector:
    app: mqtt-server
  type: LoadBalancer
  ports:
  - name: mqtt-dflt-port
    protocol: TCP
    port: 1883
    targetPort: 1883
  - name: mqtt-ws-port
    protocol: TCP
    port: 1884
    targetPort: 9001
When I deploy it, everything is fine, but the MQTT broker is unreachable and my service is described like this:
mqtt-server-service LoadBalancer 10.0.163.167 51.143.170.64 1883:32384/TCP,1884:31326/TCP 21m
Why aren't ports 1883/9001 forwarded as they should be?
First, make sure you're connecting to the service's cluster IP from within the cluster, not from the outside.
Don't bother pinging the service IP to figure out if the service is accessible (remember, the service's cluster IP is a virtual IP and pinging it will never work).
If you've defined a readiness probe, make sure it's succeeding; otherwise the pod won't be part of the service.
To confirm that a pod is part of the service, examine the corresponding Endpoints object with kubectl get endpoints.
If you're trying to access the service through its FQDN or a part of it (for example, myservice.mynamespace.svc.cluster.local or myservice.mynamespace) and it doesn't work, see if you can access it using its cluster IP instead of the FQDN.
Check whether you're connecting to the port exposed by the service and not the target port.
Try connecting to the pod IP directly to confirm your pod is accepting connections on the correct port.
If you can't even access your app through the pod's IP, make sure your app isn't only binding to localhost.
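A sketch of the commands behind that checklist, using the service name from the question (the pod IP shown is hypothetical):

# Does the service have endpoints? An empty list means the selector
# matches no (ready) pods.
kubectl get endpoints mqtt-server-service

# Find the pod IP, then test the container port directly from inside
# the cluster via a throwaway busybox pod.
kubectl get pods -o wide
kubectl run -it --rm --restart=Never debug --image=busybox -- telnet 10.244.1.23 1883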
I don't see anything wrong; the ports you requested are being forwarded to. The service also created temporary ports on the nodes for traffic to flow (it always does that). The service has endpoints, so everything is okay.
Just to give more context: it always does that because it needs to route traffic through some port on the node, but it cannot depend on any exact port being free, so it uses a random port from the 30000+ range (by default).
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
If you must have a known and static port assignment, you can add nodePort: some-number to your ports definition in the service, as sketched below. By default, node ports are assigned from the range 30000-32767.
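For illustration, the MQTT service from the question with fixed node ports (the specific numbers are arbitrary picks from the allowed range):

apiVersion: v1
kind: Service
metadata:
  name: mqtt-server-service
spec:
  selector:
    app: mqtt-server
  type: LoadBalancer
  ports:
  - name: mqtt-dflt-port
    port: 1883
    targetPort: 1883
    nodePort: 31883    # fixed instead of randomly assigned
  - name: mqtt-ws-port
    port: 1884
    targetPort: 9001
    nodePort: 31884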
I'm trying to redirect an ingress for a service deployed on Azure Kubernetes to HTTPS. Whatever I try doesn't work. I tried configuring the Ingress and Traefik itself (via ConfigMap) with no effect.
The config for Traefik looks like the following:
---
# Traefik_config.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
# traefik.toml
data:
  traefik.toml: |
    defaultEntryPoints = ["http","https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
        [entryPoints.https.tls]
    [frontends]
      [frontends.frontend2]
      backend = "backend1"
      passHostHeader = true
      # overrides default entry points
      entrypoints = ["http", "https"]
    [backends]
      [backends.backend1]
        [backends.backend1.servers.server1]
        url = "http://auth.mywebsite.com"
The subject of the redirection is a containerized IdentityServer API website with no TLS encryption. There are a couple of questions on the matter:
What's the best way to redirect the frontend app in Azure Kubernetes with Traefik?
In the config the frontend is numbered, i.e. "frontend2". I assume this is a sequential number of the app on Traefik's dashboard. The problem is, the dashboard only shows the total number of apps. If there are many of them, how do I figure out what the number is?
When I apply annotations to the Ingress, like "traefik.ingress.kubernetes.io/redirect-permanent: true", the respective labels are not showing up in Traefik's dashboard for the app. Is there any reason for that?
Your configuration for redirecting HTTP to HTTPS looks good. However, if you have followed the official Traefik docs to deploy on Kubernetes, the Traefik ingress controller Service will not have port 443. Make sure port 443 is opened on the Service of type LoadBalancer; once a port is opened on the Service, Azure opens the same port on the Azure load balancer. The Service YAML from the docs is here:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: LoadBalancer
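Note the missing 443. A sketch of the extra entry you would add to that ports list (the port name "websecure" is an arbitrary choice):

  - protocol: TCP
    port: 443          # matched by the ":443" entryPoint in traefik.toml
    name: websecure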
If you want to redirect all HTTP to HTTPS in your cluster, you can do the redirection in the Traefik configuration file, as you did.
If you want to redirect only some of the services, add annotations to the Ingress to achieve redirection for specific services:
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
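For illustration, a minimal Ingress carrying those annotations; the host matches the backend URL from the question, while the Ingress and Service names are hypothetical (the extensions/v1beta1 API matches Traefik 1.x-era clusters):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: auth-ingress
  annotations:
    # Accept both entry points, then redirect http to https.
    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
  - host: auth.mywebsite.com
    http:
      paths:
      - path: /
        backend:
          serviceName: auth-service    # hypothetical backend Service
          servicePort: 80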
After setting up the redirection, the Traefik dashboard reflects it.
You can also set up a permanent redirection using traefik.ingress.kubernetes.io/redirect-permanent: "true".
I've stumbled on this question while looking for a solution myself.
We are using Traefik as a load balancer and I wanted to add an HTTPS redirect to an ingress route. To do that I added an https-redirect middleware:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: https-redirect
  namespace: <your-namespace>
spec:
  redirectScheme:
    scheme: https
    permanent: true
The namespace here is important, as you need it for the annotation.
You then need to add an annotation to your Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: <your-namespace>-https-redirect@kubernetescrd
    traefik.ingress.kubernetes.io/router.tls: "true"
  ...
I found the explanation here: https://community.traefik.io/t/how-to-configure-middleware-with-kubernetes-ingress-middleware-xyz-does-not-exist/5016
I've set up an AKS cluster and am now trying to connect to it. My deployment YAML is here:
apiVersion: v1
kind: Pod
metadata:
  name: my-test
spec:
  containers:
  - name: dockertest20190205080020
    image: dockertest20190205080020.azurecr.io/dockertest
    ports:
    - containerPort: 443
If I open the dashboard, the pod is listed, but nothing tells me the external endpoint, even though it looks like it should. I have a theory that this is because the YAML file only deploys a Pod, which is somehow not able to expose an endpoint - is that the case, and if so, why? Otherwise, how can I find this endpoint?
That's not how it works; you need to read up on basic Kubernetes concepts. Pods are only containers; to expose pods you need to create services (and you need labels), and to expose pods externally you need to set the service type to LoadBalancer. You probably also want to use Deployments instead of bare Pods; it's a lot easier and more reliable.
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
So in short, you need to add labels to your pod and create a service of type LoadBalancer with selectors that match your pod's labels:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 443
  type: LoadBalancer
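And, to make the selector match, the pod from the question would need the corresponding label; a sketch where only the metadata changes (the label value MyApp is the arbitrary one used in the Service above):

apiVersion: v1
kind: Pod
metadata:
  name: my-test
  labels:
    app: MyApp    # must match the Service selector above
spec:
  containers:
  - name: dockertest20190205080020
    image: dockertest20190205080020.azurecr.io/dockertest
    ports:
    - containerPort: 443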