How to forward requests from CloudFront to an Istio host - amazon-cloudfront

I am facing a challenge forwarding requests from CloudFront to Istio. I have a service running behind an Istio gateway, with the host configured as follows:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: <name>
  namespace: <namespace>
spec:
  gateways:
  - <gateway>
  hosts:
  - <host>
  http:
  - corsPolicy:
      allowCredentials: true
      allowHeaders:
      .........
      ..........
I also have CloudFront configured for my UI. I want a URL relative to my UI, so I have configured a behavior for /login that I want to forward to the host above. However, CloudFront is not able to forward the request.
Notes:
I tried matching the TLS protocol on both sides.
I matched the ACM certificate entry as well.
Still, I am getting a 404 on this. Any help would be greatly appreciated.
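For context, here is a minimal sketch of the kind of Gateway that such a VirtualService binds to (all names, the TLS secret reference, and the host below are placeholders, not the actual manifest):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: <gateway>
  namespace: <namespace>
spec:
  selector:
    istio: ingressgateway           # default Istio ingress gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: <tls-secret>  # placeholder: secret holding the origin certificate and key
    hosts:
    - <host>                        # Istio matches this (and the VirtualService hosts) against the Host header the origin receives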

Related

nginx Ingress controller on Kubernetes: not able to access ingress endpoints

I installed the nginx ingress controller on an AKS cluster, but I am not able to access the ingress endpoints exposed by our app. From initial analysis, the ingress endpoints have been assigned the external IP of one of the nodes, whereas the ingress controller service has a different IP.
What am I doing wrong?
$ kubectl get pods --all-namespaces | grep ingress
kube-system   ingress-nginx-58ftggg-4xc56   1/1   Running
$ kubectl get svc
kubernetes   ClusterIP   172.16.0.1   <none>   443/TCP
$ kubectl get ingress
vault-ingress-documentation   10.145.13.456
$ kubectl describe ingress vault-ingress-documentation
Name:             vault-ingress-documentation
Namespace:        corebanking
Address:          10.145.13.456
Default backend:  default-http-backend:80 (<error: default-http-backend:80 not found>)
$ kubectl get services -n kube-system | grep ingress
ingress-nginx   LoadBalancer   172.16.160.33   10.145.13.456   80:30389/TCP,443:31812/TCP
I tried to reproduce the same in my environment and got the results below.
I created a deployment, which creates the replica set and pods on the nodes, and we can see the pods are up and running:
kubectl create -f deployment.yaml
I then created the service, which is reachable inside the cluster via its ClusterIP:
kubectl create -f service.yaml
To expose the application externally, I created ingress rules using an ingress.yaml file.
I added the ingress class annotation to the ingress.yaml file like below:
annotations:
  kubernetes.io/ingress.class: nginx
At this point the ingress rule is created with an empty address.
When I tried to access the application, I could not. To access it, add the load balancer IP together with the domain name to /etc/hosts.
Now I was able to connect to the application via the service IP, but it was still not exposed on the external IP.
To expose it on the external IP, I added the following annotations:
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/rewrite-target: /
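For illustration, a minimal ingress_rules.yaml along these lines might look roughly as follows (the host name, backend service name, and port are assumptions, not values taken from the cluster above):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault-ingress-documentation
  namespace: corebanking
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: vault.example.com          # assumed hostname, mapped to the load balancer IP in /etc/hosts
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: vault-docs         # assumed backend Service name
            port:
              number: 80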
After that, I applied the changes to the ingress_rules.yaml file:
kubectl replace --force -f ingress_rules.yaml
OR
kubectl create -f ingress_rules.yaml
Now the ingress shows an address, and using that I am able to access the application.

DNS doesn't remove a not-ready pod in AKS with Azure CNI enabled

How does AKS make a not-ready pod unavailable to accept requests? It only works if you have a service in front of that deployment, correct?
I'd like to start by explaining what I noticed in AKS without Azure CNI, and then go on to what I have been seeing in AKS with Azure CNI enabled.
In AKS without Azure CNI, if I curl a not-ready pod behind a service, like curl -I some-pod.some-service.some-namespace.svc.cluster.local:8080, the response is an unresolvable hostname or something similar. In my understanding this means DNS doesn't have that entry, and this is how AKS normally keeps not-ready pods from receiving requests.
In AKS with Azure CNI enabled, if I execute the same request against a not-ready pod, it is able to resolve the hostname and send the request into the pod. One caveat: when I send a request through the external private IP of that service, the request doesn't reach the not-ready pod, which is expected and seems right. But when I run the request mentioned above, curl -I some-pod.some-service.some-namespace.svc.cluster.local:8080, it works, although it shouldn't. Why does DNS in the case of Azure CNI have that entry?
Is there anything I can do to configure Azure CNI to behave more like the default AKS behavior, where a curl request like that either does not resolve the hostname or refuses the connection?
Assuming that "not ready pod" refers to a pod whose readiness probe is failing: the kubelet uses readiness probes to know when a container is ready to start accepting traffic. A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers. [Reference]
However, the logic determining the readiness of the pod might or might not have anything to do with whether the pod can actually serve requests; it depends entirely on what the user configures.
For instance, consider a Pod with the following manifest:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness
  name: readiness-pod
spec:
  containers:
  - name: readiness-container
    image: nginx
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Readiness is decided based on the existence of the file /tmp/healthy, irrespective of whether nginx is serving the application. So we run the application and expose it using a Service readiness-svc (sketched below), then observe the following:
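A Service consistent with the kubectl describe output shown below could look roughly like this (a sketch; the exact manifest used is not shown here):
apiVersion: v1
kind: Service
metadata:
  name: readiness-svc
  labels:
    test: readiness
spec:
  selector:
    test: readiness      # matches the readiness-pod label above
  ports:
  - port: 80
    targetPort: 80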
kubectl exec readiness-pod -- /bin/bash -c 'if [ -f /tmp/healthy ]; then echo "/tmp/healthy file is present";else echo "/tmp/healthy file is absent";fi'
/tmp/healthy file is absent
kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE    IP            NODE                                NOMINATED NODE   READINESS GATES
readiness-pod   0/1     Running   0          11m    10.240.0.28   aks-nodepool1-29819654-vmss000000   <none>           <none>
source-pod      1/1     Running   0          6h8m   10.240.0.27   aks-nodepool1-29819654-vmss000000   <none>           <none>
kubectl describe svc readiness-svc
Name: readiness-svc
Namespace: default
Labels: test=readiness
Annotations: <none>
Selector: test=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.23.194
IPs: 10.0.23.194
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints:
Session Affinity: None
Events: <none>
kubectl exec -it source-pod -- bash
root@source-pod:/# curl -I readiness-svc.default.svc.cluster.local:80
curl: (7) Failed to connect to readiness-svc.default.svc.cluster.local port 80: Connection refused
root@source-pod:/# curl -I 10-240-0-28.default.pod.cluster.local:80
HTTP/1.1 200 OK
Server: nginx/1.21.3
Date: Mon, 13 Sep 2021 14:50:17 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 07 Sep 2021 15:21:03 GMT
Connection: keep-alive
ETag: "6137835f-267"
Accept-Ranges: bytes
Thus, we can see that when we try to connect from source-pod to the service readiness-svc.default.svc.cluster.local on port 80, the connection is refused. This is because the kubelet did not find the /tmp/healthy file in the readiness-pod container to perform a cat operation, consequently marking the Pod readiness-pod not ready to serve traffic and removing it from the backend of the Service readiness-svc. However, the nginx server on the pod can still serve a web application, and it will continue to do so if you connect directly to the pod.
Readiness probe failures of containers do not remove the DNS records of Pods. The DNS records of a Pod share its lifespan with the Pod itself.
This behavior is characteristic of Kubernetes and does not change with network plugins. We have attempted to reproduce the issue and have observed the same behavior with AKS clusters using both the kubenet and Azure CNI network plugins.
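One way to see why those per-Pod records keep resolving: on many clusters the CoreDNS kubernetes plugin is configured with the pods insecure option, which answers <dashed-ip>.<namespace>.pod.cluster.local queries by synthesizing the IP from the name itself, without consulting endpoint readiness. A typical coredns ConfigMap fragment looks roughly like the sketch below (the exact Corefile on a given AKS cluster may differ):
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure   # answers <dashed-ip>.<ns>.pod.cluster.local regardless of pod readiness
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }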

Kubernetes network policy to disable all internet connections for a specific namespace on AWS EKS

I want to prevent the container from accessing the public internet.
After a lot of research, I found this example: DENY external egress traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-deny-external-egress
spec:
  podSelector:
    matchLabels:
      app: foo
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
    to:
    - namespaceSelector: {}
But it does not work: I ran wget https://google.com and got a successful response.
Any hint is appreciated.
The network policy works fine with Calico.
A NetworkPolicy has no effect on a cluster using the Flannel network plugin. As mentioned in this link, Flannel is focused on networking; for network policy, other projects such as Calico can be used.
On a cluster using Calico, the network policy blocks traffic as expected. I have two clusters, one using Flannel and one using Calico, and the test works as expected on Calico.
Logs:
$ kubectl apply -f networkpolicy.yaml
networkpolicy.networking.k8s.io/foo-deny-external-egress created
ubuntu@calico-master-1:~$ kubectl run busybox --image=busybox --restart=Never -- sleep 3600
pod/busybox created
$ kubectl get networkpolicy
NAME                       POD-SELECTOR   AGE
foo-deny-external-egress   run=busybox    30m
$ kubectl exec -it busybox -- /bin/sh
/ # wget https://google.com
Connecting to google.com (74.125.193.102:443)
wget: can't connect to remote host (74.125.193.102): Connection timed out
On the cluster using Flannel, creating the network policy has no effect:
ubuntu@k8s-flannel:~$ kubectl exec -it busybox -- /bin/sh
/ # curl https://google.com
/bin/sh: curl: not found
/ # wget https://google.com
Connecting to google.com (216.58.207.238:443)
wget: note: TLS certificate validation not implemented
Connecting to www.google.com (172.217.20.36:443)
saving to 'index.html'
index.html 100% |*************************************************************************************************************************************************| 12498 0:00:00 ETA
'index.html' saved
Apply policy
$ kubectl apply -f networkpolicy.yaml
networkpolicy.networking.k8s.io/foo-deny-external-egress created
$ kubectl get networkpolicy
NAME                       POD-SELECTOR   AGE
foo-deny-external-egress   run=busybox    32m
We can still reach the outside network, since the network policy does not work on Flannel.
$ kubectl exec -it busybox -- /bin/sh
/ # wget https://google.com
Connecting to google.com (172.217.22.174:443)
wget: note: TLS certificate validation not implemented
Connecting to www.google.com (172.217.20.36:443)
saving to 'index.html'
index.html 100% |*************************************************************************************************************************************************| 12460 0:00:00 ETA
'index.html' saved
The default CNI on EKS doesn't support network policies. You should either:
install Calico as described in https://docs.aws.amazon.com/eks/latest/userguide/calico.html, or
use security group policies for pods: https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html
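Once a policy engine such as Calico is in place, a hedged sketch of a namespace-wide policy matching the question's goal (deny all external egress for every pod in one namespace) could look like this; the policy name and namespace are placeholders:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-external-egress
  namespace: my-namespace      # placeholder: the namespace to lock down
spec:
  podSelector: {}              # empty selector = every pod in this namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}    # allow egress only to pods in the cluster; destinations outside the cluster are denied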

Azure Application Gateway, Istio, and TLS

I am dealing with the SSL connection from the Azure Web Application Firewall to the Kubernetes cluster via Istio.
The connection from the client to the Azure WAF is already TLS-encrypted.
As far as I understand, I have to encrypt the data again in the WAF. Can I use the same certificates that I already used for the connection to the WAF?
Here I would proceed as described in this article:
application-gateway-end-to-end-ssl-powershell
Then I have to install the same certificates in Istio's ingress gateway.
As mentioned here:
Configure a TLS ingress gateway
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "httpbin.example.com"
EOF
Is that correct so far?
You need to use the same certificate you specified in the Application Gateway (i.e., the certificate the Application Gateway expects) in the Istio gateway. Your gateway configuration looks valid, as long as the certificate and the host are the same.
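As a sketch of how the certificate typically ends up at the serverCertificate/privateKey paths in the Gateway above: in older Istio releases that used this file-mount approach, the ingress gateway mounts a secret named istio-ingressgateway-certs from the istio-system namespace at /etc/istio/ingressgateway-certs. The certificate data below is obviously a placeholder:
apiVersion: v1
kind: Secret
metadata:
  name: istio-ingressgateway-certs   # name expected by the file-mount approach in older Istio versions
  namespace: istio-system
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder: the same certificate the Application Gateway trusts for the backend
  tls.key: <base64-encoded private key>   # placeholder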
For me, it finally worked.
In my situation, the Application Gateway was deployed with its own virtual network and subnet, so I set up VNet peering and thought it would be enough. But it wasn't.
After some days of struggling, I found out that my virtual network subnet was the same as the Docker network inside AKS. When I recreated the Application Gateway with a new subnet that does not overlap any part of the Docker subnet, it worked.

How to enable TLS/SSL (HTTPS) in Azure Service Fabric Mesh for an ASP.NET Core application

I am deploying a new Mesh app with an ASP.NET Core API container image. I am able to successfully deploy and access the API using http://[]:80. I used the following configuration in the gateway YAML file:
http:
- name: BenApiHTTP
  port: 80
  hosts:
  - name: "*"
    routes:
    - name: benapi
      match:
        path:
          value: "/benapiservice/"
          rewrite: "/"
          type: Prefix
      destination:
        applicationName: BenApplication
        serviceName: BenApi
        endpointName: BenApiListener
Now I want to use my own SSL certificate and enable the same API over HTTPS, i.e. on port 443. So my questions are:
How do I upload the SSL certificate in Mesh?
What YAML updates do I have to make in gateway.yaml?
How do I set the FQDN for the URL?
