I am trying to set up an Azure Kubernetes cluster with an HTTPS ingress controller for separate dev, staging, and prod environments. I have followed the Microsoft Azure guide "Create an HTTPS ingress controller on Azure Kubernetes Service (AKS)", which shows how to set up an HTTPS ingress controller for a single namespace, but my end goal is to have separate namespaces for the dev, staging, and prod environments. According to the answers to this question, the way to do this is to have the ingress controller in one namespace (ingress in my case), and then separate ingress rules in each namespace (dev in my case).
Hence I set up the NGINX ingress controller and cert-manager in the ingress namespace:
# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
# Label the ingress namespace to disable resource validation
kubectl label namespace ingress cert-manager.io/disable-validation=true
# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
helm repo update
# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
--namespace ingress \
--version v0.16.1 \
--set installCRDs=true \
--set nodeSelector."kubernetes\.io/os"=linux \
--set webhook.nodeSelector."kubernetes\.io/os"=linux \
--set cainjector.nodeSelector."kubernetes\.io/os"=linux
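Before moving on, a quick sanity check that both releases came up (namespace and release names as used above):
# Both the ingress controller and cert-manager pods should be Running
kubectl get pods --namespace ingress
# The ingress controller service should have been assigned a public IP
kubectl get services --namespace ingress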
I then create a cluster-issuer.yml file with the following:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: email@address.com
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux
which I apply with
$ kubectl apply -f cluster-issuer.yml
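It can also help to confirm that the ClusterIssuer has registered with the ACME server before creating any ingress (resource name letsencrypt as in the manifest above):
# The issuer should report Ready=True once the ACME account is registered
kubectl get clusterissuer letsencrypt
kubectl describe clusterissuer letsencrypt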
Next I create an ingress rule on the dev namespace with the following ingress.yml file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-dev
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - domain.azure.com
    secretName: tls-secret-dev
  rules:
  - host: domain.azure.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
        path: /dev/my-service(/|$)(.*)
and apply it:
$ kubectl apply -f ingress.yml
Now I check to see whether a secret has been created:
$ kubectl get certificate -n dev
NAME             READY   SECRET           AGE
tls-secret-dev   False   tls-secret-dev   61s
So it seems that something went wrong when creating the secret. If I look at the certificate, it seems that a certificate is requested, but it never gets further than that:
$ kubectl describe certificate tls-secret-dev -n dev
Name: tls-secret-dev
Namespace: dev
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1beta1
Kind: Certificate
...
Status:
Conditions:
Last Transition Time: 2021-02-16T13:47:33Z
Message: Issuing certificate as Secret does not exist
Reason: DoesNotExist
Status: False
Type: Ready
Last Transition Time: 2021-02-16T13:47:33Z
Message: Issuing certificate as Secret does not exist
Reason: DoesNotExist
Status: True
Type: Issuing
Next Private Key Secret Name: tls-secret-dev-6ngw8
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 70s cert-manager Issuing certificate as Secret does not exist
Normal Generated 70s cert-manager Stored new private key in temporary Secret resource "tls-secret-dev-6ngw8"
Normal Requested 70s cert-manager Created new CertificateRequest resource "tls-secret-dev-vtlbd"
Looking at the certificate request, an order is created:
$ kubectl describe certificaterequest tls-secret-dev-vtlbd -n dev
Name: tls-secret-dev-vtlbd
Namespace: dev
Labels: <none>
Annotations: cert-manager.io/certificate-name: tls-secret-dev
cert-manager.io/certificate-revision: 1
cert-manager.io/private-key-secret-name: tls-secret-dev-6ngw8
API Version: cert-manager.io/v1beta1
Kind: CertificateRequest
...
Status:
Conditions:
Last Transition Time: 2021-02-16T13:47:33Z
Message: Waiting on certificate issuance from order dev/tls-secret-dev-vtlbd-526778456: ""
Reason: Pending
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal OrderCreated 3m3s cert-manager Created Order resource dev/tls-secret-dev-vtlbd-526778456
Inspecting the order is where the trail seems to run cold:
$ kubectl describe order tls-secret-dev-vtlbd-526778456 -n dev
Name: tls-secret-dev-vtlbd-526778456
Namespace: dev
Labels: <none>
Annotations: cert-manager.io/certificate-name: tls-secret-dev
cert-manager.io/certificate-revision: 1
cert-manager.io/private-key-secret-name: tls-secret-dev-6ngw8
API Version: acme.cert-manager.io/v1beta1
Kind: Order
...
Status:
Events: <none>
Question: How do I get the certificate manager to stop waiting on certificate issuance so I can finish setting up my HTTPS ingress controller?
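A few follow-up checks that usually show why an Order stalls like this, assuming the namespaces and release names used above:
# The Challenge resources behind the Order show why HTTP-01 validation is stuck
kubectl get challenges -n dev
kubectl describe challenges -n dev
# The cert-manager controller logs normally contain the underlying error
kubectl logs -n ingress deploy/cert-manager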
Related
I have created an AKS cluster with 2 services exposed using an ingress controller.
Below is the YAML file for the ingress with TLS:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xyz-office-ingress02
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - office01.xyz.com
    secretName: tls-office-secret
  rules:
  - host: office01.xyz.com
  - http:
      paths:
      - path: /(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: office-webapp
            port:
              number: 80
      - path: /api/
        pathType: Prefix
        backend:
          service:
            name: xyz-office-api
            port:
              number: 80
kubectl describe ing
Name: xyz-office-ingress02
Labels: <none>
Namespace: default
Address: <EXTERNAL Public IP>
Ingress Class: <none>
Default backend: <default>
TLS:
tls-office-secret terminates office01.xyz.com
Rules:
Host Path Backends
---- ---- --------
*
/(/|$)(.*) office-webapp:80 (10.244.1.18:80,10.244.2.16:80)
/api/ xyz-office-api:80 (10.244.0.14:8000,10.244.1.19:8000)
Annotations: cert-manager.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/use-regex: true
Events: <none>
Using the IP I am able to access both services; however, when using the DNS name it does not work and gives a 404 error.
Cleaning up remarks from comments: basically, the issue is with the ingress rules definition. We have the following:
rules:
- host: office01.xyz.com
- http:
    paths:
    ...
We know that connecting to the ingress directly (by IP) works, while querying it through DNS gives a 404.
The reason for the 404 is that, when entering with a DNS name, you match the first rule, in which no backend is defined.
One way to fix this is to put the host and the http paths in the same rule, e.g.:
spec:
  tls:
    ...
  rules:
  - host: office01.xyz.com
    http: #no "-", not a new entry => http & host belong to a single rule
      paths:
      - path: /(/|$)(.*)
        ...
      - path: /api/
        ...
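Put together, the corrected rules section for xyz-office-ingress02 would look roughly like this (same services and paths as in the question, merged into a single rule):
rules:
- host: office01.xyz.com
  http:
    paths:
    - path: /(/|$)(.*)
      pathType: Prefix
      backend:
        service:
          name: office-webapp
          port:
            number: 80
    - path: /api/
      pathType: Prefix
      backend:
        service:
          name: xyz-office-api
          port:
            number: 80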
I tried to reproduce the same issue in my environment and got the results below.
I created the DNS zone for the cluster.
Created the namespace:
kubectl create namespace ingress-basic
I added the Helm repo and used Helm to deploy the ingress controller:
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace <namespace-name> \
--set controller.replicaCount=2
When I check the logs, I can see the public IP of the load balancer.
I created some role assignments so the cluster can update the DNS zone.
I assigned the node pool's managed identity DNS Zone Contributor rights on the domain zone:
az role assignment create --assignee $UserClientId --role 'DNS Zone Contributor' --scope $DNSID
I ran the following Helm command to deploy external-dns for the DNS zone:
helm install external-dns bitnami/external-dns --namespace ingress-basic --set provider=azure --set txtOwnerId=<cluster-name> --set policy=sync --set azure.resourceGroup=<rg-name> --set azure.tenantId=<tenant-id> --set azure.subscriptionId=<sub-id> --set azure.useManagedIdentityExtension=true --set azure.userAssignedIdentityID=<UserClient-Id>
I installed cert-manager using Helm:
helm install cert-manager jetstack/cert-manager \
--namespace ingress-basic \
--version vXXXX
I created and deployed the application:
vi nginxfile.yaml
kubectl apply -f file.yaml
I created the ingress route, which routes traffic to the application.
After that, verify whether the certificate gets created; wait a few minutes while the DNS zone is updated.
I created the cert-manager resources and deployed them to the cluster:
kubectl apply -f file.yaml --namespace ingress-basic
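As a rough example of the verification step mentioned above (names depend on what the ingress and issuer request):
# Check whether the certificate has been issued yet
kubectl get certificate --namespace ingress-basic
kubectl describe certificate <certificate-name> --namespace ingress-basic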
Please refer to this URL for more details.
I installed the ingress controller on my AKS cluster using helm install. I also created an ingress rule for my service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-demo-ingress
  namespace: my-demo
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: mydemoingress.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: api-gateway
            port:
              number: 8080
When I deployed the above ingress rule, I noticed that the backend has no IP, as seen below (api-gateway:8080 ()):
kubectl describe ing my-demo-ingress -n my-demo
Name: my-demo-ingress
Labels: app.kubernetes.io/managed-by=Helm
Namespace: my-demo
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
mydemoingress.com
/(.*)   api-gateway:8080 (<none>)
Annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/ssl-redirect: false
nginx.ingress.kubernetes.io/use-regex: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 4m34s nginx-ingress-controller Scheduled for sync
Normal Sync 4m34s nginx-ingress-controller Scheduled for sync
No IP address gets assigned to the ingress controller.
However, when I try this same setup on my local k3s cluster, the IP is assigned correctly. What am I doing wrong?
Update: Helm install command for ingress controller:
NAMESPACE=ingress-basic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace \
--namespace $NAMESPACE \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
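Two checks that usually narrow this down (names taken from the manifests above):
# Did the ingress controller's LoadBalancer service get an external IP from Azure?
kubectl get services -n ingress-basic
# An empty ENDPOINTS column means the service selector does not match any pods
kubectl get endpoints api-gateway -n my-demo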
• AFAIK, the syntax used for ‘servicePort’ and ‘serviceName’ should be as given in the sample YAML file below. Also, the path for the specified service name might be missing in the YAML you shared, due to which the port mapping may not be correct when the service is provisioned in the AKS cluster, and hence the internal load balancer cannot reach the created service.
Sample YAML file: -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: ingress-basic
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: aks-helloworld
          servicePort: 80
        path: /(.*)
      - backend:
          serviceName: ingress-demo
          servicePort: 80
        path: /hello-world-two(/|$)(.*)
• Thus, apart from the above-stated modifications, I would also suggest that you check whether you have assigned an IP address that is not already in use in your virtual network, and that you have deployed an internal load balancer using that IP address in the AKS cluster, as below: -
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
These modifications should help you resolve your issue with the backend IP address pools.
Also, do refer to the link below for more information: -
https://microsoft.github.io/AzureTipsAndTricks/blog/tip253.html
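For example, if the values snippet above is saved as internal-ingress.yaml (file name is just an example), it can be passed to the ingress controller's Helm install:
helm install nginx-ingress ingress-nginx/ingress-nginx \
    --namespace ingress-basic \
    -f internal-ingress.yaml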
I’m deploying Istio on Azure Kubernetes Service (AKS) and I have the following question:
Is it possible to deploy Istio using an internal load balancer? It looks like it is deployed in Azure with a public load balancer by default. What do I need to change to make it use an internal load balancer?
To answer the second question:
It is possible to add AKS annotation for an internal load balancer according to AKS documentation:
To create an internal load balancer, create a service manifest named internal-lb.yaml with the service type LoadBalancer and the azure-load-balancer-internal annotation as shown in the following example:
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
So you can set this annotation using Helm with the following --set:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set gateways.istio-ingressgateway.serviceAnnotations.'service\.beta\.kubernetes\.io/azure-load-balancer-internal'="true" > aks-istio.yaml
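The generated aks-istio.yaml can then be applied in the usual way, for example:
# Create the target namespace if it does not exist yet, then apply the rendered manifest
kubectl create namespace istio-system
kubectl apply -f aks-istio.yaml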
As mentioned in the comments, you should stick to one question per post, as advised here. So I suggest creating a second post with the other question.
Hope it helps.
Update:
For istioctl you can do the following:
Generate a manifest file for your Istio deployment; for this example I used the demo profile.
istioctl manifest generate --set profile=demo > istio.yaml
Modify istio.yaml and search for type: LoadBalancer.
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
  labels:
    app: istio-ingressgateway
    release: istio
    istio: ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
  ports:
Add the annotation for the internal load balancer like this:
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  labels:
    app: istio-ingressgateway
    release: istio
    istio: ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
  ports:
After saving the changes, deploy the modified istio.yaml to your K8s cluster using:
kubectl apply -f istio.yaml
After that you can verify that the annotation is present in the istio-ingressgateway service.
$ kubectl get svc istio-ingressgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/azure-load-balancer-internal":"true"},"labels":{"app":"istio-ingressgateway","istio":"ingressgateway","release":"istio"},"name":"istio-ingressgateway","namespace":"istio-system"},"spec":{"ports":[{"name":"status-port","port":15020,"targetPort":15020},{"name":"http2","port":80,"targetPort":80},{"name":"https","port":443},{"name":"kiali","port":15029,"targetPort":15029},{"name":"prometheus","port":15030,"targetPort":15030},{"name":"grafana","port":15031,"targetPort":15031},{"name":"tracing","port":15032,"targetPort":15032},{"name":"tls","port":15443,"targetPort":15443}],"selector":{"app":"istio-ingressgateway"},"type":"LoadBalancer"}}
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
creationTimestamp: "2020-01-27T13:51:07Z"
Hope it helps.
I'm trying to order a certificate with cert-manager for my istio-ingress-gateway. For this I installed Istio (1.2.2) on my Kubernetes cluster (1.13.7) on AKS, including cert-manager. After setting up a ClusterIssuer and ordering a certificate with a dns-01 challenge against my Azure DNS zone, I'm getting the following error message in my cert-manager pod. This message gets logged every ten seconds:
I0813 14:48:10.597656 1 controller.go:213] cert-manager/controller/challenges "level"=0 "msg"="syncing resource" "key"="istio-system/controller-certificate-531021094-0"
I0813 14:48:10.597940 1 dns.go:112] Checking DNS propagation for "<myurl>.westeurope.cloudapp.azure.com" using name servers: [10.0.0.10:53]
E0813 14:48:10.616908 1 sync.go:180] cert-manager/controller/challenges "msg"="propagation check failed" "error"="DNS record for \"<myurl>.westeurope.cloudapp.azure.com\" not yet propagated" "dnsName"="<myurl>.westeurope.cloudapp.azure.com" "resource_kind"="Challenge" "resource_name"="controller-certificate-531021094-0" "resource_namespace"="istio-system" "type"="dns-01"
I0813 14:48:10.616976 1 controller.go:219] cert-manager/controller/challenges "level"=0 "msg"="finished processing work item" "key"="istio-system/controller-certificate-531021094-0"
I installed istio with the following command:
helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
--values install/kubernetes/helm/istio/values-istio-sds-auth.yaml \
--set gateways.istio-ingressgateway.sds.enabled=true \
--set gateways.istio-egressgateway.enabled=false \
--set certmanager.enabled=true \
--set certmanager.email=<myemail> \
--set certmanager.tag=v0.8.1
I tried other cert-manager versions (0.6 and 0.8) as well, but I got the same results. A separate cert-manager installation gave me the same results too.
This is the yaml file for my issuer...
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: istio-system
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: <myEmail>
    privateKeySecretRef:
      name: istio-ingressgateway-certs-private-key
    dns01:
      providers:
      - name: azure-dns
        azuredns:
          clientID: <myappID>
          clientSecretSecretRef:
            key: client-secret
            name: azuredns-config
          hostedZoneName: <myurl>.westeurope.cloudapp.azure.com
          resourceGroupName: <myresourcegroup>
          subscriptionID: <mysubID>
          tenantID: <mytenantID>
...and for the certificate:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: controller-certificate
  namespace: istio-system
spec:
  secretName: istio-ingressgateway-certs
  issuerRef:
    name: letsencrypt-staging
  commonName: <myUrl>.westeurope.cloudapp.azure.com
  dnsNames:
  - <myUrl>.westeurope.cloudapp.azure.com
  acme:
    config:
    - dns01:
        provider: azure-dns
      domains:
      - <myUrl>.westeurope.cloudapp.azure.com
In Azure I created a DNS zone with the name <myurl>.westeurope.cloudapp.azure.com. Then I created an A record pointing at the istio-ingress IP exposed by the cluster LoadBalancer. The following commands enable cert-manager to add the TXT entry in the DNS zone required by Let's Encrypt. The first one creates a secret for the issuer and the second one creates a service principal to access the DNS zone.
kubectl create secret generic azuredns-config -n istio-system --from-literal=client-secret=<myPW>
az ad sp create-for-rbac --name <myPrincipal> --role="DNS Zone Contributor" --scopes="/subscriptions/<mysubID>/resourceGroups/<myresourcegroup>"
The TXT entry is then successfully created in the dns zone but the certificate is not created as seen in the cert-manager logs above.
I'm using https://digwebinterface.com to debug the DNS zone. When I use dig TXT _acme-challenge.myurl.westeurope.cloudapp.azure.com. @mygivennameserver I am able to retrieve the ACME token. When I try this without specifying the nameserver it does not work. If I understand correctly, this should also work once propagation is complete, right?
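Roughly, the same check from a shell looks like this (zone and nameserver names are placeholders):
# Ask the zone's own nameserver directly - this returns the ACME TXT token
dig TXT _acme-challenge.<myurl>.westeurope.cloudapp.azure.com @<mygivennameserver> +short
# Ask the default resolver - this is what keeps failing until propagation completes
dig TXT _acme-challenge.<myurl>.westeurope.cloudapp.azure.com +short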
I've read that it takes up to 24h for Azure to update the DNS records. Does this also apply to TXT records?
I tried to point cert-manager at the nameserver of the DNS zone with the following installation. This gave me the same results, except that the other nameservers are listed in the cert-manager log. Are there any mistakes in the installation?
helm install \
--name cert-manager \
--namespace istio-system \
--version v0.9.1 \
--set webhook.enabled=false \
--set extraArgs='{--dns01-recursive-nameservers-only,--dns01-self-check-nameservers=8.8.8.8:53\,1.1.1.1:53\,<mynameserver>}' \
jetstack/cert-manager
Running kubectl describe challenge -n istio-system results in:
Name: controller-certificate-531021094-0
Namespace: istio-system
Labels: acme.cert-manager.io/order-name=controller-certificate-531021094
Annotations: <none>
API Version: certmanager.k8s.io/v1alpha1
Kind: Challenge
Metadata:
Creation Timestamp: 2019-08-13T14:43:57Z
Finalizers:
finalizer.acme.cert-manager.io
Generation: 4
Owner References:
API Version: certmanager.k8s.io/v1alpha1
Block Owner Deletion: true
Controller: true
Kind: Order
Name: controller-certificate-531021094
UID: c740fea3-bdd8-11e9-80fd-0a58ac1f0fb7
Resource Version: 31205901
Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/istio-system/challenges/controller-certificate-531021094-0
UID: c7d72ecf-bdd8-11e9-80fd-0a58ac1f0fb7
Spec:
Authz URL: https://acme-staging-v02.api.letsencrypt.org/acme/authz-v3/3805423
Config:
Dns 01:
Provider: azure-dns
Dns Name: <myurl>.westeurope.cloudapp.azure.com
Issuer Ref:
Name: letsencrypt-staging
Key: bSjnfaFTApp6gPNsHc9-dPdmwsTwQJAd73CXmBrVc84
Token: Vn5Z7tBKajxnq1KrOBywP016VauoibCPcYsOESXhV4Q
Type: dns-01
URL: https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/3805423/RTxciA
Wildcard: false
Status:
Presented: true
Processing: true
Reason: Waiting for dns-01 challenge propagation: DNS record for "<myurl>.westeurope.cloudapp.azure.com" not yet propagated
State: pending
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 52m cert-manager Challenge scheduled for processing
Normal Presented 52m cert-manager Presented challenge using dns-01 challenge mechanism
I was stuck on this very same issue for a while. First of all, I never had to explicitly create Certificate resources myself; that is what cert-manager will attempt to do.
On top of that, adding the following annotations to the ingress solved my issue:
cert-manager.io/cluster-issuer: hydrantid
kubernetes.io/tls-acme: 'true'
In my case I am using hydrantid as the issuer, but in the given example it would be letsencrypt.
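For illustration, on an ingress using the letsencrypt cluster issuer those annotations would look roughly like this:
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/tls-acme: "true"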
I am trying to configure an Ambassador Gateway on Kubernetes with Let's Encrypt and cert-manager on Azure.
I am receiving the following errors in the cert-manager logs -
Error getting certificate 'ambassador-certs': secret "ambassador-certs" not found
certificates controller: Re-queuing item "default/<certificate-name>" due to error processing: http-01 self check failed for domain "<certificate-name>"
If I then create the secret in Kubernetes called ambassador-certs it starts to log the following -
Re-queuing item "default/<certificate-name>" due to error processing: no data for "tls.crt" in secret 'default/ambassador-certs'
My configuration is as follows -
Kubernetes Secret
apiVersion: v1
kind: Secret
metadata:
  name: ambassador-certs
  namespace: default
type: Opaque
Kubernetes Certificate
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: <name>
spec:
  secretName: ambassador-certs
  commonName: <domain-name>
  dnsNames:
  - <domain-name>
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - <domain-name>
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
Kubernetes ClusterIssuer
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
I installed Ambassador as directed from their site -
kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml
When I tried this with an Ingress Controller the certificates were created and added to the secrets successfully. What am I missing with Ambassador, please?
Finally, according to the Ambassador website this is all I need to do:
Certificate Manager
Jetstack's cert-manager lets you easily provision and manage TLS certificates on Kubernetes. No special configuration is required to use Ambassador with cert-manager.
Once cert-manager is running and you have successfully created the issuer, you can request a certificate such as the following:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: cloud-foo-com
  namespace: default
spec:
  secretName: ambassador-certs
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: cloud.foo.com
  dnsNames:
  - cloud.foo.com
  acme:
    config:
    - dns01:
        provider: clouddns
      domains:
      - cloud.foo.com
Note the secretName line above. When the certificate has been stored in the secret, restart Ambassador to pick up the new certificate.
Thank you. Slowly dying inside trying to resolve this :-)
EDIT
I deleted everything and reconfigured, first with Ambassador using HTTP. That worked. I was able to browse to my httpbin.org route over HTTP successfully. I then switched to port 443 in the Ambassador Service YAML and re-applied everything as above.
This is still being logged in the cert-manager logs
Re-queuing item "default/<certificate-name>" due to error processing: no data for "tls.crt" in secret 'default/ambassador-certs'
kubectl describe secret ambassador-certs
Name: ambassador-certs
Namespace: default
Labels: <none>
Annotations:
Type: Opaque
Data
====
This basically means that the challenge failed.
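A common way to dig into a failed http-01 self check (not specific to Ambassador) is to confirm the challenge URL is reachable from outside the cluster and to look at the certificate's status; the token below is a placeholder:
# cert-manager serves the challenge response at this well-known path;
# if this does not answer, the self check keeps failing
curl -v http://<domain-name>/.well-known/acme-challenge/<token>
# The certificate's status and events usually show why the self check fails
kubectl describe certificate <certificate-name> -n default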