Azure Kubernetes - Istio accessing grafana, prometheus, jaeger, kiali & envoy externally?

I have used the following configuration to set up Istio:
cat << EOF | kubectl apply -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  # Use the default profile as the base
  # More details at: https://istio.io/docs/setup/additional-setup/config-profiles/
  profile: default
  # Enable the addons that we will want to use
  addonComponents:
    grafana:
      enabled: true
    prometheus:
      enabled: true
    tracing:
      enabled: true
    kiali:
      enabled: true
  values:
    global:
      # Ensure that the Istio pods are only scheduled to run on Linux nodes
      defaultNodeSelector:
        beta.kubernetes.io/os: linux
    kiali:
      dashboard:
        auth:
          strategy: anonymous
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF
I want to access services like Grafana, Prometheus, Jaeger, Kiali & Envoy externally, e.g. https://grafana.mycompany.com. How can I do it?
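(For context: the addon services created by the operator are of type ClusterIP by default, which is why they are not reachable from outside the cluster. They can be listed with the command below.)
kubectl get svc -n istio-system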
Update:
I have tried the following, however it doesn't work:
kubectl expose service prometheus --type=LoadBalancer --name=prometheus-svc --namespace istio-system
kubectl get svc prometheus-svc -n istio-system -o json
export PROMETHEUS_URL=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"):$(kubectl get svc prometheus-svc -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
echo http://${PROMETHEUS_URL}
curl http://${PROMETHEUS_URL}

I got it working as mentioned below
kubectl expose service prometheus --type=LoadBalancer --name=prometheus-svc --namespace istio-system
export PROMETHEUS_URL=$(kubectl get svc prometheus-svc -n istio-system -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"):$(kubectl get svc prometheus-svc -n istio-system -o 'jsonpath={.spec.ports[0].port}')
echo http://${PROMETHEUS_URL}
curl http://${PROMETHEUS_URL}
I would assume that this may not be the right way of exposing the services. Instead:
Create an Istio Gateway pointing to https://grafana.mycompany.com
Create an Istio VirtualService to route the request to the corresponding internal service (sketched below)
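A minimal sketch of that approach for Grafana, assuming the default addon installation (a Service named grafana on port 3000 in istio-system) and the example hostname from above, could look like:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway   # the default ingress gateway deployed by the operator
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - grafana.mycompany.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: istio-system
spec:
  hosts:
  - grafana.mycompany.com
  gateways:
  - grafana-gateway
  http:
  - route:
    - destination:
        host: grafana.istio-system.svc.cluster.local   # assumption: default Grafana addon Service
        port:
          number: 3000
For HTTPS the Gateway would additionally need a servers entry on port 443 with tls.mode: SIMPLE and a credentialName pointing at a TLS secret in istio-system; the same pattern applies to the other addons (prometheus, kiali, tracing), each with its own host and service port.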

AKS Ingress controller DNS gives 404 error

I have created an AKS cluster with 2 services exposed using an Ingress controller.
Below is the YAML file for the Ingress with TLS:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xyz-office-ingress02
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - office01.xyz.com
    secretName: tls-office-secret
  rules:
  - host: office01.xyz.com
  - http:
      paths:
      - path: /(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: office-webapp
            port:
              number: 80
      - path: /api/
        pathType: Prefix
        backend:
          service:
            name: xyz-office-api
            port:
              number: 80
kubectl describe ing
Name:             xyz-office-ingress02
Labels:           <none>
Namespace:        default
Address:          <EXTERNAL Public IP>
Ingress Class:    <none>
Default backend:  <default>
TLS:
  tls-office-secret terminates office01.xyz.com
Rules:
  Host        Path         Backends
  ----        ----         --------
  *
              /(/|$)(.*)   office-webapp:80 (10.244.1.18:80,10.244.2.16:80)
              /api/        xyz-office-api:80 (10.244.0.14:8000,10.244.1.19:8000)
Annotations:  cert-manager.io/cluster-issuer: letsencrypt
              kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/rewrite-target: /
              nginx.ingress.kubernetes.io/use-regex: true
Events:       <none>
Using the IP I am able to access both services; however, when using the DNS name it does not work and gives a 404 error.
Cleaning up remarks from comments: basically, the issue is with the ingress rules definition. We have the following:
rules:
- host: office01.xyz.com
- http:
    paths:
    ...
We know that connecting to the ingress directly (by IP) works, while querying it through DNS gives a 404.
The reason for this 404 is that, when entering with a DNS name, you match the first rule, in which you did not define any backend: the "- host:" entry and the "- http:" entry are two separate rules.
One way to fix this is to merge the "host" part of that ingress with your http rules, e.g.:
spec:
  tls:
  ...
  rules:
  - host: office01.xyz.com
    http: # no "-", not a new entry => http & host belong to a single rule
      paths:
      - path: /(/|$)(.*)
        ...
      - path: /api/
        ...
I tried to reproduce the same issue in my environment and got the below results
I have created the dns zone for the cluster
Created the namespace
kubectl create namespace ingress-basic
I have added the Helm repo and used Helm to deploy the ingress controller:
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace <namespace-name> \
--set controller.replicaCount=2
When I check the logs I am able to see the public IP of the load balancer.
I have created some role assignments to connect to the DNS zone:
Assigned the cluster node pool's managed identity 'DNS Zone Contributor' rights on the domain zone.
az role assignment create --assignee $UserClientId --role 'DNS Zone Contributor' --scope $DNSID
I have run the following Helm command to deploy ExternalDNS so it can manage records in the DNS zone:
helm install external-dns bitnami/external-dns \
  --namespace ingress-basic \
  --set provider=azure \
  --set txtOwnerId=<cluster-name> \
  --set policy=sync \
  --set azure.resourceGroup=<rg-name> \
  --set azure.tenantId=<tenant-id> \
  --set azure.subscriptionId=<sub-id> \
  --set azure.useManagedIdentityExtension=true \
  --set azure.userAssignedIdentityID=<UserClient-Id>
I have installed the cert manager using helm
helm install cert-manager jetstack/cert-manager \
--namespace ingress-basic \
--version vXXXX
I have created and run the application
vi nginxfile.yaml
kubectl apply -f file.yaml
I have created the ingress route; it will route traffic to the application.
After that we have to verify whether the certificate gets created, and wait a few minutes for the DNS zone to be updated.
I have created the cert-manager cluster issuer and deployed it to the cluster (a sketch follows below):
kubectl apply -f file.yaml --namespace ingress-basic
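The cluster issuer manifest applied in that step is not reproduced here; a minimal sketch, assuming the issuer name letsencrypt used in the ingress annotation above and a placeholder contact e-mail, could look like:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt               # must match the cert-manager.io/cluster-issuer annotation
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@xyz.com          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx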
Please refer to this URL for more details.

How can we access the Kubernetes ingress controller IP using HTTPS?

I have deployed an application in Azure Kubernetes Service (AKS). I have used an ingress controller for my POC. Previously I was using a domain (saurabh.com), and I am able to access saurabh.com through HTTPS.
Now I want to access my application using the IP address with HTTPS.
My ingress YAML file looks like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: saurabh-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: saurabh-ui
            port:
              number: 4200
By doing this, I am able to access my application using the IP, but it comes up over HTTP, not HTTPS. Can someone please help me with this? I want to access my application's IP through HTTPS.
Note: I have installed the certificates. When I access the domain saurabh.com, it comes up over HTTPS.
Thanks in advance.
I tried to reproduce the issue in my environment and got the below results.
Please use this link to access the files
I have created the namespace
kubectl create namespace namespace_name
Created the applications and deployed them into Kubernetes:
kubectl apply -f filename.yaml
To check what was created in the namespace and get the IP addresses, use the below command:
kubectl get svc -n namespace_name
I have installed the Helm chart for the controller and deployed the ingress resource into Kubernetes.
NOTE: After installing the nginx controller we have to change the service type from ClusterIP to LoadBalancer.
Here I have enabled HTTPS in AKS using cert-manager; it will automatically generate and configure the certificates.
I have created the namespace for cert manager
kubectl create namespace namespace_name
kubectl get svc -n namespace_name
I have installed the cert manager using helm using below command
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v0.14.0 \
--set installCRDs=true
To check the cert-manager pods in the namespace:
kubectl get pods --namespace cert-manager
I have created the cluster issuer and deployed it:
vi filename.yaml
kubectl apply --namespace app -f filename.yaml
I have created and installed the TLS/SSL certificate (a sketch is shown below):
kubectl apply --namespace app -f filename.yaml
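The certificate file itself is not shown above; a minimal sketch for the app-web-cert certificate that is verified below, with the secret name, issuer and hostname as placeholders, could look like:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-web-cert
  namespace: app
spec:
  secretName: app-web-tls            # placeholder: the secret referenced by the ingress tls section
  issuerRef:
    name: letsencrypt                # placeholder: the cluster issuer created earlier
    kind: ClusterIssuer
  dnsNames:
  - myapp.example.com                # placeholder: hostname that resolves to the ingress IP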
We can verify whether the certificate was created using the below command; it will show whether the certificate has been issued:
kubectl describe cert app-web-cert --namespace namespace_name
Check the service using below command
kubectl get services -n app
Test the app with HTTPS using the hostname that points to the IP address: https://<hostname>
Here we can also check the certificates which we have added.

Accessing Kubernetes worker node labels from the Containers/pods

How to access Kubernetes worker node labels from the container/pod running in the cluster?
Labels are set on the worker node, as the YAML output of this kubectl command launched against an Azure AKS worker node shows:
$ kubectl get nodes aks-agentpool-39829229-vmss000000 -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2021-10-15T16:09:20Z"
  labels:
    agentpool: agentpool
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: Standard_DS2_v2
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: eastus
    failure-domain.beta.kubernetes.io/zone: eastus-1
    kubernetes.azure.com/agentpool: agentpool
    kubernetes.azure.com/cluster: xxxx
    kubernetes.azure.com/mode: system
    kubernetes.azure.com/node-image-version: AKSUbuntu-1804gen2containerd-2021.10.02
    kubernetes.azure.com/os-sku: Ubuntu
    kubernetes.azure.com/role: agent
    kubernetes.azure.com/storageprofile: managed
    kubernetes.azure.com/storagetier: Premium_LRS
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: aks-agentpool-39829229-vmss000000
    kubernetes.io/os: linux
    kubernetes.io/role: agent
    node-role.kubernetes.io/agent: ""
    node.kubernetes.io/instance-type: Standard_DS2_v2
    storageprofile: managed
    storagetier: Premium_LRS
    topology.kubernetes.io/region: eastus
    topology.kubernetes.io/zone: eastus-1
  name: aks-agentpool-39829229-vmss000000
  resourceVersion: "233717"
  selfLink: /api/v1/nodes/aks-agentpool-39829229-vmss000000
  uid: 0241eb22-4d1b-4d65-870f-fcc51dac1c70
Note: the pod/container that I have is running with non-root access and doesn't have a privileged user.
Is there a way to access these labels from the worker node itself?
In the AKS cluster,
Create a namespace like:
kubectl create ns get-labels
Create a Service Account in the namespace like:
kubectl create sa get-labels -n get-labels
Create a Clusterrole like:
kubectl create clusterrole get-labels-clusterrole --resource=nodes --verb=get,list
Create a ClusterRoleBinding (nodes are cluster-scoped, so a namespaced RoleBinding will not grant access to them):
kubectl create clusterrolebinding get-labels-clusterrolebinding --clusterrole get-labels-clusterrole --serviceaccount get-labels:get-labels
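Optionally, the binding can be verified before starting the pod by impersonating the service account (not part of the original steps):
kubectl auth can-i list nodes --as=system:serviceaccount:get-labels:get-labels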
Run a pod in the namespace you created, like:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: get-labels
  namespace: get-labels
spec:
  serviceAccountName: get-labels
  containers:
  - image: centos:7
    name: get-labels
    command:
    - /bin/bash
    - -c
    - tail -f /dev/null
EOF
Execute a shell in the running container like:
kubectl exec -it get-labels -n get-labels -- bash
Install jq tool in the container:
yum install epel-release -y && yum update -y && yum install jq -y
Set up shell variables:
# API Server Address
APISERVER=https://kubernetes.default.svc
# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
If you want to get a list of all nodes and their corresponding labels, then use the following command:
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/nodes | jq '.items[].metadata | {name,labels}'
Otherwise, if you want the labels corresponding to a particular node, then use:
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/nodes/<nodename> | jq '.metadata.labels'
Please replace <nodename> with the name of the intended node.
N.B. You can choose to include the installation of the jq tool in the Dockerfile from which your container image is built and make use of environment variables for the shell variables. We have used neither in this answer in order to explain the working of this method.

Kubernetes cert-manager with letsencrypt waiting on certificate issuance

I am trying to set up an Azure Kubernetes cluster with an HTTPS ingress controller for separate dev, staging, and prod environments. I have followed the Microsoft Azure guide on how to Create an HTTPS ingress controller on Azure Kubernetes Service (AKS) which allows me to set up an HTTPS ingress controller for a single namespace, but my end goal is to have separate namespaces for the dev, staging, and prod environments. According to the answers to this question, the way to do this is to have the ingress controller on one namespace (ingress in my case), and then separate ingress rules for each namespace (dev in my case).
Hence I set up the nginx ingress controller and cert-manager in the ingress namespace:
# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
# Label the ingress-basic namespace to disable resource validation
kubectl label namespace ingress cert-manager.io/disable-validation=true
# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
helm repo update
# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
--namespace ingress \
--version v0.16.1 \
--set installCRDs=true \
--set nodeSelector."kubernetes\.io/os"=linux \
--set webhook.nodeSelector."kubernetes\.io/os"=linux \
--set cainjector.nodeSelector."kubernetes\.io/os"=linux
I then create a cluster-issuer.yml file with the following:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: email@address.com
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux
which I apply with
$ kubectl apply -f cluster-issuer.yml
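It can be worth confirming that the issuer registered with the ACME server before referencing it (a suggested extra check, not part of the original guide); the registration status appears under Status/Conditions:
$ kubectl describe clusterissuer letsencrypt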
Next I create an ingress rule on the dev namespace with the following ingress.yml file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-dev
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - domain.azure.com
    secretName: tls-secret-dev
  rules:
  - host: domain.azure.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
        path: /dev/my-service(/|$)(.*)
and apply it:
$ kubectl apply -f ingress.yml
Now I check to see whether a secret has been created:
$ kubectl get certificate -n dev
NAME             READY   SECRET           AGE
tls-secret-dev   False   tls-secret-dev   61s
So it seems that something went wrong when creating the secret. If I look at the certificate, it seems that a certificate is requested, but it never gets further than that:
$ kubectl describe certificate tls-secret -n dev
Name:         tls-secret-dev
Namespace:    dev
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1beta1
Kind:         Certificate
...
Status:
  Conditions:
    Last Transition Time:        2021-02-16T13:47:33Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      False
    Type:                        Ready
    Last Transition Time:        2021-02-16T13:47:33Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      True
    Type:                        Issuing
  Next Private Key Secret Name:  tls-secret-dev-6ngw8
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Issuing    70s   cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  70s   cert-manager  Stored new private key in temporary Secret resource "tls-secret-dev-6ngw8"
  Normal  Requested  70s   cert-manager  Created new CertificateRequest resource "tls-secret-dev-vtlbd"
Looking at the certificate request, an order is created:
$ kubectl describe certificaterequest tls-secret-dev-vtlbd -n dev
Name:         tls-secret-dev-vtlbd
Namespace:    dev
Labels:       <none>
Annotations:  cert-manager.io/certificate-name: tls-secret-dev
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: tls-secret-dev-6ngw8
API Version:  cert-manager.io/v1beta1
Kind:         CertificateRequest
...
Status:
  Conditions:
    Last Transition Time:  2021-02-16T13:47:33Z
    Message:               Waiting on certificate issuance from order dev/tls-secret-dev-vtlbd-526778456: ""
    Reason:                Pending
    Status:                False
    Type:                  Ready
Events:
  Type    Reason        Age   From          Message
  ----    ------        ----  ----          -------
  Normal  OrderCreated  3m3s  cert-manager  Created Order resource dev/tls-secret-dev-vtlbd-526778456
Inspecting the order is where the trail seems to run cold:
$ kubectl describe order tls-secret-dev-vtlbd-526778456 -n dev
Name:         tls-secret-dev-vtlbd-526778456
Namespace:    dev
Labels:       <none>
Annotations:  cert-manager.io/certificate-name: tls-secret-dev
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: tls-secret-dev-6ngw8
API Version:  acme.cert-manager.io/v1beta1
Kind:         Order
...
Status:
Events:  <none>
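One level further down that may still hold a clue (a suggested extra check, not something I have captured above) is the Challenge that cert-manager creates for the order:
$ kubectl get challenges -n dev
$ kubectl describe challenge -n dev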
Question: How do I get the certificate manager to stop waiting on certificate issuance so I can finish setting up my HTTPS ingress controller?

DNS Record for my Domain not getting propagated on azure-dns with dns-01 challenge

I'm trying to order a certificate with cert-manager for my istio-ingress-gateway. For this I installed Istio (1.2.2) on my Kubernetes cluster (1.13.7) on AKS, including cert-manager. After setting up a cluster issuer and ordering a certificate with a dns-01 challenge against my azure-dns zone, I'm getting the following error message in my cert-manager pod. This message gets spammed every ten seconds in the logs:
I0813 14:48:10.597656 1 controller.go:213] cert-manager/controller/challenges "level"=0 "msg"="syncing resource" "key"="istio-system/controller-certificate-531021094-0"
I0813 14:48:10.597940 1 dns.go:112] Checking DNS propagation for "<myurl>.westeurope.cloudapp.azure.com" using name servers: [10.0.0.10:53]
E0813 14:48:10.616908 1 sync.go:180] cert-manager/controller/challenges "msg"="propagation check failed" "error"="DNS record for \"<myurl>.westeurope.cloudapp.azure.com\" not yet propagated" "dnsName"="<myurl>.westeurope.cloudapp.azure.com" "resource_kind"="Challenge" "resource_name"="controller-certificate-531021094-0" "resource_namespace"="istio-system" "type"="dns-01"
I0813 14:48:10.616976 1 controller.go:219] cert-manager/controller/challenges "level"=0 "msg"="finished processing work item" "key"="istio-system/controller-certificate-531021094-0"
I installed istio with the following command:
helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
--values install/kubernetes/helm/istio/values-istio-sds-auth.yaml \
--set gateways.istio-ingressgateway.sds.enabled=true \
--set gateways.istio-egressgateway.enabled=false \
--set certmanager.enabled=true \
--set certmanager.email=<myemail> \
--set certmanager.tag=v0.8.1
I tried other cert-manager versions (0.6 and 0.8) as well, but I got the same results. A separate cert-manager installation gave me the same results.
This is the yaml file for my issuer...
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: istio-system
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: <myEmail>
    privateKeySecretRef:
      name: istio-ingressgateway-certs-private-key
    dns01:
      providers:
      - name: azure-dns
        azuredns:
          clientID: <myappID>
          clientSecretSecretRef:
            key: client-secret
            name: azuredns-config
          hostedZoneName: <myurl>.westeurope.cloudapp.azure.com
          resourceGroupName: <myresourcegroup>
          subscriptionID: <mysubID>
          tenantID: <mytenantID>
...and for the certificate:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: controller-certificate
  namespace: istio-system
spec:
  secretName: istio-ingressgateway-certs
  issuerRef:
    name: letsencrypt-staging
  commonName: <myUrl>.westeurope.cloudapp.azure.com
  dnsNames:
  - <myUrl>.westeurope.cloudapp.azure.com
  acme:
    config:
    - dns01:
        provider: azure-dns
      domains:
      - <myUrl>.westeurope.cloudapp.azure.com
In Azure I created a DNS zone with the name <myurl>.westeurope.cloudapp.azure.com. Then I created an A record pointing at the istio-ingress IP exposed by the cluster LoadBalancer. The following commands enable cert-manager to add the TXT entry in the DNS zone required by Let's Encrypt. The first one creates a secret for the issuer and the second one creates a principal to access the DNS zone.
kubectl create secret generic azuredns-config -n istio-system --from-literal=client-secret=<myPW>
az ad sp create-for-rbac --name <myPrincipal> --role="DNS Zone Contributor" --scopes="/subscriptions/<mysubID>/resourceGroups/<myresourcegroup>"
The TXT entry is then successfully created in the dns zone but the certificate is not created as seen in the cert-manager logs above.
I'm using https://digwebinterface.com to debug the DNS zone. When I use dig TXT _acme-challenge.myurl.westeurope.cloudapp.azure.com. @<mygivennameserver> I am able to retrieve the ACME token. When I try this without specifying the nameserver it does not work. If I understand correctly, this should also work once propagation has completed, right?
I've read that it can take up to 24h for Azure to update DNS records. Does this also apply to TXT records?
I tried to point cert-manager at the nameserver of the DNS zone with the following installation. This gave me the same results, except that the other nameservers are listed in the cert-manager log. Are there any mistakes in the installation?
helm install \
--name cert-manager \
--namespace istio-system \
--version v0.9.1 \
--set webhook.enabled=false \
--set extraArgs='{--dns01-recursive-nameservers-only,--dns01-self-check-nameservers=8.8.8.8:53\,1.1.1.1:53\,<mynameserver>}' \
jetstack/cert-manager
Running kubectl describe challenge -n istio-system results in:
Name:         controller-certificate-531021094-0
Namespace:    istio-system
Labels:       acme.cert-manager.io/order-name=controller-certificate-531021094
Annotations:  <none>
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Challenge
Metadata:
  Creation Timestamp:  2019-08-13T14:43:57Z
  Finalizers:
    finalizer.acme.cert-manager.io
  Generation:  4
  Owner References:
    API Version:           certmanager.k8s.io/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Order
    Name:                  controller-certificate-531021094
    UID:                   c740fea3-bdd8-11e9-80fd-0a58ac1f0fb7
  Resource Version:        31205901
  Self Link:               /apis/certmanager.k8s.io/v1alpha1/namespaces/istio-system/challenges/controller-certificate-531021094-0
  UID:                     c7d72ecf-bdd8-11e9-80fd-0a58ac1f0fb7
Spec:
  Authz URL:  https://acme-staging-v02.api.letsencrypt.org/acme/authz-v3/3805423
  Config:
    Dns 01:
      Provider:  azure-dns
  Dns Name:      <myurl>.westeurope.cloudapp.azure.com
  Issuer Ref:
    Name:    letsencrypt-staging
  Key:       bSjnfaFTApp6gPNsHc9-dPdmwsTwQJAd73CXmBrVc84
  Token:     Vn5Z7tBKajxnq1KrOBywP016VauoibCPcYsOESXhV4Q
  Type:      dns-01
  URL:       https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/3805423/RTxciA
  Wildcard:  false
Status:
  Presented:   true
  Processing:  true
  Reason:      Waiting for dns-01 challenge propagation: DNS record for "<myurl>.westeurope.cloudapp.azure.com" not yet propagated
  State:       pending
Events:
  Type    Reason     Age  From          Message
  ----    ------     ---- ----          -------
  Normal  Started    52m  cert-manager  Challenge scheduled for processing
  Normal  Presented  52m  cert-manager  Presented challenge using dns-01 challenge mechanism
I was stuck on this very same issue for a while. First of all, I never had to explicitly create certificate resources myself; that is what cert-manager will attempt to do.
On top of that, adding the following annotations to the ingress solved my issue:
cert-manager.io/cluster-issuer: hydrantid
kubernetes.io/tls-acme: 'true'
In my case I am using hydrantid as the issuer, but in the given example it would be letsencrypt.
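For reference, placed on the ingress metadata those annotations would sit roughly like this (a sketch; the ingress name is hypothetical and letsencrypt stands in for whichever issuer is used):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress                                 # hypothetical name
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt    # or hydrantid, as in the answer above
    kubernetes.io/tls-acme: "true"
spec:
  ...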
