I am using the following document to implement HTTPS on a Kubernetes-deployed application:
https://learn.microsoft.com/en-us/azure/aks/ingress-tls
I am getting "Certificate does not exist". I have used a ClusterIssuer named "letsencrypt-prod". I have the following certificates:
acme-crt
acme-crt-secret
cert-mgr-webhook-ca
cert-mgr-webhook-webhook-tls
tls-secret
Why am I getting "Certificate does not exist" when I describe the certificate?
Name:         acme-crt-secret
Namespace:    <name-space>
Labels:       <none>
Annotations:  <none>
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-07-19T07:41:46Z
  Generation:          2
  Owner References:
    API Version:           extensions/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Ingress
    Name:                  starc
    UID:                   <Id>
  Resource Version:        <version>
  Self Link:               /apis/certmanager.k8s.io/v1alpha1/namespaces/<name-space>/certificates/acme-crt-secret
  UID:                     <Uid>
Spec:
  Acme:
    Config:
      Domains:
        starcapp.com
      Http 01:
        Ingress:
        Ingress Class:  nginx
  Dns Names:
    starcapp.com
  Issuer Ref:
    Kind:  ClusterIssuer
    Name:  letsencrypt-prod
  Secret Name:  acme-crt-secret
Status:
  Conditions:
    Last Transition Time:  2019-07-19T07:41:46Z
    Message:               Certificate does not exist
    Reason:                NotFound
    Status:                False
    Type:                  Ready
Events:  <none>
Try to specify the namespace in your certificate configuration file.
Look at this example certificate configuration file:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: tls-secret
  namespace: ingress-basic
spec:
  secretName: tls-secret-staging
  dnsNames:
  - demo-aks-ingress.eastus.cloudapp.azure.com
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - demo-aks-ingress.eastus.cloudapp.azure.com
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
Then run:
$ kubectl apply -f your-certificate-filename.yaml
Make sure the secret is in the cert-manager namespace.
Create a certificate manually as well. Once you 'forced' cert-manager to create a certificate, it was good to go and auto-created certificates as well.
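As a quick check after applying it, you can describe the Certificate and confirm that its secret shows up in the same namespace (a sketch using the example names above):
kubectl describe certificate tls-secret -n ingress-basic
kubectl get secret tls-secret-staging -n ingress-basic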
Related
I already have a Google-managed SSL certificate created (with the DNS verification option). I want to use the same certificate in my istio-ingress for SSL. Are there any annotations available for this?
We can create a ManagedCertificate resource in GKE, but it uses the load balancer verification option, which does not support wildcard certificates.
What should I do if I want to create a certificate like (*.example.com) and attach it to an istio-ingress or GKE Ingress?
You can use the cert-manager.io/issuer and cert-manager.io/cluster-issuer annotations to reference your Google-managed SSL certificate in your Istio Ingress configuration.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  annotations:
    cert-manager.io/issuer: google
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: # Kubernetes secret that contains your Google-managed SSL
    hosts:
    - "*.example.com"
Securing Ingress Resources: https://cert-manager.io/docs/usage/ingress/
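If you are fronting the app with a plain Kubernetes Ingress instead of an Istio Gateway, the usual pattern from that page is to let cert-manager's ingress-shim create the Certificate for you via an annotation. A minimal sketch, assuming a ClusterIssuer capable of DNS-01 (all names here are placeholders, and a wildcard host requires DNS-01 validation):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    cert-manager.io/cluster-issuer: my-cluster-issuer  # hypothetical issuer name
spec:
  tls:
  - hosts:
    - "*.example.com"
    secretName: example-wildcard-tls  # cert-manager stores the issued certificate here
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service  # hypothetical backend service
            port:
              number: 80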
Here's another solution that should work: https://istio.io/latest/docs/ops/integrations/certmanager/
You can create the wildcard certificate with cert-manager.
Here is my article on requesting a wildcard certificate with DNS validation, since wildcards are not supported with HTTP-01 validation:
https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2
For GCP DNS verification you can follow the official guide: https://cert-manager.io/docs/configuration/acme/dns01/google/
Once auth is successful you will be able to request the certificate, and it will be stored in a K8s secret.
Create a service account:
PROJECT_ID=myproject-id
gcloud iam service-accounts create dns01-solver --display-name "dns01-solver"
Bind the IAM policy:
gcloud projects add-iam-policy-binding $PROJECT_ID \
   --member serviceAccount:dns01-solver@$PROJECT_ID.iam.gserviceaccount.com \
   --role roles/dns.admin
Create the K8s secret:
gcloud iam service-accounts keys create key.json \
   --iam-account dns01-solver@$PROJECT_ID.iam.gserviceaccount.com
kubectl create secret generic clouddns-dns01-solver-svc-acct \
--from-file=key.json
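Note that cert-manager reads this secret from a specific namespace: for a namespaced Issuer it is the Issuer's own namespace, for a ClusterIssuer it is cert-manager's cluster resource namespace. A hedged variant of the same command with an explicit namespace flag (the namespace is a placeholder):
kubectl create secret generic clouddns-dns01-solver-svc-acct \
   --from-file=key.json \
   --namespace <issuer-namespace>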
Issuer and Certificate:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: gcp-issuer
spec:
  acme:
    ...
    solvers:
    - dns01:
        cloudDNS:
          # The ID of the GCP project
          project: $PROJECT_ID
          # This is the secret used to access the service account
          serviceAccountSecretRef:
            name: clouddns-dns01-solver-svc-acct
            key: key.json
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: le-crt
spec:
  secretName: tls-secret
  issuerRef:
    kind: Issuer
    name: gcp-issuer
  commonName: "*.devops.example.in"
  dnsNames:
  - "*.devops.example.in"
You can attach this newly auto-created secret to an Ingress or a Gateway in Istio as needed. That secret will store your wildcard certificate.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  annotations:
    cert-manager.io/issuer: gcp-issuer
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: tls-secret # This should match the Certificate secretName
    hosts:
    - "*.devops.example.in"
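One detail to double-check: the Istio ingress gateway loads credentialName secrets from its own namespace (istio-system in a default install), so the Certificate above should be created there. A quick verification, assuming that default namespace:
kubectl get secret tls-secret -n istio-system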
Hi Kubernetes Experts,
I was using the following ServiceAccount creation config:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
and the following Deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  ...
  template:
    metadata:
      labels:
        component: scheduler
        tier: control-plane
        version: second
        k8s-custom-scheduler: my-scheduler
    spec:
      serviceAccountName: my-scheduler
Things were working fine, and now I want to make my pod more secure by setting automountServiceAccountToken to false.
I changed my ServiceAccount and Deployment configs:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
automountServiceAccountToken: false
Deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  ...
  template:
    metadata:
      labels:
        component: scheduler
        tier: control-plane
        version: second
        k8s-custom-scheduler: my-scheduler
    spec:
      serviceAccountName: my-scheduler
      automountServiceAccountToken: false
After setting this, my scheduler pod is not coming up and it goes into CrashLoopBackOff.
Error:
I0325 17:37:50.304810 1 flags.go:33] FLAG: --write-config-to=""
I0325 17:37:50.891504 1 serving.go:319] Generated self-signed cert in-memory
W0325 17:37:51.168023 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0325 17:37:51.168064 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0325 17:37:51.168072 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0325 17:37:51.168089 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0325 17:37:51.168102 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
W0325 17:37:51.168111 1 options.go:298] Neither --kubeconfig nor --master was specified. Using default API client. This might not work.
invalid configuration: no configuration has been provided
I believe we need to configure something more along with automountServiceAccountToken: false.
Can someone point me to the additional configurations needed to use automountServiceAccountToken: false?
Configure Service Accounts for Pods
You can access the API from inside a pod using automatically mounted service account credentials.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account or for a particular pod.
So, when you are creating a ServiceAccount and a Deployment like in your example yaml files, credentials for accessing the Kubernetes API are not automatically mounted to the Pod. But your k8s Deployment 'my-scheduler' requires them to access the API.
You can test your ServiceAccount with a dummy nginx Deployment, for example, and it will work without mounting credentials.
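For example, a minimal sketch of such a test Deployment that reuses the my-scheduler ServiceAccount with automounting disabled (the image and names are only for the experiment):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-nginx
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dummy-nginx
  template:
    metadata:
      labels:
        app: dummy-nginx
    spec:
      serviceAccountName: my-scheduler
      automountServiceAccountToken: false
      containers:
      - name: nginx
        image: nginx:1.25  # nginx never calls the API server, so it starts fine without a token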
Also, if you create a ServiceAccount like in your example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
automountServiceAccountToken: false
You can manually mount the API credentials like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scheduler
  namespace: kube-system
  ...
spec:
  ...
  template:
    metadata:
      labels:
        app: my-scheduler
    spec:
      containers:
      - image: <YOUR_IMAGE>
        imagePullPolicy: Always
        name: my-scheduler
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access
          readOnly: true
      serviceAccountName: my-scheduler
      volumes:
      - name: kube-api-access
        projected:
          defaultMode: 420
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
              - key: ca.crt
                path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
See the Managing Service Accounts link for more information.
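Alternatively, if you only want the ServiceAccount to default to no automounting, you can opt a specific Pod back in: the Pod-level field takes precedence over the ServiceAccount-level one. A short sketch of the relevant part of the Deployment's pod template:
spec:
  template:
    spec:
      serviceAccountName: my-scheduler
      # The Pod-level setting overrides automountServiceAccountToken: false
      # on the ServiceAccount, so the token volume is mounted for this Pod.
      automountServiceAccountToken: true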
I am trying to sync an Azure Key Vault secret with a Kubernetes Secret of type dockerconfigjson by applying the following YAML manifest with the four objects Pod, SecretProviderClass, AzureIdentity, and AzureIdentityBinding.
All configuration around Key Vault access and managed identity RBAC rules has been done and proven to work, as I have access to the Azure Key Vault secret from within the running Pod.
But when applying this manifest, and according to the documentation here, I expect to see the Kubernetes secret regcred reflecting the Azure Key Vault secret once I create the Pod with the mounted secret volume; the Kubernetes secret, however, remains unchanged. I have also tried to recreate the Pod in an attempt to trigger the sync, but in vain.
Since this is a very declarative way of configuring this functionality, I am also confused about where to look for logs when troubleshooting.
Can someone point me to what I may be doing wrong?
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    aadpodidbinding: webapp
spec:
  containers:
  - name: demo
    image: mcr.microsoft.com/oss/azure/aad-pod-identity/demo:v1.6.3
    volumeMounts:
    - name: web-app-secret
      mountPath: "/mnt/secrets"
      readOnly: true
  nodeSelector:
    kubernetes.io/os: linux
  volumes:
  - name: web-app-secret
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: web-app-secret-provide
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: web-app-secret-provide
spec:
  provider: azure
  parameters:
    usePodIdentity: "true"
    keyvaultName: <key-vault-name>
    objects: |
      array:
        - |
          objectName: registryPassword
          objectType: secret
    tenantId: <tenant-id>
  secretObjects:
  - data:
    - key: .dockerconfigjson
      objectName: registryPassword
    secretName: regcred
    type: kubernetes.io/dockerconfigjson
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
name: kv-managed-identity
spec:
type: 0
resourceID: <resource-id>
clientID: <client-id>
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
name: kv-managed-binding
spec:
azureIdentity: kv-managed-identity
selector: web-app
I am trying to set up an Azure Kubernetes cluster with an HTTPS ingress controller for separate dev, staging, and prod environments. I have followed the Microsoft Azure guide on how to Create an HTTPS ingress controller on Azure Kubernetes Service (AKS) which allows me to set up an HTTPS ingress controller for a single namespace, but my end goal is to have separate namespaces for the dev, staging, and prod environments. According to the answers to this question, the way to do this is to have the ingress controller on one namespace (ingress in my case), and then separate ingress rules for each namespace (dev in my case).
Hence I set up the NGINX ingress controller and cert-manager in the ingress namespace:
# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
# Label the ingress namespace to disable resource validation
kubectl label namespace ingress cert-manager.io/disable-validation=true
# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
helm repo update
# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
--namespace ingress \
--version v0.16.1 \
--set installCRDs=true \
--set nodeSelector."kubernetes\.io/os"=linux \
--set webhook.nodeSelector."kubernetes\.io/os"=linux \
--set cainjector.nodeSelector."kubernetes\.io/os"=linux
I then create a cluster-issuer.yml file with the following:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: email@address.com
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux
which I apply with
$ kubectl apply -f cluster-issuer.yml
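Before moving on, it can help to confirm that the ClusterIssuer actually registered with the ACME server (its Ready condition should be True):
$ kubectl get clusterissuer letsencrypt
$ kubectl describe clusterissuer letsencrypt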
Next I create an ingress rule on the dev namespace with the following ingress.yml file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-dev
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - domain.azure.com
    secretName: tls-secret-dev
  rules:
  - host: domain.azure.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
        path: /dev/my-service(/|$)(.*)
and apply it:
$ kubectl apply -f ingress.yml
Now I check to see whether a secret has been created:
$ kubectl get certificate -n dev
NAME READY SECRET AGE
tls-secret-dev False tls-secret-dev 61s
So it seems that something went wrong when creating the secret. If I look at the certificate, it seems that a certificate is requested, but it never gets further than that:
$ kubectl describe certificate tls-secret -n dev
Name:         tls-secret-dev
Namespace:    dev
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1beta1
Kind:         Certificate
...
Status:
  Conditions:
    Last Transition Time:  2021-02-16T13:47:33Z
    Message:               Issuing certificate as Secret does not exist
    Reason:                DoesNotExist
    Status:                False
    Type:                  Ready
    Last Transition Time:  2021-02-16T13:47:33Z
    Message:               Issuing certificate as Secret does not exist
    Reason:                DoesNotExist
    Status:                True
    Type:                  Issuing
  Next Private Key Secret Name:  tls-secret-dev-6ngw8
Events:
  Type    Reason     Age  From          Message
  ----    ------     ---  ----          -------
  Normal  Issuing    70s  cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  70s  cert-manager  Stored new private key in temporary Secret resource "tls-secret-dev-6ngw8"
  Normal  Requested  70s  cert-manager  Created new CertificateRequest resource "tls-secret-dev-vtlbd"
Looking at the certificate request, an order is created:
$ kubectl describe certificaterequest tls-secret-dev-vtlbd -n dev
Name:         tls-secret-dev-vtlbd
Namespace:    dev
Labels:       <none>
Annotations:  cert-manager.io/certificate-name: tls-secret-dev
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: tls-secret-dev-6ngw8
API Version:  cert-manager.io/v1beta1
Kind:         CertificateRequest
...
Status:
  Conditions:
    Last Transition Time:  2021-02-16T13:47:33Z
    Message:               Waiting on certificate issuance from order dev/tls-secret-dev-vtlbd-526778456: ""
    Reason:                Pending
    Status:                False
    Type:                  Ready
Events:
  Type    Reason        Age   From          Message
  ----    ------        ----  ----          -------
  Normal  OrderCreated  3m3s  cert-manager  Created Order resource dev/tls-secret-dev-vtlbd-526778456
Inspecting the order is where the trail seems to run cold:
$ kubectl describe order tls-secret-dev-vtlbd-526778456 -n dev
Name:         tls-secret-dev-vtlbd-526778456
Namespace:    dev
Labels:       <none>
Annotations:  cert-manager.io/certificate-name: tls-secret-dev
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: tls-secret-dev-6ngw8
API Version:  acme.cert-manager.io/v1beta1
Kind:         Order
...
Status:
Events:  <none>
Question: How do I get the certificate manager to stop waiting on certificate issuance so I can finish setting up my HTTPS ingress controller?
I am trying to configure an Ambassador Gateway on Kubernetes with Letsencrypt & cert-manager on Azure.
I am receiving the following errors in the cert-manager logs -
Error getting certificate 'ambassador-certs': secret "ambassador-certs" not found
certificates controller: Re-queuing item "default/<certificate-name>" due to error processing: http-01 self check failed for domain "<certificate-name>"
If I then create the secret in Kubernetes called ambassador-certs it starts to log the following -
Re-queuing item "default/<certificate-name>" due to error processing: no data for "tls.crt" in secret 'default/ambassador-certs'
My configuration is as follows -
Kubernetes Secret
apiVersion: v1
kind: Secret
metadata:
  name: ambassador-certs
  namespace: default
type: Opaque
Kubernetes Certificate
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: <name>
spec:
  secretName: ambassador-certs
  commonName: <domain-name>
  dnsNames:
  - <domain-name>
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - <domain-name>
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
Kubernetes ClusterIssuer
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
I installed Ambassador as directed from their site -
kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml
When I tried this with an Ingress controller, the certificates were created and added to the secrets successfully. What am I missing with Ambassador, please?
Finally, according to the Ambassador website, this is all I need to do:
Certificate Manager
Jetstack's cert-manager lets you easily provision and manage TLS certificates on Kubernetes. No special configuration is required to use Ambassador with cert-manager.
Once cert-manager is running and you have successfully created the issuer, you can request a certificate such as the following:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: cloud-foo-com
  namespace: default
spec:
  secretName: ambassador-certs
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: cloud.foo.com
  dnsNames:
  - cloud.foo.com
  acme:
    config:
    - dns01:
        provider: clouddns
      domains:
      - cloud.foo.com
Note the secretName line above. When the certificate has been stored in the secret, restart Ambassador to pick up the new certificate.
Thank you. Slowly dying inside trying to resolve this :-)
EDIT
I deleted everything and reconfigured, starting with Ambassador over HTTP. That worked: I was able to browse to my httpbin.org route over HTTP successfully. I then switched to port 443 in the Ambassador Service YAML and re-applied everything as above.
This is still being logged in the cert-manager logs
Re-queuing item "default/<certificate-name>" due to error processing: no data for "tls.crt" in secret 'default/ambassador-certs'
kubectl describe secret ambassador-certs
Name: ambassador-certs
Namespace: default
Labels: <none>
Annotations:
Type: Opaque
Data
====
This basically means that the HTTP-01 challenge failed.
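A hedged way to confirm that is to probe the HTTP-01 path that cert-manager's self check uses (the token is a placeholder; the exact URL it probes appears in the cert-manager logs) and to make sure it answers over plain HTTP on port 80:
# The self check fails when this URL is not reachable over plain HTTP on port 80
curl -v http://<domain-name>/.well-known/acme-challenge/<token>

# The cert-manager logs show the exact challenge URL being probed
# (namespace and deployment name assume a default install)
kubectl logs -n cert-manager deploy/cert-manager | grep acme-challenge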