I am trying to filter all incoming requests to my Spring Boot application based on a JWT token.
I have the OPA configuration below, but my Spring Boot application is still not receiving the fields added in the OPA configuration.
I am expecting those fields in Spring Boot so that I can take action based on them.
Am I missing anything? Please help me solve this.
If anyone has a working version of this OPA setup, please share it.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: opa-test
spec:
  httpPipeline:
    handlers:
      - name: opa-policy
        type: middleware.http.opa
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: opa-policy
spec:
  type: middleware.http.opa
  version: v1
  metadata:
    - name: includedHeaders
      value: "x-my-custom-header, x-jwt-header"
    - name: defaultStatus
      value: 403
    # `rego` is the Open Policy Agent policy to evaluate. Required.
    # The policy package must be http and the policy must set data.http.allow
    - name: rego
      value: |
        package http

        default allow = true

        allow = {
          "status_code": 301,
          "additional_headers": {
            "location": "https://my.site/authorize"
          }
        } {
          not jwt.payload["my-claim"]
        }

        # You can also allow the request and add additional headers to it:
        allow = {
          "allow": true,
          "additional_headers": {
            "x-my-claim": my_claim
          }
        } {
          my_claim := jwt.payload["my-claim"]
        }

        jwt = { "payload": payload } {
          auth_header := input.request.headers["Authorization"]
          [_, jwt] := split(auth_header, " ")
          [_, payload, _] := io.jwt.decode(jwt)
        }
I have an AKS cluster with a Node.js server connecting to a Neo4j standalone instance, all deployed with Helm.
I installed an ingress-nginx controller, referenced a default Let's Encrypt certificate, and enabled TCP ports with Terraform as follows:
resource "helm_release" "nginx" {
name = "ingress-nginx"
repository = "ingress-nginx"
# repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx/ingress-nginx"
namespace = "default"
set {
name = "tcp.7687"
value = "default/cluster:7687"
}
set {
name = "tcp.7474"
value = "default/cluster:7474"
}
set {
name = "tcp.7473"
value = "default/cluster:7473"
}
set {
name = "tcp.6362"
value = "default/cluster-admin:6362"
}
set {
name = "tcp.7687"
value = "default/cluster-admin:7687"
}
set {
name = "tcp.7474"
value = "default/cluster-admin:7474"
}
set {
name = "tcp.7473"
value = "default/cluster-admin:7473"
}
set {
name = "controller.extraArgs.default-ssl-certificate"
value = "default/tls-secret"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-dns-label-name"
value = "xxx.westeurope.cloudapp.azure.com"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
I then have an Ingress with paths pointing to the Neo4j services, so at https://xxx.westeurope.cloudapp.azure.com/neo4j-tcp-http/browser/ I can get to the browser.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
    # nginx.ingress.kubernetes.io/rewrite-target: /
    # certmanager.k8s.io/acme-challenge-type: http01
    nginx.ingress.kubernetes/cluster-issuer: letsencrypt-issuer
    ingress.kubernetes.io/ssl-redirect: "true"
    # kubernetes.io/tls-acme: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - xxxx.westeurope.cloudapp.azure.com
      secretName: tls-secret
  rules:
    # - host: xxx.westeurope.cloud.app.azure.com #dns from Azure PublicIP
    ### Node.js server
    - http:
        paths:
          - path: /(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
    - http:
        paths:
          - path: /server(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
    ##### Neo4j
    - http:
        paths:
          # 502 bad gateway
          # /any character 502 bad gatway
          - path: /neo4j-tcp-bolt(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                # neo4j chart
                # name: cluster
                # neo4j-standalone chart
                name: neo4j
                port:
                  # name: tcp-bolt
                  number: 7687
    - http:
        paths:
          # /browser/ show browser
          # /any character shows login to xxx.westeurope.cloudapp.azure.com:443 from https, :80 from http
          - path: /neo4j-tcp-http(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                # neo4j chart
                # name: cluster
                # neo4j-standalone chart
                name: neo4j
                port:
                  # name: tcp-http
                  number: 7474
    - http:
        paths:
          - path: /neo4j-tcp-https(/|$)(.*)
            # 502 bad gateway
            # /any character 502 bad gatway
            pathType: Prefix
            backend:
              service:
                # neo4j chart
                # name: cluster
                # neo4j-standalone chart
                name: neo4j
                port:
                  # name: tcp-https
                  number: 7473
I can get to the Neo4j Browser at https://xxx.westeurope.cloudapp.azure.com/neo4j-tcp-http/browser/, but using the Connect URL bolt+s//server.bolt it won't connect to the server, failing with the error ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver..
Now I'm guessing that is because the Neo4j bolt connector is not using the certificate used by the ingress-nginx controller.
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe secret tls-secret
Name: tls-secret
Namespace: default
Labels: controller.cert-manager.io/fao=true
Annotations: cert-manager.io/alt-names: xxx.westeurope.cloudapp.azure.com
cert-manager.io/certificate-name: tls-certificate
cert-manager.io/common-name: xxx.westeurope.cloudapp.azure.com
cert-manager.io/ip-sans:
cert-manager.io/issuer-group:
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-issuer
cert-manager.io/uri-sans:
Type: kubernetes.io/tls
Data
====
tls.crt: 5648 bytes
tls.key: 1679 bytes
I tried to use it by overriding the chart values, but then the Neo4j driver from the Node.js server won't connect to the server:
ssl:
  # setting per "connector" matching neo4j config
  bolt:
    privateKey:
      secretName: tls-secret # we set up the template to grab `private.key` from this secret
      subPath: tls.key # we specify the privateKey value name to get from the secret
    publicCertificate:
      secretName: tls-secret # we set up the template to grab `public.crt` from this secret
      subPath: tls.crt # we specify the publicCertificate value name to get from the secret
    trustedCerts:
      sources: [ ] # a sources array for a projected volume - this allows someone to (relatively) easily mount multiple public certs from multiple secrets for example.
    revokedCerts:
      sources: [ ] # a sources array for a projected volume
  https:
    privateKey:
      secretName: tls-secret
      subPath: tls.key
    publicCertificate:
      secretName: tls-secret
      subPath: tls.crt
    trustedCerts:
      sources: [ ]
    revokedCerts:
      sources: [ ]
Is there a way to use it, or should I set up another certificate just for Neo4j? If so, what would be the dnsNames to set on it?
Is there something else I'm doing wrong?
Thank you very much.
From what I can gather from your information, the problem seems to be that you're trying to expose the bolt port behind an ingress. Ingresses are implemented as an L7 (protocol-aware) reverse proxy and handle load balancing etc. The bolt protocol has its own load balancing and routing for cluster applications, so you will need to expose the network service directly for every instance of Neo4j you are running.
Check out this part of the documentation for more information:
https://neo4j.com/docs/operations-manual/current/kubernetes/accessing-neo4j/#access-outside-k8s
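For illustration only, a minimal sketch of exposing bolt directly through a LoadBalancer Service instead of the Ingress (the service name and label selector here are assumptions; the linked Neo4j docs describe the chart's built-in options for this):

apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt-external   # hypothetical name
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: neo4j                # assumes the Neo4j pods carry this label
  ports:
    - name: tcp-bolt
      port: 7687
      targetPort: 7687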
Finally, after a few days of going in circles, I found what the problems were.
First, using a staging certificate will cause the Neo4j bolt connection to fail, as it's not trusted, with the error:
ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver. Please use your browsers development console to determine the root cause of the failure. Common reasons include the database being unavailable, using the wrong connection URL or temporary network problems. If you have enabled encryption, ensure your browser is configured to trust the certificate Neo4j is configured to use. WebSocket readyState is: 3
(found here: https://grishagin.com/neo4j/2022/03/29/neo4j-websocket-issue.html)
Then, I was missing a general listen address for the bolt connector, which by default listens only on 127.0.0.1:7687 (https://neo4j.com/docs/operations-manual/current/configuration/connectors/).
To listen for Bolt connections on all network interfaces (0.0.0.0), I added server.bolt.listen_address: "0.0.0.0:7687" to the Neo4j chart's config values.
Next, since I'm connecting the default neo4j ClusterIP service's TCP ports to the ingress controller's exposed TCP connections through the Ingress, as described here https://neo4j.com/labs/neo4j-helm/1.0.0/externalexposure/ (an alternative to using a LoadBalancer), the Neo4j LoadBalancer service is not needed, so services.neo4j.enabled gets set to "false". In my tests I actually found that if you leave it enabled, bolt won't connect despite everything else being set correctly.
Other missing Neo4j config settings were server.bolt.enabled: "true", server.bolt.tls_level: "REQUIRED", dbms.ssl.policy.bolt.client_auth: "NONE" and dbms.ssl.policy.bolt.enabled: "true". The complete list of config options is here: https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/
Neo4j chart's values for ssl config were fine.
So now I can use the (renamed for brevity) path /neo4j/browser/ to serve the Neo4j Browser app, and either the /bolt path as the browser Connect URL, or the public IP's <DNS>:<bolt port>.
You are connected as user neo4j
to bolt+s://xxxx.westeurope.cloudapp.azure.com/bolt
Connection credentials are stored in your web browser.
Hope this explanation and the code recap below will help others.
Cheers.
ingress controller
resource "helm_release" "nginx" {
name = "ingress-nginx"
namespace = "default"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
set {
name = "version"
value = "4.4.2"
}
### expose tcp connections for neo4j service
### bolt url connection port
set {
name = "tcp.7687"
value = "default/neo4j:7687"
}
### http browser app port
set {
name = "tcp.7474"
value = "default/neo4j:7474"
}
set {
name = "controller.extraArgs.default-ssl-certificate"
value = "default/tls-secret"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-dns-label-name"
value = "xxx.westeurope.cloudapp.azure.com"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes/cluster-issuer: letsencrypt-issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - xxx.westeurope.cloudapp.azure.com
      secretName: tls-secret
  rules:
    ### Node.js server
    - http:
        paths:
          - path: /(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
    - http:
        paths:
          - path: /server(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
    ##### Neo4j
    - http:
        paths:
          - path: /bolt(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: neo4j
                port:
                  # name: tcp-bolt
                  number: 7687
    - http:
        paths:
          - path: /neo4j(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: neo4j
                port:
                  # name: tcp-http
                  number: 7474
Values.yaml (Umbrella chart)
neo4j-db: #chart dependency alias
  nameOverride: "neo4j"
  fullnameOverride: 'neo4j'
  neo4j:
    # Name of your cluster
    name: "xxxx" # this will be the label: app: value for the service selector
    password: "xxxxx"
    ##
    passwordFromSecret: ""
    passwordFromSecretLookup: false
    edition: "community"
    acceptLicenseAgreement: "yes"
    offlineMaintenanceModeEnabled: false
    resources:
      cpu: "1000m"
      memory: "2Gi"
  volumes:
    data:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-data
        resources:
          requests:
            storage: 4Gi
    backups:
      mode: 'share' # share an existing volume (e.g. the data volume)
      share:
        name: 'logs'
    logs:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-logs
        resources:
          requests:
            storage: 4Gi
  services:
    # A LoadBalancer Service for external Neo4j driver applications and Neo4j Browser, this will create "cluster-neo4j" svc
    neo4j:
      enabled: false
  config:
    server.bolt.enabled: "true"
    server.bolt.tls_level: "REQUIRED"
    server.bolt.listen_address: "0.0.0.0:7687"
    dbms.ssl.policy.bolt.client_auth: "NONE"
    dbms.ssl.policy.bolt.enabled: "true"
  startupProbe:
    failureThreshold: 1000
    periodSeconds: 50
  ssl:
    bolt:
      privateKey:
        secretName: tls-secret
        subPath: tls.key
      publicCertificate:
        secretName: tls-secret
        subPath: tls.crt
      trustedCerts:
        sources: [ ]
      revokedCerts:
        sources: [ ] # a sources array for a projected volume
When configuring Slack alerts in AlertmanagerConfig, I am getting the following error (when releasing the Helm chart on the Kubernetes cluster):
Error: UPGRADE FAILED: error validating "": error validating data:
ValidationError(AlertmanagerConfig.spec.receivers[0]): unknown field
"slack_configs" in
com.coreos.monitoring.v1alpha1.AlertmanagerConfig.spec.receivers
My alertmanagerconfig.yaml file looks as follows:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: {{ template "theresa.fullname" . }}-alertmanager-config
  labels:
    alertmanagerConfig: email-notifications
spec:
  route:
    receiver: 'slack-email'
  receivers:
    - name: 'slack-email'
      slack_configs:
        - channel: '#cmr-orange-alerts'
          api_url: ..
          send_resolved: true
          icon_url: ..
          title: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}"
          text: ..
You are trying to create a k8s resource of kind AlertmanagerConfig, but you are using the syntax of the Alertmanager config file, not the resource's syntax.
Check the syntax here:
https://docs.openshift.com/container-platform/4.7/rest_api/monitoring_apis/alertmanagerconfig-monitoring-coreos-com-v1alpha1.html
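As a hedged sketch of what the CRD syntax looks like for the receiver in the question: the field names are camelCase, and the webhook URL is referenced from a Secret rather than written inline (the Secret name and key below are hypothetical):

apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: example-alertmanager-config
spec:
  route:
    receiver: 'slack-email'
  receivers:
    - name: 'slack-email'
      slackConfigs:               # instead of slack_configs
        - channel: '#cmr-orange-alerts'
          sendResolved: true      # instead of send_resolved
          apiURL:                 # instead of api_url; a reference to a Secret key
            name: slack-webhook   # hypothetical Secret name
            key: url              # hypothetical key holding the webhook URL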
I want to add X-Ray to my Fargate service. Everything works (synth/deploy), but in the logs I'm seeing the following error:
2022-02-07T13:38:22Z [Error] Sending segment batch failed with:
AccessDeniedException: 2022-02-07 14:38:22status code: 403, request
id: cdc23f61-5c2e-4ede-8bda-5328e0c8ac8f
The user I'm using to deploy the application has the AWSXrayFullAccess permission.
Do I have to grant the task the permission manually? If so, how?
Here is a snippet of the application:
const cdk = require('@aws-cdk/core');
const ecs = require('@aws-cdk/aws-ecs');
const ecsPatterns = require('@aws-cdk/aws-ecs-patterns');

class API extends cdk.Stack {
  constructor(parent, id, props) {
    super(parent, id, props);

    this.apiXRayTaskDefinition = new ecs.FargateTaskDefinition(this, 'apixRay-definition', {
      cpu: 256,
      memoryLimitMiB: 512,
    });

    this.apiXRayTaskDefinition.addContainer('api', {
      image: ecs.ContainerImage.fromAsset('./api'),
      environment: {
        "QUEUE_URL": props.queue.queueUrl,
        "TABLE": props.table.tableName,
        "AWS_XRAY_DAEMON_ADDRESS": "0.0.0.0:2000"
      },
      logging: ecs.LogDriver.awsLogs({ streamPrefix: 'api' }),
    }).addPortMappings({
      containerPort: 80
    })

    this.apiXRayTaskDefinition.addContainer('xray', {
      image: ecs.ContainerImage.fromRegistry('public.ecr.aws/xray/aws-xray-daemon:latest'),
      logging: ecs.LogDriver.awsLogs({ streamPrefix: 'xray' }),
    }).addPortMappings({
      containerPort: 2000,
      protocol: ecs.Protocol.UDP,
    });

    // API
    this.api = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'api', {
      cluster: props.cluster,
      taskDefinition: this.apiXRayTaskDefinition,
      desiredCount: 2,
      cpu: 256,
      memory: 512,
      createLogs: true
    })

    props.queue.grantSendMessages(this.api.service.taskDefinition.taskRole);
    props.table.grantReadWriteData(this.api.service.taskDefinition.taskRole);
  }
}
The user I'm using to deploy the application has the AWSXrayFullAccess permission.
This is irrelevant, the task will not get all the rights of the user that deploys the stack.
Yes, you need to add the required permissions to the task with
// requires the IAM module: const iam = require('@aws-cdk/aws-iam');
this.apiXRayTaskDefinition.taskRole.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName('AWSXRayDaemonWriteAccess')
);
References:
AWS managed policy with required access for the X-Ray daemon: https://docs.aws.amazon.com/xray/latest/devguide/security_iam_id-based-policy-examples.html#xray-permissions-managedpolicies
Import an AWS-managed policy: https://docs.aws.amazon.com/cdk/api/v1/docs/#aws-cdk_aws-iam.ManagedPolicy.html#static-fromwbrawswbrmanagedwbrpolicywbrnamemanagedpolicyname
Access the task role: https://docs.aws.amazon.com/cdk/api/v1/docs/#aws-cdk_aws-ecs.FargateTaskDefinition.html#taskrole-1
Add a policy: https://docs.aws.amazon.com/cdk/api/v1/docs/#aws-cdk_aws-iam.IRole.html#addwbrmanagedwbrpolicypolicy
I have a set of environment variables in my deployment set using envFrom and configMapRef. The environment variables held in these ConfigMaps were originally generated by kustomize from JSON files.
spec.template.spec.containers[0].
envFrom:
  - secretRef:
      name: eventstore-login
  - configMapRef:
      name: environment
  - configMapRef:
      name: eventstore-connection
  - configMapRef:
      name: graylog-connection
  - configMapRef:
      name: keycloak
  - configMapRef:
      name: database
The issue is that it's not possible for me to access the specific environment variables directly.
Here is the result of running printenv in the pod:
...
eventstore-login={
"EVENT_STORE_LOGIN": "admin",
"EVENT_STORE_PASS": "changeit"
}
evironment={
"LOTUS_ENV":"dev",
"DEV_ENV":"dev"
}
eventstore={
"EVENT_STORE_HOST": "eventstore-cluster",
"EVENT_STORE_PORT": "1113"
}
graylog={
"GRAYLOG_HOST":"",
"GRAYLOG_SERVICE_PORT_GELF_TCP":""
}
...
This means that from my Node.js app I need to do something like this:
> process.env.graylog
'{\n "GRAYLOG_HOST":"",\n "GRAYLOG_SERVICE_PORT_GELF_TCP":""\n}\n'
This only returns the json string that corresponds to my original json file. But I want to be able to do something like this:
process.env.GRAYLOG_HOST
to retrieve my environment variables, but I don't want to have to modify my deployment to look something like this:
env:
  - name: NODE_ENV
    value: dev
  - name: EVENT_STORE_HOST
    valueFrom:
      secretKeyRef:
        name: eventstore-secret
        key: EVENT_STORE_HOST
  - name: EVENT_STORE_PORT
    valueFrom:
      secretKeyRef:
        name: eventstore-secret
        key: EVENT_STORE_PORT
  - name: KEYCLOAK_REALM_PUBLIC_KEY
    valueFrom:
      configMapKeyRef:
        name: keycloak-local
        key: KEYCLOAK_REALM_PUBLIC_KEY
where every variable is explicitly declared. I could do this, but it is more of a pain to maintain.
Short answer:
You will need to define the variables explicitly, or change the ConfigMaps so they have a one environment variable = one value structure; this way you will be able to refer to them using envFrom. E.g.:
"apiVersion": "v1",
"data": {
"EVENT_STORE_LOGIN": "admin",
"EVENT_STORE_PASS": "changeit"
},
"kind": "ConfigMap",
More details
ConfigMaps are key-value pairs; that means for one key there is only one value. A ConfigMap can take a string as data, but it can't work with a map.
I tried editing the ConfigMap manually to confirm the above and got the following:
invalid type for io.k8s.api.core.v1.ConfigMap.data: got "map", expected "string"
This is the reason why environment comes up as one string instead of a structure.
For example, this is how the ConfigMap created from the JSON file looks:
$ kubectl describe cm test2
Name: test2
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
test.json:
----
environment={
"LOTUS_ENV":"dev",
"DEV_ENV":"dev"
}
And this is how it's stored in kubernetes:
$ kubectl get cm test2 -o json
{
  "apiVersion": "v1",
  "data": {
    "test.json": "evironment={\n \"LOTUS_ENV\":\"dev\",\n \"DEV_ENV\":\"dev\"\n}\n"
  },
  ...
}
In other words, the observed behaviour is expected.
Useful links:
ConfigMaps
Configure a Pod to Use a ConfigMap
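Since the question mentions that the ConfigMaps are generated by kustomize from JSON files, one hedged option (assuming a configMapGenerator is in use; the names and values below are hypothetical) is to generate them from literals or .env-style files so that each variable becomes its own key:

configMapGenerator:
  - name: graylog-connection
    literals:
      - GRAYLOG_HOST=graylog.example.local    # hypothetical value
      - GRAYLOG_SERVICE_PORT_GELF_TCP=12201   # hypothetical value
  - name: environment
    envs:
      - environment.env   # hypothetical file containing KEY=VALUE lines

A ConfigMap generated this way has one data key per variable, so the existing envFrom / configMapRef entries in the deployment expose GRAYLOG_HOST etc. to the container directly.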
I am struggling to get the azureIdentity for ExternalDNS bound and get DNS entries into our zone(s).
Key error: I0423 19:27:52.830107 1 mic.go:610] No AzureIdentityBinding found for pod default/external-dns-84dcc5f68c-cl5h5 that matches selector: external-dns. it will be ignored
Also, no azureAssignedIdentity is created since there is no match for the pod and selector/aadpodidbinding.
I'm building IaC using Terraform, Helm, Azure, Azure AKS, VS Code, and, so far, three Kubernetes add-ons: aad-pod-identity, application-gateway-kubernetes-ingress, and Bitnami external-dns.
Since the identity isn't being bound, an azureAssignedIdentity isn't being created and ExternalDNS isn't able to put records into our DNS zone(s).
The names and aadpodidbindings seem correct. I've tried passing in fullnameOverride in the Terraform kubectl_manifest provider for the Helm install of Bitnami ExternalDNS. I've tried suppressing the suffixes on ExternalDNS names and labels. I've tried editing the Helm and Kubernetes YAML on the cluster itself to try to force a binding. I've tried using the AKS user managed identity which is used for AAD Pod Identity and is located in the cluster's nodepools resource group. I've tried letting the Bitnami ExternalDNS configure and add an azure.json file, and I've also done so manually prior to adding and installing ExternalDNS. I've tried assigning the managed identity to the VMSS of the AKS cluster.
Thanks!
JBP
PS C:\Workspace\tf\HelmOne> kubectl logs pod/external-dns-84dcc5f68c-542mv
: Refresh request failed. Status Code = '404'. Response body: getting assigned identities for pod default/external-dns-84dcc5f68c-542mv in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>. Check MIC pod logs for identity assignment errors\n"
time="2021-04-24T19:57:30Z" level=debug msg="Retrieving Azure DNS zones for resource group: one-hi-sso-dnsrg-tf."
time="2021-04-24T20:06:02Z" level=error msg="azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/8fb55161-REDACTED-3400b5271a8c/resourceGroups/one-hi-sso-dnsrg-tf/providers/Microsoft.Network/dnsZones?api-version=2018-05-01: StatusCode=404 -- Original Error: adal: Refresh request failed. Status Code = '404'. Response body: getting assigned identities for pod default/external-dns-84dcc5f68c-542mv in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>. Check MIC pod logs for identity assignment errors\n"
time="2021-04-24T20:06:02Z" level=debug msg="Retrieving Azure DNS zones for resource group: one-hi-sso-dnsrg-tf."
PS C:\Workspace\tf\HelmOne> kubectl logs pod/aad-pod-identity-nmi-vtmwm
I0424 20:07:22.400942 1 server.go:196] status (404) took 80007557875 ns for req.method=GET reg.path=/metadata/identity/oauth2/token req.remote=10.0.8.7
E0424 20:08:44.427353 1 server.go:375] failed to get matching identities for pod: default/external-dns-84dcc5f68c-542mv, error: getting assigned identities for pod default/external-dns-84dcc5f68c-542mv in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>. Check MIC pod logs for identity assignment errors
I0424 20:08:44.427400 1 server.go:196] status (404) took 80025612263 ns for req.method=GET reg.path=/metadata/identity/oauth2/token req.remote=10.0.8.7
PS C:\Workspace\TF\HelmOne> kubectl logs pod/aad-pod-identity-mic-86944f67b8-k4hds
I0422 21:05:11.298958 1 main.go:114] starting mic process. Version: v1.7.5. Build date: 2021-04-02-21:14
W0422 21:05:11.299031 1 main.go:119] --kubeconfig not passed will use InClusterConfig
I0422 21:05:11.299038 1 main.go:136] kubeconfig () cloudconfig (/etc/kubernetes/azure.json)
I0422 21:05:11.299205 1 main.go:144] running MIC in namespaced mode: false
I0422 21:05:11.299223 1 main.go:148] client QPS set to: 5. Burst to: 5
I0422 21:05:11.299243 1 mic.go:139] starting to create the pod identity client. Version: v1.7.5. Build date: 2021-04-02-21:14
I0422 21:05:11.318835 1 mic.go:145] Kubernetes server version: v1.18.14
I0422 21:05:11.319465 1 cloudprovider.go:122] MIC using user assigned identity: c380##### REDACTED #####814b for authentication.
I0422 21:05:11.392322 1 probes.go:41] initialized health probe on port 8080
I0422 21:05:11.392351 1 probes.go:44] started health probe
I0422 21:05:11.392458 1 metrics.go:341] registered views for metric
I0422 21:05:11.392544 1 prometheus_exporter.go:21] starting Prometheus exporter
I0422 21:05:11.392561 1 metrics.go:347] registered and exported metrics on port 8888
I0422 21:05:11.392568 1 mic.go:244] initiating MIC Leader election
I0422 21:05:11.393053 1 leaderelection.go:243] attempting to acquire leader lease default/aad-pod-identity-mic...
E0423 01:47:52.730839 1 leaderelection.go:325] error retrieving resource lock default/aad-pod-identity-mic: etcdserver: request timed out
resource "helm_release" "external-dns" {
name = "external-dns"
repository = "https://charts.bitnami.com/bitnami"
chart = "external-dns"
namespace = "default"
version = "4.0.0"
set {
name = "azure.cloud"
value = "AzurePublicCloud"
}
#MyDnsResourceGroup
set {
name = "azure.resourceGroup"
value = data.azurerm_resource_group.dnsrg.name
}
set {
name = "azure.tenantId"
value = data.azurerm_subscription.currenttenantid.tenant_id
}
set {
name = "azure.subscriptionId"
value = data.azurerm_subscription.currentSubscription.subscription_id
}
set {
name = "azure.userAssignedIdentityID"
value = azurerm_user_assigned_identity.external-dns-mi-tf.client_id
}
#Verbosity of the logs (options: panic, debug, info, warning, error, fatal, trace)
set {
name = "logLevel"
value = "trace"
}
set {
name = "sources"
value = "{service,ingress}"
}
set {
name = "domainFilters"
value = "{${var.child_domain_prefix}.${lower(var.parent_domain)}}"
}
#DNS provider where the DNS records will be created (mandatory) (options: aws, azure, google, ...)
set {
name = "provider"
value = "azure"
}
#podLabels: {aadpodidbinding: <selector>} # selector you defined above in AzureIdentityBinding
set {
name = "podLabels.aadpodidbinding"
value = "external-dns"
}
set {
name = "azure.useManagedIdentityExtension"
value = true
}
}
resource "helm_release" "aad-pod-identity" {
name = "aad-pod-identity"
repository = "https://raw.githubusercontent.com/Azure/aad-pod-identity/master/charts"
chart = "aad-pod-identity"
}
resource "helm_release" "ingress-azure" {
name = "ingress-azure"
repository = "https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/"
chart = "ingress-azure"
namespace = "default"
version = "1.4.0"
set {
name = "debug"
value = "true"
}
set {
name = "appgw.name"
value = data.azurerm_application_gateway.appgwpub.name
}
set {
name = "appgw.resourceGroup"
value = data.azurerm_resource_group.appgwpubrg.name
}
set {
name = "appgw.subscriptionId"
value = data.azurerm_subscription.currentSubscription.subscription_id
}
set {
name = "appgw.usePrivateIP"
value = "false"
}
set {
name = "armAuth.identityClientID"
value = azurerm_user_assigned_identity.agic-mi-tf.client_id
}
set {
name = "armAuth.identityResourceID"
value = azurerm_user_assigned_identity.agic-mi-tf.id
}
set {
name = "armAuth.type"
value = "aadPodIdentity"
}
set {
name = "rbac.enabled"
value = "true"
}
set {
name = "verbosityLevel"
value = "5"
}
set {
name = "appgw.environment"
value = "AZUREPUBLICCLOUD"
}
set {
name = "metadata.name"
value = "ingress-azure"
}
}
PS C:\Workspace\tf\HelmOne> kubectl get azureassignedidentities
NAME AGE
ingress-azure-68c97fd496-qbptf-default-ingress-azure 23h
PS C:\Workspace\tf\HelmOne> kubectl get azureidentity
NAME AGE
ingress-azure 23h
one-hi-sso-agic-mi-tf 23h
one-hi-sso-external-dns-mi-tf 23h
PS C:\Workspace\tf\HelmOne> kubectl edit azureidentity one-hi-sso-external-dns-mi-tf
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"aadpodidentity.k8s.io/v1","kind":"AzureIdentity","metadata":{"annotations":{},"name":"one-hi-sso-external-dns-mi-tf","namespace":"default"},"spec":{"clientID":"f58e7c55-REDACTED-a6e358e53912","resourceID":"/subscriptions/8fb55161-REDACTED-3400b5271a8c/resourceGroups/one-hi-sso-kuberg-tf/providers/Microsoft.ManagedIdentity/userAssignedIdentities/one-hi-sso-external-dns-mi-tf","type":0}}
  creationTimestamp: "2021-04-22T20:44:42Z"
  generation: 2
  name: one-hi-sso-external-dns-mi-tf
  namespace: default
  resourceVersion: "432055"
  selfLink: /apis/aadpodidentity.k8s.io/v1/namespaces/default/azureidentities/one-hi-sso-external-dns-mi-tf
  uid: f8e22fd9-REDACTED-6cdead0d7e22
spec:
  clientID: f58e7c55-REDACTED-a6e358e53912
  resourceID: /subscriptions/8fb55161-REDACTED-3400b5271a8c/resourceGroups/one-hi-sso-kuberg-tf/providers/Microsoft.ManagedIdentity/userAssignedIdentities/one-hi-sso-external-dns-mi-tf
  type: 0
PS C:\Workspace\tf\HelmOne> kubectl edit azureidentitybinding external-dns-mi-binding
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"aadpodidentity.k8s.io/v1","kind":"AzureIdentityBinding","metadata":{"annotations":{},"name":"external-dns-mi-binding","namespace":"default"},"spec":{"AzureIdentity":"one-hi-sso-external-dns-mi-tf","Selector":"external-dns"}}
  creationTimestamp: "2021-04-22T20:44:42Z"
  generation: 1
  name: external-dns-mi-binding
  namespace: default
  resourceVersion: "221101"
  selfLink: /apis/aadpodidentity.k8s.io/v1/namespaces/default/azureidentitybindings/external-dns-mi-binding
  uid: f39e7418-e896-4b8e-b596-035cf4b66252
spec:
  AzureIdentity: one-hi-sso-external-dns-mi-tf
  Selector: external-dns
resource "kubectl_manifest" "one-hi-sso-external-dns-mi-tf" {
yaml_body = <<YAML
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
name: one-hi-sso-external-dns-mi-tf
namespace: default
spec:
type: 0
resourceID: /subscriptions/8fb55161-REDACTED-3400b5271a8c/resourceGroups/one-hi-sso-kuberg-tf/providers/Microsoft.ManagedIdentity/userAssignedIdentities/one-hi-sso-external-dns-mi-tf
clientID: f58e7c55-REDACTED-a6e358e53912
YAML
}
resource "kubectl_manifest" "external-dns-mi-binding" {
yaml_body = <<YAML
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
name: external-dns-mi-binding
spec:
AzureIdentity: one-hi-sso-external-dns-mi-tf
Selector: external-dns
YAML
}
The managed identity I'm using was not added to the virtual machine scale set (VMSS). Once I added it, the binding worked and the azureAssignedIdentity was created.
Also, I converted the AzureIdentity and Selector lines in my AzureIdentityBinding YAML from upper-case first letters to lower-case first letters.
Correct:
azureIdentity:
selector:
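For reference, a sketch of the corrected binding manifest with the lower-case field names, reusing the names from the question (adjust to your environment):

apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: external-dns-mi-binding
  namespace: default
spec:
  azureIdentity: one-hi-sso-external-dns-mi-tf   # lower-case "a"
  selector: external-dns                         # lower-case "s"; must match the pod's aadpodidbinding label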