reconciliation failed: Helm install failed: services is forbidden: User "system:serviceaccount:flux-system:helm-controller" cannot create resource "services" in API group "" in the namespace "kube-system": GKEAutopilot authz: the namespace "kube-system" is managed and the request's verb "create" is denied
Warning  error  21s  helm-controller  reconciliation failed: Operation cannot be fulfilled on helmreleases.helm.toolkit.fluxcd.io "pulsar": the object has been modified; please apply your changes to the latest version and try again
Is there some option I could --set to avoid this?
Is there a component I have to disable to avoid this?
I'm trying to install Pulsar via Helm and Flux CD.
My two sources are:
helmrepo.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: pulsar
  namespace: pulsar
spec:
  interval: 1m0s
  url: https://pulsar.apache.org/charts
helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: pulsar
  namespace: pulsar
spec:
  chart:
    spec:
      chart: pulsar
      sourceRef:
        kind: HelmRepository
        name: pulsar
      version: 3.0.0
  interval: 1m0s
  values:
    volumes:
      persistence: false
    components:
      zookeeper: true
      bookkeeper: true
      autorecovery: true
      broker: true
      functions: false
      proxy: false
      toolset: false
      pulsar_manager: true
    monitoring:
      prometheus: false
      grafana: false
    initialize: true
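For reference, if a chart option does turn out to be the fix, each key in the values block above maps one-to-one onto a Helm CLI --set flag. A sketch of the direct-CLI equivalent of this HelmRelease (assuming the repository above was added locally under the alias apache):

helm install pulsar apache/pulsar \
  --version 3.0.0 \
  --namespace pulsar \
  --set volumes.persistence=false \
  --set components.functions=false \
  --set components.proxy=false \
  --set monitoring.prometheus=false \
  --set monitoring.grafana=false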
Related
I have deployed an Event Hub triggered Azure Function written in Java on AKS. The function should scale out using KEDA.
The function is correctly triggered and working, but it's not scaling out when the load increases. I have added sleep calls to the function implementation to make sure it's not burning through the events too fast and should be forced to scale out, but this did not change anything either.
kubectl get hpa shows the following output
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
keda-hpa-eventlogger Deployment/eventlogger 64/64 (avg) 1 20 1 3m41s
This seems to be a first indicator that something is not correct, as I assume the first number in the TARGETS column is the number of unprocessed events in Event Hub. This stays the same no matter how many events I pump into the hub.
The Function was deployed using the following Kubernetes Deployment Manifest
data:
  AzureWebJobsStorage: <removed>
  FUNCTIONS_WORKER_RUNTIME: amF2YQ==
  EventHubConnectionString: <removed>
apiVersion: v1
kind: Secret
metadata:
  name: eventlogger
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventlogger
  labels:
    app: eventlogger
spec:
  selector:
    matchLabels:
      app: eventlogger
  template:
    metadata:
      labels:
        app: eventlogger
    spec:
      containers:
      - name: eventlogger
        image: <removed>
        env:
        - name: AzureFunctionsJobHost__functions__0
          value: eventloggerHandler
        envFrom:
        - secretRef:
            name: eventlogger
        readinessProbe:
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 240
          httpGet:
            path: /
            port: 80
            scheme: HTTP
        startupProbe:
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 240
          httpGet:
            path: /
            port: 80
            scheme: HTTP
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: eventlogger
  labels:
    app: eventlogger
spec:
  scaleTargetRef:
    name: eventlogger
  pollingInterval: 5
  cooldownPeriod: 5
  minReplicaCount: 0
  maxReplicaCount: 20
  triggers:
  - type: azure-eventhub
    metadata:
      storageConnectionFromEnv: AzureWebJobsStorage
      connectionFromEnv: EventHubConnectionString
---
The Event Hub connection string contains the "EntityPath=" section, as described in the KEDA Event Hub scaler documentation, and has Manage permissions on the Event Hub namespace.
The output of kubectl describe ScaledObject is
Name:          eventlogger
Namespace:     default
Labels:        app=eventlogger
               scaledobject.keda.sh/name=eventlogger
Annotations:   <none>
API Version:   keda.sh/v1alpha1
Kind:          ScaledObject
Metadata:
  Creation Timestamp:  2022-04-17T10:30:36Z
  Finalizers:
    finalizer.keda.sh
  Generation:  1
  Managed Fields:
    API Version:  keda.sh/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
        f:labels:
          .:
          f:app:
      f:spec:
        .:
        f:cooldownPeriod:
        f:maxReplicaCount:
        f:minReplicaCount:
        f:pollingInterval:
        f:scaleTargetRef:
          .:
          f:name:
        f:triggers:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-04-17T10:30:36Z
    API Version:  keda.sh/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"finalizer.keda.sh":
        f:labels:
          f:scaledobject.keda.sh/name:
      f:status:
        .:
        f:conditions:
        f:externalMetricNames:
        f:lastActiveTime:
        f:originalReplicaCount:
        f:scaleTargetGVKR:
          .:
          f:group:
          f:kind:
          f:resource:
          f:version:
        f:scaleTargetKind:
    Manager:      keda
    Operation:    Update
    Time:         2022-04-17T10:30:37Z
  Resource Version:  1775052
  UID:               3b6a68c1-c3b9-4cdf-b5d5-41a9721ac661
Spec:
  Cooldown Period:    5
  Max Replica Count:  20
  Min Replica Count:  0
  Polling Interval:   5
  Scale Target Ref:
    Name:  eventlogger
  Triggers:
    Metadata:
      Connection From Env:          EventHubConnectionString
      Storage Connection From Env:  AzureWebJobsStorage
    Type:  azure-eventhub
Status:
  Conditions:
    Message:  ScaledObject is defined correctly and is ready for scaling
    Reason:   ScaledObjectReady
    Status:   False
    Type:     Ready
    Message:  Scaling is performed because triggers are active
    Reason:   ScalerActive
    Status:   True
    Type:     Active
    Status:   Unknown
    Type:     Fallback
  External Metric Names:
    s0-azure-eventhub-$Default
  Last Active Time:        2022-04-17T10:30:47Z
  Original Replica Count:  1
  Scale Target GVKR:
    Group:     apps
    Kind:      Deployment
    Resource:  deployments
    Version:   v1
  Scale Target Kind:  apps/v1.Deployment
Events:
  Type    Reason              Age  From           Message
  ----    ------              ---  ----           -------
  Normal  KEDAScalersStarted  10s  keda-operator  Started scalers watch
  Normal  ScaledObjectReady   10s  keda-operator  ScaledObject is ready for scaling
So I'm a bit stuck, as I don't see any errors, but it's still not behaving as expected.
Versions:
Kubernetes version: 1.21.9
KEDA Version: 2.6.1 installed using kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.6.1/keda-2.6.1.yaml
Azure Functions using Java 11 and extensionBundle in host.json is configured using version [2.8.4, 3.0.0)
Was able to find a solution to the problem.
Event Hub triggered Azure Functions deployed on AKS show the same scaling characteristics as Azure Functions on App Service:
You only get one consumer per partition to allow for ordering per partition.
This characteristic effectively overrules the maxReplicaCount set in the ScaledObject.
So to solve my own issue: by increasing the number of partitions on the Event Hub, I get one pod per partition and KEDA scales the workload as expected.
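For illustration, the partition count is fixed at creation time on the Basic and Standard tiers, so the fix amounts to (re)creating the Event Hub with enough partitions for the parallelism you want. A sketch with placeholder resource names:

# Create an Event Hub with enough partitions for the desired number of pods
az eventhubs eventhub create \
  --resource-group my-rg \
  --namespace-name my-eventhub-namespace \
  --name eventlogger \
  --partition-count 20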
When configuring Slack alerts in AlertmanagerConfig, I am getting the following error (when releasing the Helm chart on the Kubernetes cluster):
Error: UPGRADE FAILED: error validating "": error validating data:
ValidationError(AlertmanagerConfig.spec.receivers[0]): unknown field
"slack_configs" in
com.coreos.monitoring.v1alpha1.AlertmanagerConfig.spec.receivers
My alertmanagerconfig.yaml file looks as follows:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: {{ template "theresa.fullname" . }}-alertmanager-config
  labels:
    alertmanagerConfig: email-notifications
spec:
  route:
    receiver: 'slack-email'
  receivers:
  - name: 'slack-email'
    slack_configs:
    - channel: '#cmr-orange-alerts'
      api_url: ..
      send_resolved: true
      icon_url: ..
      title: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}"
      text: ..
You are trying to create a k8s resource of kind AlertmanagerConfig, but you are using the syntax of the plain Prometheus/Alertmanager configuration file, not the resource's syntax (the CRD uses camelCase field names such as slackConfigs).
Check the syntax here:
https://docs.openshift.com/container-platform/4.7/rest_api/monitoring_apis/alertmanagerconfig-monitoring-coreos-com-v1alpha1.html
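As a sketch, the receiver above would look roughly like this in the CRD's camelCase syntax; note that apiURL references a Secret rather than taking the webhook URL inline (the secret name and key below are assumptions):

receivers:
- name: 'slack-email'
  slackConfigs:
  - channel: '#cmr-orange-alerts'
    sendResolved: true
    apiURL:
      name: slack-webhook   # assumed Secret containing the Slack webhook URL
      key: url
    iconURL: ..
    title: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}"
    text: ..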
I am attempting to deploy a Helm chart to AKS using FluxCD. The chart has been pushed to Azure ACR using the Helm cli - "helm push ...". The chart is declared in the ACR as helm/release-services:0.1.0
I am receiving the following error after a Flux reconcile:
'chart pull error: failed to get chart version for remote reference:
no chart name found'
with helm-controller logs as follows
{"level":"info","ts":"2022-02-07T12:40:18.121Z","logger":"controller.helmrelease","msg":"HelmChart 'flux-system/release-services-test-release-services' is not ready","reconciler group":"helm.toolkit.fluxcd.io","reconciler kind":"HelmRelease","name":"release-services","namespace":"release-services-test"}
{"level":"info","ts":"2022-02-07T12:40:18.135Z","logger":"controller.helmrelease","msg":"reconcilation finished in 15.458307ms, next run in 5m0s","reconciler group":"helm.toolkit.fluxcd.io","reconciler kind":"HelmRelease","name":"release-services","namespace":"release-services-test"}
Below is the HelmChart resource in AKS:
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmChart
metadata:
  creationTimestamp: "2022-02-07T07:30:16Z"
  finalizers:
  - finalizers.fluxcd.io
  generation: 1
  name: release-services-test-release-services
  namespace: flux-system
  resourceVersion: "105266699"
  selfLink: /apis/source.toolkit.fluxcd.io/v1beta1/namespaces/flux-system/helmcharts/release-services-test-release-services
  uid: e4820a70-8885-44a1-8dfd-0e2bf7256915
spec:
  chart: release-services
  interval: 5m0s
  reconcileStrategy: ChartVersion
  sourceRef:
    kind: HelmRepository
    name: psbombb-helm-acr-dev
  version: '>=0.1.0'
status:
  conditions:
  - lastTransitionTime: "2022-02-07T11:02:49Z"
    message: 'chart pull error: failed to get chart version for remote reference:
      no chart name found'
    reason: ChartPullFailed
    status: "False"
    type: Ready
  observedGeneration: 1
and the HelmRelease is as follows
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  creationTimestamp: "2022-02-07T04:34:14Z"
  finalizers:
  - finalizers.fluxcd.io
  generation: 9
  labels:
    kustomize.toolkit.fluxcd.io/name: apps
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  name: release-services
  namespace: release-services-test
  resourceVersion: "105341484"
  selfLink: /apis/helm.toolkit.fluxcd.io/v2beta1/namespaces/release-services-test/helmreleases/release-services
  uid: 6a6e5f5c-951d-4655-9c15-fa9fe7421a04
spec:
  chart:
    spec:
      chart: release-services
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: psbombb-helm-acr-dev
        namespace: flux-system
      version: '>=0.1.0'
  install:
    remediation:
      retries: 3
  interval: 5m
  releaseName: release-services
  timeout: 12m
  values:
    image:
      name: release-services
      pullPolicy: IfNotPresent
      registry: <repository>.azurecr.io
      repository: <repository>.azurecr.io/helm/release-services
      tag: 0.1.0
    postgres:
      secret:
        create: false
        existingName: release-services-secrets
status:
  conditions:
  - lastTransitionTime: "2022-02-07T08:27:13Z"
    message: HelmChart 'flux-system/release-services-test-release-services' is not
      ready
    reason: ArtifactFailed
    status: "False"
    type: Ready
  failures: 50
  helmChart: flux-system/release-services-test-release-services
  observedGeneration: 9
Is there anything I am missing that anyone can spot for me please?
Thank you kindly
I think your issue is that Azure Container Registry stores Helm charts as OCI artifacts.
The Flux source controller pulls the index.yaml from an HTTP Helm chart repository to look for tags, and this does not work against an OCI registry.
Here is the GitHub issue for this, where you can see that the Flux maintainers will work on it; as of now the OCI feature is stable as of Helm 3.8.0.
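For reference, newer Flux versions later added OCI support to the source controller. Once available, a HelmRepository pointing at ACR looks roughly like this (a sketch, reusing the names from the question):

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: psbombb-helm-acr-dev
  namespace: flux-system
spec:
  type: oci                                # tells source-controller this is an OCI registry, not an HTTP index
  url: oci://<repository>.azurecr.io/helm  # path the chart was pushed to with helm push
  interval: 5m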
I'm trying to deploy Nextcloud with HPA (replicas, horizontal scaling) on Azure Kubernetes with the official Nextcloud Helm chart and a ReadWriteMany volume created following these official instructions, but the volume never mounts; I get this (or some version thereof) error:
kind: Event
apiVersion: v1
metadata:
  name: nextcloud-6bc9b947bf-z6rlh.16bf7711bc2827a5
  namespace: nextcloud
  uid: c3c5619b-19da-4070-afbb-24bce111ddbe
  resourceVersion: '55858'
  creationTimestamp: '2021-12-10T18:08:27Z'
  managedFields:
    - manager: kubelet
      operation: Update
      apiVersion: v1
      time: '2021-12-10T18:08:27Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:count: {}
        f:firstTimestamp: {}
        f:involvedObject: {}
        f:lastTimestamp: {}
        f:message: {}
        f:reason: {}
        f:source:
          f:component: {}
          f:host: {}
        f:type: {}
involvedObject:
  kind: Pod
  namespace: nextcloud
  name: nextcloud-6bc9b947bf-z6rlh
  uid: 6106d13f-7033-4a4e-a6e9-a8e3947c52a4
  apiVersion: v1
  resourceVersion: '55764'
reason: FailedMount
message: >
  MountVolume.MountDevice failed for volume "nextcloud-rwx" : rpc error: code =
  Internal desc = volume(#azure-secret#aksshare#) mount
  "//nextcloudcluster.file.core.windows.net/aksshare" on
  "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/nextcloud-rwx/globalmount"
  failed with mount failed: exit status 32
  Mounting command: mount
  Mounting arguments: -t cifs -o
  dir_mode=0777,file_mode=0777,gid=33,mfsymlinks,actimeo=30,<masked>
  //nextcloudcluster.file.core.windows.net/aksshare
  /var/lib/kubelet/plugins/kubernetes.io/csi/pv/nextcloud-rwx/globalmount
  Output: mount error(13): Permission denied
  Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log
  messages (dmesg)
source:
  component: kubelet
  host: aks-agentpool-16596208-vmss000002
firstTimestamp: '2021-12-10T18:08:27Z'
lastTimestamp: '2021-12-10T18:08:35Z'
count: 5
type: Warning
eventTime: null
reportingComponent: ''
reportingInstance: ''
Here is my PersistentVolume yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-rwx
  namespace: nextcloud
spec:
  capacity:
    storage: 32Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-secret
    shareName: aksshare
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - gid=33
    - mfsymlinks
PersistentVolumeClaim yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-rwx
  namespace: nextcloud
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 32Gi
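For completeness, the azure-secret referenced by the PV was created following the linked instructions, roughly like this (a sketch; the namespace flag and the masked key value are assumptions, the storage account and share names come from the mount error above):

kubectl create secret generic azure-secret \
  --namespace nextcloud \
  --from-literal=azurestorageaccountname=nextcloudcluster \
  --from-literal=azurestorageaccountkey='<storage-account-key>'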
I've also tried changing uid and gid to 0, 1000, etc., and get an even more egregious permission-denied message because it doesn't "match the fsGroup (33)" (hence why I tried gid=33).
Any ideas would be greatly appreciated! Thank you for your time.
I have gone through all the motions and I have what appears to be a common problem. Unfortunately, all of the solutions I've tried from github and SO have yet to work. Here's the error:
Warning Failed 4m (x4 over 5m) kubelet, aks-agentpool-97052351-0 Failed to pull image "ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi": [rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required]
-- created the service principal
az ad sp create-for-rbac \
  --scopes /subscriptions/11870e73-bdb2-47b0-bf27-25d24c41ae24/resourcegroups/USS-MicroService-Test/providers/Microsoft.ContainerRegistry/registries/UssMicroServiceRegistry \
  --role Reader \
  --name kimage-reader
-- created the secret for Kube
kubectl create secret docker-registry kimagereadersecret --docker-server ussmicroserviceregistry.azurecr.io --docker-email coreyp#united-systems.com --docker-username=kimage-reader --docker-password 4b37b896-a04e-48b4-a950-5f1abdd3e7aa
-- kubectl.exe describe pod simpledotnetapi-deployment-6fbf97df55-2hg2m
Name:               simpledotnetapi-deployment-6fbf97df55-2hg2m
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-agentpool-97052351-0/10.240.0.4
Start Time:         Mon, 17 Jun 2019 15:22:30 -0500
Labels:             app=simpledotnetapi-pod
                    pod-template-hash=6fbf97df55
Annotations:        <none>
Status:             Pending
IP:                 10.240.0.26
Controlled By:      ReplicaSet/simpledotnetapi-deployment-6fbf97df55
Containers:
  simpledotnetapi-simpledotnetapi:
    Container ID:
    Image:          ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi
    Image ID:
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hj9b5 (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  default-token-hj9b5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hj9b5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                               Message
  ----     ------     ----               ----                               -------
  Normal   Scheduled  5m                 default-scheduler                  Successfully assigned default/simpledotnetapi-deployment-6fbf97df55-2hg2m to aks-agentpool-97052351-0
  Normal   BackOff    4m (x6 over 5m)    kubelet, aks-agentpool-97052351-0  Back-off pulling image "ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi"
  Normal   Pulling    4m (x4 over 5m)    kubelet, aks-agentpool-97052351-0  pulling image "ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi"
  Warning  Failed     4m (x4 over 5m)    kubelet, aks-agentpool-97052351-0  Failed to pull image "ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi": [rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required]
  Warning  Failed     4m (x4 over 5m)    kubelet, aks-agentpool-97052351-0  Error: ErrImagePull
  Warning  Failed     24s (x22 over 5m)  kubelet, aks-agentpool-97052351-0  Error: ImagePullBackOff
-- kubectl.exe get pod simpledotnetapi-deployment-6fbf97df55-2hg2m -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2019-06-17T20:22:30Z
  generateName: simpledotnetapi-deployment-6fbf97df55-
  labels:
    app: simpledotnetapi-pod
    pod-template-hash: 6fbf97df55
  name: simpledotnetapi-deployment-6fbf97df55-2hg2m
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: simpledotnetapi-deployment-6fbf97df55
    uid: a99e4ac8-8ec3-11e9-9bf8-86d46846735e
  resourceVersion: "813190"
  selfLink: /api/v1/namespaces/default/pods/simpledotnetapi-deployment-6fbf97df55-2hg2m
  uid: a1c220a2-913d-11e9-801a-c6aef815c06a
spec:
  containers:
  - image: ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi
    imagePullPolicy: Always
    name: simpledotnetapi-simpledotnetapi
    ports:
    - containerPort: 5000
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-hj9b5
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: kimagereadersecret
  nodeName: aks-agentpool-97052351-0
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-hj9b5
    secret:
      defaultMode: 420
      secretName: default-token-hj9b5
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-06-17T20:22:30Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-06-17T20:22:30Z
    message: 'containers with unready status: [simpledotnetapi_simpledotnetapi]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2019-06-17T20:22:30Z
    message: 'containers with unready status: [simpledotnetapi_simpledotnetapi]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2019-06-17T20:22:30Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi
    imageID: ""
    lastState: {}
    name: simpledotnetapi-simpledotnetapi
    ready: false
    restartCount: 0
    state:
      waiting:
        message: Back-off pulling image "ussmicroserviceregistry.azurecr.io/simpledotnetapi-simpledotnetapi"
        reason: ImagePullBackOff
  hostIP: 10.240.0.4
  phase: Pending
  podIP: 10.240.0.26
  qosClass: BestEffort
  startTime: 2019-06-17T20:22:30Z
-- yaml configuration file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpledotnetapi-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simpledotnetapi-pod
  template:
    metadata:
      labels:
        app: simpledotnetapi-pod
    spec:
      imagePullSecrets:
      - name: kimagereadersecret
      containers:
      - name: simpledotnetapi_simpledotnetapi
        image: ussmicroserviceregistry.azurecr.io/simpledotnetapi-simpledotnetapi
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: simpledotnetapi-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: simpledotnetapi
    type: front-end
-- output of kubectl get secret kimagereadersecret
NAME TYPE DATA AGE
kimagereadersecret kubernetes.io/dockerconfigjson 1 1h
-- credentials/secret from Kube dashboard
{
  "kind": "Secret",
  "apiVersion": "v1",
  "metadata": {
    "name": "kimagereadersecret",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/secrets/kimagereadersecret",
    "uid": "86006aff-9156-11e9-801a-c6aef815c06a",
    "resourceVersion": "830006",
    "creationTimestamp": "2019-06-17T23:20:41Z"
  },
  "data": {
    ".dockerconfigjson": "eyJhdXRocyI6eyJ1c3NtaWNyb3NlcnZpY2VyZWdpc3RyeS5henVyZWNyLmlvIjp7InVzZXJuYW1lIjoiMzNjYjBjZTQtOTVmMC00NGJkLWJiYmYtNTZkNTA2ZmY0ZWIzIiwicGFzc3dvcmQiOiI0YjM3Yjg5Ni1hMDRlLTQ4YjQtYTk1MC01ZjFhYmRkM2U3YWEiLCJlbWFpbCI6ImNvcmV5cEB1bml0ZWQtc3lzdGVtcy5jb20iLCJhdXRoIjoiTXpOallqQmpaVFF0T1RWbU1DMDBOR0prTFdKaVltWXROVFprTlRBMlptWTBaV0l6T2pSaU16ZGlPRGsyTFdFd05HVXRORGhpTkMxaE9UVXdMVFZtTVdGaVpHUXpaVGRoWVE9PSJ9fX0="
  },
  "type": "kubernetes.io/dockerconfigjson"
}
-- Full dump from the Kube Dashboard
Failed to pull image "ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi": [rpc error: code = Unknown desc = Error response from daemon: manifest for ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi:latest not found: manifest unknown: manifest unknown, rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required]
The entire project is in GitHub # https://github.com/coreyperkins/KubeSimpleDotNetApi
-- ACR screenshot
-- Pod Failure in Kube
I'm fairly certain you didn't give it enough permissions:
az ad sp create-for-rbac \
  --scopes /subscriptions/11870e73-bdb2-47b0-bf27-25d24c41ae24/resourcegroups/USS-MicroService-Test/providers/Microsoft.ContainerRegistry/registries/UssMicroServiceRegistry \
  --role Reader \
  --name kimage-reader
The role should be acrpull, not Reader. Also, just delete the kimagereadersecret secret and the reference to it in the pod; Kubernetes will handle that for you.
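For illustration, granting the pull permission can look roughly like this (a sketch; the cluster name and resource group placeholders are assumptions, and --attach-acr needs a reasonably recent Azure CLI):

# Option 1: create the role assignment with the AcrPull role on the registry
az ad sp create-for-rbac \
  --scopes /subscriptions/11870e73-bdb2-47b0-bf27-25d24c41ae24/resourcegroups/USS-MicroService-Test/providers/Microsoft.ContainerRegistry/registries/UssMicroServiceRegistry \
  --role acrpull \
  --name kimage-reader

# Option 2: attach the registry to the cluster and let AKS grant AcrPull itself
az aks update \
  --name <aks-cluster-name> \
  --resource-group <aks-resource-group> \
  --attach-acr UssMicroServiceRegistry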
Looks like you may be missing the kimagereadersecret in your Kubernetes cluster. As I understand it, az ad sp create-for-rbac just creates access to Azure resources, but how does k8s know which credentials to use to pull from the registry? You can follow this to create the registry secret. You can check that it exists with:
$ kubectl get secret kimagereadersecret
In your case, it could be that it's defaulting to no credentials, or using whatever you have configured for Docker, which doesn't have access to ussmicroserviceregistry.azurecr.io/simpledotnetapi-simpledotnetapi.
For your issue, it may just be a small mistake. Everything else you have done is OK; in the deployment you just need to give the image a tag, like below:
image: ussmicroserviceregistry.azurecr.io/simpledotnetapi-simpledotnetapi:tag
Set the tag to the same one you pushed to the ACR and it will work. If you do not set a tag, it defaults to latest, which is probably not what is in your registry.