We are trying to create a container in Azure using Azure Container Instances with a prepared YAML file. From the machine where we execute the az container create command, we can log in successfully to our private registry (e.g. fa-docker-snapshot-local.docker.comp.dev on JFrog Artifactory) after entering the password, and we can docker pull from it as well:
docker login fa-docker-snapshot-local.docker.comp.dev -u svc-faselect
Login succeeded
So we can pull it successfully, and the image path is the same as the one used for the manual docker pull:
image: fa-docker-snapshot-local.docker.comp.dev/fa/ads:test1
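For reference, the manual pull that succeeds from this machine is:
docker pull fa-docker-snapshot-local.docker.comp.dev/fa/ads:test1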
We have a YAML file for the deployment and are trying to create the container using the az command from the SAME server. In the YAML file we have set the same registry information (server, username, and password) and the same image:
az container create --resource-group FRONT-SELECT-NA2 --file ads-azure.yaml
When we execute this command, it runs for 30 minutes, and after that this message is displayed: "Deployment failed. Operation failed with status 200: Resource State Failed"
Full YAML:
apiVersion: '2019-12-01'
location: eastus2
name: ads-test-group
properties:
containers:
- name: front-arena-ads-test
properties:
image: fa-docker-snapshot-local.docker.comp.dev/fa/ads:test1
environmentVariables:
- name: 'DBTYPE'
value: 'odbc'
command:
- /opt/front/arena/sbin/ads_start
- ads_start
- '-unicode'
- '-db_server test01'
- '-db_name HEDGE2_ADM_Test1'
- '-db_user sqldbadmin'
- '-db_password pass'
- '-db_client_user HEDGE2_ADM_Test1'
- '-db_client_password Password55'
ports:
- port: 9000
protocol: TCP
resources:
requests:
cpu: 1.0
memoryInGB: 4
volumeMounts:
- mountPath: /opt/front/arena/host
name: ads-filesharevolume
imageRegistryCredentials: # Credentials to pull a private image
- server: fa-docker-snapshot-local.docker.comp.dev
username: svcacct-faselect
password: test
ipAddress:
type: Private
ports:
- protocol: tcp
port: '9000'
volumes:
- name: ads-filesharevolume
azureFile:
sharename: azurecontainershare
storageAccountName: frontarenastorage
storageAccountKey: kdUDK97MEB308N=
networkProfile:
id: /subscriptions/746feu-1537-1007-b705-0f895fc0f7ea/resourceGroups/SELECT-NA2/providers/Microsoft.Network/networkProfiles/fa-aci-test-networkProfile
osType: Linux
restartPolicy: Always
tags: null
type: Microsoft.ContainerInstance/containerGroups
Can you please help us understand why this error occurs?
Thank you.
As far as I can tell, there is nothing wrong with your YAML file; I can only offer some possible causes.
Make sure the configuration is all correct: the server URL, the username and password, and also the image name and tag. Note that your manual docker login uses the user svc-faselect, while the YAML uses svcacct-faselect;
Change the port from '9000' to 9000, i.e. remove the quotes;
Take a look at the note in the docs: the volume mount may crash the container if it hides existing files. In that case, mount the file share to a new folder, i.e. a folder that does not already exist in the image; see the sketch below.
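For example, the changed parts of the YAML would look like this (a sketch; /opt/front/arena/host2 is just a stand-in for any folder that does not already exist in the image):
volumeMounts:
- mountPath: /opt/front/arena/host2
  name: ads-filesharevolume
ipAddress:
  type: Private
  ports:
  - protocol: tcp
    port: 9000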
Below is my YAML file to create a container group with two containers, named fluentd and mapp.
For the mapp container, I want to pull the image from a private repository. I am not using Azure Container Registry, and I do not have any experience with it either.
I want to push the logs to Log Analytics.
apiVersion: 2019-12-01
location: eastus2
name: mycontainergroup003
properties:
containers:
- name: mycontainer003
properties:
environmentVariables: []
image: fluent/fluentd
ports: []
resources:
requests:
cpu: 1.0
memoryInGB: 1.5
- name: mapp-log
properties:
image: reg-dev.rx.com/gl/xg/iss/mapp/com.corp.mapp:1.0.0-SNAPSHOT_latest
resources:
requests:
cpu: 1
memoryInGb: 1.5
ports:
- port: 80
- port: 8080
command:
- /bin/sh
- -c
- >
  i=0;
  while true;
  do
    echo "$i: $(date)" >> /var/log/1.log;
    echo "$(date) INFO $i" >> /var/log/2.log;
    i=$((i+1));
    sleep 1;
  done
imageRegistryCredentials:
- server: reg-dev.rx.com
username: <username>
password: <password>
osType: Linux
restartPolicy: Always
diagnostics:
logAnalytics:
workspaceId: <id>
workspaceKey: <key>
tags: null
type: Microsoft.ContainerInstance/containerGroups
I am executing the below command to run the YAML:
>az container create -g rg-np-tp-ip01-deployt-docker-test --name mycontainergroup003 --file .\azure-deploy-aci-2.yaml
(InaccessibleImage) The image 'reg-dev.rx.com/gl/xg/iss/mapp/com.corp.mapp:1.0.0-SNAPSHOT_latest' in container group 'mycontainergroup003' is not accessible. Please check the image and registry credential.
Code: InaccessibleImage
Message: The image 'reg-dev.rx.com/gl/xg/iss/mapp/com.corp.mapp:1.0.0-SNAPSHOT_latest' in container
group 'mycontainergroup003' is not accessible. Please check the image and registry credential.
How can I make the image registry reg-dev.rx.com accessible from Azure? Until now, I used the same image registry in every YAML and ran the 'kubectl apply' command, but now I am trying to run the YAML via the Azure CLI.
Can someone please help?
The error you are getting usually occurs when the login server name, the credentials, or the image you are trying to pull is wrong.
I could not test with the exact private registry you are using, but the same thing can be achieved using Azure Container Registry. I tested this in my environment and it works fine; you can apply the same approach in yours.
You can push your existing image into ACR using the commands below.
For example, you can proceed like this:
Step 1: Log in to Azure
az login
Step 2: Create a container registry
az acr create -g "<resource group>" -n "TestMyAcr90" --sku Basic --admin-enabled true
Step 3: Tag the Docker image in the format loginserver/imagename
docker tag 0e901e68141f testmyacr90.azurecr.io/my_nginx
Step 4: Log in to ACR
docker login testmyacr90.azurecr.io
Step 5: Push the Docker image to the container registry
docker push testmyacr90.azurecr.io/my_nginx
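If you want to double-check that the push worked, the repository should now be listed in the registry:
az acr repository list --name TestMyAcr90 --output table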
YAML FILE
apiVersion: 2019-12-01
location: eastus2
name: mycontainergroup003
properties:
containers:
- name: mycontainer003
properties:
environmentVariables: []
image: fluent/fluentd
ports: []
resources:
requests:
cpu: 1.0
memoryInGB: 1.5
- name: mapp-log
properties:
image: testmyacr90.azurecr.io/my_nginx:latest
resources:
requests:
cpu: 1
memoryInGb: 1.5
ports:
- port: 80
- port: 8080
command:
- /bin/sh
- -c
- >
i=0;
while true;
do
echo "$i: $(date)" >> /var/log/1.log;
echo "$(date) INFO $i" >> /var/log/2.log;
i=$((i+1));
sleep 1;
done
imageRegistryCredentials:
- server: testmyacr90.azurecr.io
username: TestMyAcr90
password: SJ9I6XXXXXXXXXXXZXVSgaH
osType: Linux
restartPolicy: Always
diagnostics:
logAnalytics:
workspaceId: dc742888-fd4d-474c-b23c-b9b69de70e02
workspaceKey: ezG6IXXXXX_XXXXXXXVMsFOosAoR+1zrCDp9ltA==
tags: null
type: Microsoft.ContainerInstance/containerGroups
You can get the login server name, username, and password of the ACR from the Access keys blade of the registry in the Azure portal.
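Alternatively, because the registry was created with --admin-enabled true in step 2, the same values can be read from the CLI:
az acr credential show --name TestMyAcr90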
The file ran successfully and I was able to create the container group with the two containers declared in the file.
I need to pull an image from the public Docker repository, i.e. hello-world:latest, and run it on Kubernetes. I created the cluster using kind. I ran the image using the below command:
kubectl run test-pod --image=hello-world
Then I did
kubectl describe pods
to get the status of the pods. It threw an ImagePullBackOff error. It seems there is some network issue when pulling the image from within the kind cluster, although I am able to pull the image with Docker directly without any problem.
I have searched the whole internet regarding this issue, but nothing worked. Following is my pod specification:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-05-16T15:01:17Z"
labels:
run: test-pod
name: test-pod
namespace: default
resourceVersion: "4370"
uid: 6ef121e2-805b-4022-9a13-c17c031aea88
spec:
containers:
- image: hello-world
imagePullPolicy: Always
name: test-pod
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-jjsmp
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: kind-control-plane
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-jjsmp
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-05-16T15:01:17Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-05-16T15:01:17Z"
message: 'containers with unready status: [test-pod]'
reason: ContainersNotReady
status: "False"
type: Ready
containerStatuses:
- image: hello-world
imageID: ""
lastState: {}
name: test-pod
ready: false
restartCount: 0
started: false
state:
waiting:
message: Back-off pulling image "hello-world"
reason: ImagePullBackOff
hostIP: 172.18.0.2
phase: Pending
podIP: 10.244.0.5
podIPs:
- ip: 10.244.0.5
qosClass: BestEffort
startTime: "2022-05-16T15:01:17Z"
The ImagePullBackOff error means that Kubernetes couldn't pull the image from the registry and will keep retrying, backing off until it reaches a compiled-in limit of 300 seconds (5 minutes) between attempts. This can happen because Kubernetes is facing one of the following conditions:
You have exceeded the rate or download limit on the registry.
The image registry requires authentication.
There is a typo in the image name or tag.
The image or tag does not exist.
You can start by checking whether you can pull the image locally, or by jumping onto the node (via SSH, or docker exec in the case of kind) and running a pull there to fetch the image directly.
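With kind, each "node" is itself a Docker container, so one way to test the pull path from inside the node is (a sketch, assuming the default node name kind-control-plane; standard kind node images ship with crictl):
docker exec -it kind-control-plane crictl pull docker.io/library/hello-world:latest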
If you still can't pull the image and DNS resolution seems to be the problem, another option is to add 8.8.8.8 as a nameserver in /etc/resolv.conf.
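Inside the node's /etc/resolv.conf, the added line would simply be:
nameserver 8.8.8.8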
Update:
To avoid exposing your kind cluster to the internet, try pulling the image locally on your PC by manually specifying a path on a different registry.
Sample:
docker pull myregistry.local:5000/testing/test-image
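If the local pull succeeds but the cluster still can't pull, kind can also load a locally pulled image straight into the cluster nodes, bypassing the registry entirely; a minimal sketch:
docker pull hello-world:latest
kind load docker-image hello-world:latest
After loading, set imagePullPolicy to IfNotPresent (or Never) in the pod spec so the kubelet uses the preloaded image instead of contacting the registry.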
I created an AKS cluster with HTTP routing enabled. I also have my project set up with Dev Spaces to use the cluster. When running azds up, the app creates all the necessary deployment files (helm.yaml, charts.yaml, values.yaml). However, I want to access my app through a public endpoint with the Dev Spaces URL, but when I run azds list-uris it only gives a localhost URL, not the public Dev Spaces URL.
Can anyone please help?
My azds.yaml looks like below
kind: helm-release
apiVersion: 1.1
build:
context: .
dockerfile: Dockerfile
install:
chart: charts/webfrontend
values:
- values.dev.yaml?
- secrets.dev.yaml?
set:
# Optionally, specify an array of imagePullSecrets. These secrets must be manually created in the namespace.
# This will override the imagePullSecrets array in values.yaml file.
# If the dockerfile specifies any private registry, the imagePullSecret for that registry must be added here.
# ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
#
# For example, the following uses credentials from secret "myRegistryKeySecretName".
#
# imagePullSecrets:
# - name: myRegistryKeySecretName
replicaCount: 1
image:
repository: webfrontend
tag: $(tag)
pullPolicy: Never
ingress:
annotations:
kubernetes.io/ingress.class: traefik-azds
hosts:
# This expands to form the service's public URL: [space.s.][rootSpace.]webfrontend.<random suffix>.<region>.azds.io
# Customize the public URL by changing the 'webfrontend' text between the $(rootSpacePrefix) and $(hostSuffix) tokens
# For more information see https://aka.ms/devspaces/routing
- $(spacePrefix)$(rootSpacePrefix)webfrontend$(hostSuffix)
configurations:
develop:
build:
dockerfile: Dockerfile.develop
useGitIgnore: true
args:
BUILD_CONFIGURATION: ${BUILD_CONFIGURATION:-Debug}
container:
sync:
- "**/Pages/**"
- "**/Views/**"
- "**/wwwroot/**"
- "!**/*.{sln,csproj}"
command: [dotnet, run, --no-restore, --no-build, --no-launch-profile, -c, "${BUILD_CONFIGURATION:-Debug}"]
iterate:
processesToKill: [dotnet, vsdbg, webfrontend]
buildCommands:
- [dotnet, build, --no-restore, -c, "${BUILD_CONFIGURATION:-Debug}"]
I followed the guide below:
https://microsoft.github.io/AzureTipsAndTricks/blog/tip228.html
azds up is giving an endpoint on my localhost:
Service 'webfrontend' port 80 (http) is available via port forwarding at http://localhost:50597
Does your azds.yaml file have an ingress definition for the public 'webfrontend' domain?
Here is an example azds.yaml file created using the .NET Core sample application:
kind: helm-release
apiVersion: 1.1
build:
context: .
dockerfile: Dockerfile
install:
chart: charts/webfrontend
values:
- values.dev.yaml?
- secrets.dev.yaml?
set:
replicaCount: 1
image:
repository: webfrontend
tag: $(tag)
pullPolicy: Never
ingress:
annotations:
kubernetes.io/ingress.class: traefik-azds
hosts:
# This expands to [space.s.][rootSpace.]webfrontend.<random suffix>.<region>.azds.io
# Customize the public URL by changing the 'webfrontend' text between the $(rootSpacePrefix) and $(hostSuffix) tokens
# For more information see https://aka.ms/devspaces/routing
- $(spacePrefix)$(rootSpacePrefix)webfrontend$(hostSuffix)
configurations:
develop:
build:
dockerfile: Dockerfile.develop
useGitIgnore: true
args:
BUILD_CONFIGURATION: ${BUILD_CONFIGURATION:-Debug}
container:
sync:
- "**/Pages/**"
- "**/Views/**"
- "**/wwwroot/**"
- "!**/*.{sln,csproj}"
command: [dotnet, run, --no-restore, --no-build, --no-launch-profile, -c, "${BUILD_CONFIGURATION:-Debug}"]
iterate:
processesToKill: [dotnet, vsdbg]
buildCommands:
- [dotnet, build, --no-restore, -c, "${BUILD_CONFIGURATION:-Debug}"]
More about it: https://learn.microsoft.com/pl-pl/azure/dev-spaces/how-dev-spaces-works-prep
How many service entries do you see in the 'azds up' log? Are you seeing something similar to:
Service 'webfrontend' port 'http' is available at http://webfrontend.XXX
Did you follow this guide?
https://learn.microsoft.com/pl-pl/azure/dev-spaces/troubleshooting#dns-name-resolution-fails-for-a-public-url-associated-with-a-dev-spaces-service
Do you have the latest version of azds?
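You can check the installed client version with:
azds --version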
I would like to link two Docker containers deployed on Azure (ACS).
I have a container running the API server, written in Node.js, and another container running Mongo.
I'd like to use something like "--link mymongodb" as I do on my PC, but there is no such parameter in az container create.
To create the containers I use this syntax:
az container create --name my-app --image myprivateregistry/my-app --resource-group MyResourceGroup --ports 80 --ip-address public
Do I perhaps need to create a virtual network?
Could you point me to the right direction please?
I think you are looking for a feature like Docker Compose on Azure. If you want to use Azure Container Instances, you should take a look at Deploy a multi-container container group with YAML or with an Azure template. It shows how to create multiple containers in a container group, and the containers can connect to each other.
In addition, you can try Azure Kubernetes Service; maybe it can also help you. If you need more help, please leave a message.
You will need to specify both images (your app and Mongo) in an Azure YAML file. It looks like a Docker Compose YAML file, but it isn't.
Assuming your Node.js app runs on port 3000, this could be a YAML configuration for your Azure container group:
apiVersion: 2018-06-01
location: westeurope
name: my-app-with-mongo
properties:
containers:
- name: mongodb
properties:
image: mongo
resources:
requests:
cpu: 1
memoryInGb: 1.5
ports:
- port: 27017
- name: my-app
properties:
image: myprivateregistry/my-app
resources:
requests:
cpu: 1
memoryInGb: 1.5
ports:
- port: 3000
osType: Linux
ipAddress:
type: Public
dnsNameLabel: my-app
ports:
- protocol: tcp
port: '3000'
imageRegistryCredentials:
- server: myprivateregistry.azurecr.io
username: username-for-myprivateregistry
password: password-for-myprivateregistry
tags: null
type: Microsoft.ContainerInstance/containerGroups
Simply start it with
az container create --resource-group MyResourceGroup --file azure-container-group.yml
You can then access your mongo database on localhost:27017 as all containers run on the same host:
Azure Container Instances supports the deployment of multiple containers onto a single host by using a container group.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-multi-container-yaml
Also mind the order of the containers you specify in the YAML file: you will want to start Mongo first, then your Node.js app, as the app probably wants to connect to Mongo on startup.
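For example, you could hand your app the connection string through an environment variable in the same YAML file (a sketch; MONGO_URL is just an illustrative name that your app would have to read):
- name: my-app
  properties:
    image: myprivateregistry/my-app
    environmentVariables:
    - name: 'MONGO_URL'
      value: 'mongodb://localhost:27017/my-app'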
I have been trying to mount an Azure file share in a Kubernetes pod hosted on AKS. So far, I have:
1. Successfully created a secret by base64-encoding the storage account name and the key
2. Created a YAML file specifying the correct configuration
3. Applied it using kubectl apply -f azure-file-pod.yaml, which gives me the following error:
Output: mount error: could not resolve address for
demo.file.core.windows.net: Unknown error
I have an Azure File Share by the name of demo.
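For reference, step 1 amounts to something like this (a sketch with placeholder values; the azureFile volume expects the secret to contain exactly the keys azurestorageaccountname and azurestorageaccountkey, and kubectl handles the base64 encoding for you):
kubectl create secret generic azure-secret \
  --from-literal=azurestorageaccountname=<storage-account-name> \
  --from-literal=azurestorageaccountkey=<storage-account-key>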
Here is my yaml file:
apiVersion: v1
kind: Pod
metadata:
name: azure-files-pod
spec:
containers:
- image: microsoft/sample-aks-helloworld
name: azure
volumeMounts:
- name: azure
mountPath: /mnt/azure
volumes:
- name: azure
azureFile:
secretName: azure-secret
shareName: demo
readOnly: false
How can this possibly be resolved?