Azure private registry for docker image

Below is my YAML file to create a container group with two containers, named fluentd and mapp.
For the mapp container I want to pull the image from a private repository. I am not using Azure Container Registry, and I have no experience with it either.
I also want to push the logs to Log Analytics.
apiVersion: 2019-12-01
location: eastus2
name: mycontainergroup003
properties:
  containers:
  - name: mycontainer003
    properties:
      environmentVariables: []
      image: fluent/fluentd
      ports: []
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
  - name: mapp-log
    properties:
      image: reg-dev.rx.com/gl/xg/iss/mapp/com.corp.mapp:1.0.0-SNAPSHOT_latest
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
      ports:
      - port: 80
      - port: 8080
      command:
      - /bin/sh
      - -c
      - >
        i=0;
        while true;
        do
          echo "$i: $(date)" >> /var/log/1.log;
          echo "$(date) INFO $i" >> /var/log/2.log;
          i=$((i+1));
          sleep 1;
        done
  imageRegistryCredentials:
  - server: reg-dev.rx.com
    username: <username>
    password: <password>
  osType: Linux
  restartPolicy: Always
  diagnostics:
    logAnalytics:
      workspaceId: <id>
      workspaceKey: <key>
tags: null
type: Microsoft.ContainerInstance/containerGroups
I am executing the command below to run the YAML, and it fails with this error:
>az container create -g rg-np-tp-ip01-deployt-docker-test --name mycontainergroup003 --file .\azure-deploy-aci-2.yaml
(InaccessibleImage) The image 'reg-dev.rx.com/gl/xg/iss/mapp/com.corp.mapp:1.0.0-SNAPSHOT_latest' in container group 'mycontainergroup003' is not accessible. Please check the image and registry credential.
Code: InaccessibleImage
Message: The image 'reg-dev.rx.com/gl/xg/iss/mapp/com.corp.mapp:1.0.0-SNAPSHOT_latest' in container
group 'mycontainergroup003' is not accessible. Please check the image and registry credential.
How can I make the image registry reg-dev.rx.com accessible from Azure? Until now I have used the same image registry in every YAML and ran the 'kubectl apply' command, but now I am trying to run the YAML via the Azure CLI.
Can someone please help?

The error you are getting usually occurs when you give a wrong login server name, wrong credentials, or a wrong name for the image that you are trying to pull.
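Before changing anything, it is worth verifying the credentials outside of Azure first. A minimal check, using the registry and image from the question (the <username> placeholder is yours):
docker login reg-dev.rx.com -u <username>
docker pull reg-dev.rx.com/gl/xg/iss/mapp/com.corp.mapp:1.0.0-SNAPSHOT_latest
If this pull fails from your own machine, ACI will not be able to pull the image either; if it succeeds, the server, username, and password in imageRegistryCredentials must match exactly what you used here.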
I could not test with the exact private registry you are using, but the same thing can be achieved using Azure Container Registry. I tested it in my environment and it is working fine for me; you can apply the same approach in your environment as well.
You can push your existing image into ACR using the commands below.
Step 1: Log in to Azure
az login
Step 2: Create a container registry
az acr create -g "<resource group>" -n "TestMyAcr90" --sku Basic --admin-enabled true
Step 3: Tag the docker image in the format loginserver/imagename
docker tag 0e901e68141f testmyacr90.azurecr.io/my_nginx
Step 4: Log in to the ACR
docker login testmyacr90.azurecr.io
Step 5: Push the docker image into the container registry
docker push testmyacr90.azurecr.io/my_nginx
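To confirm the push landed, you can list the repositories in the registry; a quick check, assuming the TestMyAcr90 registry created in Step 2:
az acr repository list --name TestMyAcr90 --output table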
YAML FILE
apiVersion: 2019-12-01
location: eastus2
name: mycontainergroup003
properties:
  containers:
  - name: mycontainer003
    properties:
      environmentVariables: []
      image: fluent/fluentd
      ports: []
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
  - name: mapp-log
    properties:
      image: testmyacr90.azurecr.io/my_nginx:latest
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
      ports:
      - port: 80
      - port: 8080
      command:
      - /bin/sh
      - -c
      - >
        i=0;
        while true;
        do
          echo "$i: $(date)" >> /var/log/1.log;
          echo "$(date) INFO $i" >> /var/log/2.log;
          i=$((i+1));
          sleep 1;
        done
  imageRegistryCredentials:
  - server: testmyacr90.azurecr.io
    username: TestMyAcr90
    password: SJ9I6XXXXXXXXXXXZXVSgaH
  osType: Linux
  restartPolicy: Always
  diagnostics:
    logAnalytics:
      workspaceId: dc742888-fd4d-474c-b23c-b9b69de70e02
      workspaceKey: ezG6IXXXXX_XXXXXXXVMsFOosAoR+1zrCDp9ltA==
tags: null
type: Microsoft.ContainerInstance/containerGroups
You can get the login server name, username, and password of the ACR from the registry's Access keys blade in the Azure portal.
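If you prefer the CLI over the portal, the same admin credentials can also be read with az acr credential show; a small sketch, again assuming the TestMyAcr90 registry:
az acr credential show --name TestMyAcr90 --query "{username:username, password:passwords[0].value}" --output table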
I successfully ran the file and was able to create the container group along with the two containers declared in the file.

Related

Running Terraform on Azure Container Instances (ACI)

I am trying to run some Terraform code on Azure Container Instances, but I can't get it to actually start the container.
I am running it from the CLI, using a YAML file. There is a main.tf with a hello-world config.
This is the YAML:
apiVersion: '2019-12-01'
location: ukwest
name: file-share-demo
properties:
  containers:
  - name: hellofiles
    properties:
      environmentVariables: []
      command: [
        terraform -chdir=/data/terraform/hello/ apply --auto-approve
      ]
      image: hashicorp/terraform
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - mountPath: /data
        name: filesharevolume
  osType: Linux
  restartPolicy: Never
  volumes:
  - name: filesharevolume
    azureFile:
      sharename: scripts
      storageAccountName: 'name'
      storageAccountKey: 'key'
tags: {}
type: Microsoft.ContainerInstance/containerGroups
I passed all kinds of commands but I can't get it to launch...
The command needed to get this working is
["/bin/sh", "-c", "/bin/terraform -chdir=/apps apply --auto-approve"]
If you try to split it into separate command entries, only the terraform part runs. I've no clue why, but that works!
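To see why a container did not start (or what it printed before exiting), the container logs are usually the quickest check; a sketch, with the resource group left as a placeholder since it is not given in the question:
az container logs --resource-group <resource-group> --name file-share-demo --container-name hellofiles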

Azure Containers deployment - "Operation failed with status 200: Resource State Failed"

From Azure we try to create a container using Azure Container Instances with a prepared YAML. From the machine where we execute the az container create command we can log in successfully to our private registry (e.g. fa-docker-snapshot-local.docker.comp.dev on JFrog Artifactory) after entering the password, and we can docker pull the image as well:
docker login fa-docker-snapshot-local.docker.comp.dev -u svc-faselect
Login succeeded
So we can pull it successfully, and the image path is the same as when doing docker pull manually:
image: fa-docker-snapshot-local.docker.comp.dev/fa/ads:test1
We have a YAML file for the deployment and we try to create the container using the az command from the SAME server. In the YAML file we have set up the same registry information: server, username, and password, and the same image:
az container create --resource-group FRONT-SELECT-NA2 --file ads-azure.yaml
When we execute this command, it runs for 30 minutes and after that this message is displayed: "Deployment failed. Operation failed with status 200: Resource State Failed"
Full YAML:
apiVersion: '2019-12-01'
location: eastus2
name: ads-test-group
properties:
  containers:
  - name: front-arena-ads-test
    properties:
      image: fa-docker-snapshot-local.docker.comp.dev/fa/ads:test1
      environmentVariables:
      - name: 'DBTYPE'
        value: 'odbc'
      command:
      - /opt/front/arena/sbin/ads_start
      - ads_start
      - '-unicode'
      - '-db_server test01'
      - '-db_name HEDGE2_ADM_Test1'
      - '-db_user sqldbadmin'
      - '-db_password pass'
      - '-db_client_user HEDGE2_ADM_Test1'
      - '-db_client_password Password55'
      ports:
      - port: 9000
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 4
      volumeMounts:
      - mountPath: /opt/front/arena/host
        name: ads-filesharevolume
  imageRegistryCredentials: # Credentials to pull a private image
  - server: fa-docker-snapshot-local.docker.comp.dev
    username: svcacct-faselect
    password: test
  ipAddress:
    type: Private
    ports:
    - protocol: tcp
      port: '9000'
  volumes:
  - name: ads-filesharevolume
    azureFile:
      sharename: azurecontainershare
      storageAccountName: frontarenastorage
      storageAccountKey: kdUDK97MEB308N=
  networkProfile:
    id: /subscriptions/746feu-1537-1007-b705-0f895fc0f7ea/resourceGroups/SELECT-NA2/providers/Microsoft.Network/networkProfiles/fa-aci-test-networkProfile
  osType: Linux
  restartPolicy: Always
tags: null
type: Microsoft.ContainerInstance/containerGroups
Can you please help us understand why this error occurs?
Thank you
As far as I know, there is nothing wrong with your YAML file, so I can only give you some possible reasons:
Make sure the configurations are all right: the server URL, username, and password, and also the image name and tag;
Change the port from '9000' to 9000, I mean remove the quotes;
Take a look at the note in the docs: maybe the volume mount makes the container crash. In that case you need to mount the file share to a new folder, I mean a folder that does not already exist in the image.
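If it still fails after those changes, the container group's events usually reveal whether the image pull or the volume mount is the failing step; a hedged sketch using the names from your YAML:
az container show --resource-group FRONT-SELECT-NA2 --name ads-test-group --query "containers[0].instanceView.events" --output table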

Issue when running application in devspaces under AKS cluster

I created an AKS cluster with HTTP routing enabled. I also have my project with Dev Spaces enabled to use the cluster. While running azds up, the app creates all the necessary deployment files (helm.yaml, charts.yaml, values.yaml). However, I want to access my app through a public endpoint with a Dev Spaces URL, but when I do azds list-uris it only gives a localhost URL, not the public Dev Spaces URL.
Can anyone please help?
My azds.yaml looks like below:
kind: helm-release
apiVersion: 1.1
build:
  context: .
  dockerfile: Dockerfile
install:
  chart: charts/webfrontend
  values:
  - values.dev.yaml?
  - secrets.dev.yaml?
  set:
    # Optionally, specify an array of imagePullSecrets. These secrets must be manually created in the namespace.
    # This will override the imagePullSecrets array in values.yaml file.
    # If the dockerfile specifies any private registry, the imagePullSecret for that registry must be added here.
    # ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
    #
    # For example, the following uses credentials from secret "myRegistryKeySecretName".
    #
    # imagePullSecrets:
    # - name: myRegistryKeySecretName
    replicaCount: 1
    image:
      repository: webfrontend
      tag: $(tag)
      pullPolicy: Never
    ingress:
      annotations:
        kubernetes.io/ingress.class: traefik-azds
      hosts:
      # This expands to form the service's public URL: [space.s.][rootSpace.]webfrontend.<random suffix>.<region>.azds.io
      # Customize the public URL by changing the 'webfrontend' text between the $(rootSpacePrefix) and $(hostSuffix) tokens
      # For more information see https://aka.ms/devspaces/routing
      - $(spacePrefix)$(rootSpacePrefix)webfrontend$(hostSuffix)
configurations:
  develop:
    build:
      dockerfile: Dockerfile.develop
      useGitIgnore: true
      args:
        BUILD_CONFIGURATION: ${BUILD_CONFIGURATION:-Debug}
    container:
      sync:
      - "**/Pages/**"
      - "**/Views/**"
      - "**/wwwroot/**"
      - "!**/*.{sln,csproj}"
      command: [dotnet, run, --no-restore, --no-build, --no-launch-profile, -c, "${BUILD_CONFIGURATION:-Debug}"]
      iterate:
        processesToKill: [dotnet, vsdbg, webfrontend]
        buildCommands:
        - [dotnet, build, --no-restore, -c, "${BUILD_CONFIGURATION:-Debug}"]
I followed the guide below:
https://microsoft.github.io/AzureTipsAndTricks/blog/tip228.html
azds up is giving an endpoint on my localhost:
Service 'webfrontend' port 80 (http) is available via port forwarding at http://localhost:50597
Does your azds.yaml file have an ingress definition for the public 'webfrontend' domain?
Here is an example azds.yaml file created using the .NET Core sample application:
kind: helm-release
apiVersion: 1.1
build:
  context: .
  dockerfile: Dockerfile
install:
  chart: charts/webfrontend
  values:
  - values.dev.yaml?
  - secrets.dev.yaml?
  set:
    replicaCount: 1
    image:
      repository: webfrontend
      tag: $(tag)
      pullPolicy: Never
    ingress:
      annotations:
        kubernetes.io/ingress.class: traefik-azds
      hosts:
      # This expands to [space.s.][rootSpace.]webfrontend.<random suffix>.<region>.azds.io
      # Customize the public URL by changing the 'webfrontend' text between the $(rootSpacePrefix) and $(hostSuffix) tokens
      # For more information see https://aka.ms/devspaces/routing
      - $(spacePrefix)$(rootSpacePrefix)webfrontend$(hostSuffix)
configurations:
  develop:
    build:
      dockerfile: Dockerfile.develop
      useGitIgnore: true
      args:
        BUILD_CONFIGURATION: ${BUILD_CONFIGURATION:-Debug}
    container:
      sync:
      - "**/Pages/**"
      - "**/Views/**"
      - "**/wwwroot/**"
      - "!**/*.{sln,csproj}"
      command: [dotnet, run, --no-restore, --no-build, --no-launch-profile, -c, "${BUILD_CONFIGURATION:-Debug}"]
      iterate:
        processesToKill: [dotnet, vsdbg]
        buildCommands:
        - [dotnet, build, --no-restore, -c, "${BUILD_CONFIGURATION:-Debug}"]
More about it: https://learn.microsoft.com/pl-pl/azure/dev-spaces/how-dev-spaces-works-prep
How many service URLs do you see in the 'azds up' log? Are you seeing something similar to:
Service 'webfrontend' port 'http' is available at http://webfrontend.XXX
Did you follow this guide?
https://learn.microsoft.com/pl-pl/azure/dev-spaces/troubleshooting#dns-name-resolution-fails-for-a-public-url-associated-with-a-dev-spaces-service
Do you have the latest version of the azds CLI?
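For that last point, a quick way to check is to print the client version and re-list the endpoints after azds up finishes; a sketch:
azds --version
azds list-uris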

Pull azure container registry image in kubernetes helm

I'm working on a project using Helm on Kubernetes and Azure Kubernetes Service, in which I'm trying to use a simple Node image that I have pushed to Azure Container Registry inside my Helm chart, but it returns an ImagePullBackOff error.
Here are some details:
My Dockerfile:
FROM node:8
# Create app directory
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 32000
CMD [ "npm", "start" ]
My helm_chart/values.yaml:
replicaCount: 1
image:
  registry: helmcr.azurecr.io
  repository: helloworldtest
  tag: 0.7
  pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
service:
  name: http
  type: LoadBalancer
  port: 32000
  internalPort: 32000
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  paths: []
  hosts:
    - name: mychart.local
      path: /
  tls: []
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
When I try to pull the image directly using the command below:
docker pull helmcr.azurecr.io/helloworldtest:0.7
then it pulls the image successfully.
What can be wrong here?
Thanks in advance!
Your Kubernetes cluster needs to be authenticated to the container registry to pull images; generally this is done with a docker-registry secret:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
If you are using AKS, you can instead grant the cluster's application ID (service principal) pull rights on the registry; that is enough.
Reading: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
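For the AKS route, a hedged sketch of granting those pull rights (the registry name helmcr comes from your values.yaml; the cluster and resource group names are placeholders):
# Option 1: attach the registry to the cluster (requires a recent az CLI):
az aks update --name <aks-cluster> --resource-group <resource-group> --attach-acr helmcr
# Option 2: grant the cluster's service principal the AcrPull role explicitly:
ACR_ID=$(az acr show --name helmcr --query id --output tsv)
CLIENT_ID=$(az aks show --name <aks-cluster> --resource-group <resource-group> --query servicePrincipalProfile.clientId --output tsv)
az role assignment create --assignee "$CLIENT_ID" --role AcrPull --scope "$ACR_ID"
With either option in place, no imagePullSecrets entry is needed in the chart.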

Kub Cluster on Azure Container Service routes to 404, while my docker image works fine in my local?

I've created a docker image on CentOS by enabling systemd services and built my image. I created a docker-compose.yml file, ran docker-compose up -d, the image gets built, and I can hit my application at localhost:8080/my/app.
I was using this tutorial - https://carlos.mendible.com/2017/12/01/deploy-your-first-service-to-azure-container-services-aks/.
After I was done with my docker image, I pushed it to Azure Container Registry and then created an Azure Container Service (AKS) cluster. Then I deployed that same working docker image to the AKS cluster, and I get 404 page not found when trying to access the load balancer's public IP. I got into the Kubernetes machine and tried to curl localhost:8080/my/app - still 404.
I see my services are up and running without any issue inside the Kubernetes pod, and the configuration is pretty much the same as in my docker container.
Here is my Dockerfile:
#Dockerfile based on latest CentOS 7 image
FROM c7-systemd-httpd-local
RUN yum install -y epel-release # for nginx
RUN yum install -y initscripts # for old "service"
ENV container docker
RUN yum install -y bind bind-utils
RUN systemctl enable named.service
# webserver service
RUN yum install -y nginx
RUN systemctl enable nginx.service
# Without this, init won't start the enabled services and exec'ing and starting
# them reports "Failed to get D-Bus connection: Operation not permitted".
VOLUME /run /tmp
# Don't know if it's possible to run services without starting this
ENTRYPOINT [ "/usr/sbin/init" ]
VOLUME ["/sys/fs/cgroup"]
RUN mkdir -p /myappfolder
COPY . myappfolder
WORKDIR ./myappfolder
RUN sh ./setup.sh
WORKDIR /
EXPOSE 8080
CMD ["/bin/startServices.sh"]
Here is my docker-compose.yml:
version: '3'
services:
  myapp:
    build: ./myappfolder
    container_name: myapp
    environment:
      - container=docker
    ports:
      - "8080:8080"
    privileged: true
    cap_add:
      - SYS_ADMIN
    security_opt:
      - seccomp:unconfined
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    command: "bash -c /usr/sbin/init"
Here is my Kubernetes YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - args:
        - bash
        - -c
        - /usr/sbin/init
        env:
        - name: container
          value: docker
        name: myapp
        image: myapp.azurecr.io/newinstalled_app:v1
        ports:
        - containerPort: 8080
        args: ["--allow-privileged=true"]
        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]
          privileged: true
        #command: ["bash", "-c", "/usr/sbin/init"]
      imagePullSecrets:
      - name: myapp-test
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: myapp
I used these commands -
1. az group create --name resource group --location eastus
2. az aks create --resource-group rename --name kubname --node-count 1 --generate-ssh-keys
3. az aks get-credentials --resource-group rename --name kubname
4. kubectl get cs
5. kubectl cluster-info
6. kubectl create -f yamlfile.yml
7. kubectl get po --watch
8. kubectl get svc --watch
9. kubectl get pods
10. kubectl exec -it myapp-66678f7645-2r58w -- bash
entered into the pod - it's 404.
11. kubectl get svc -> External IP - 104.43.XX.XXX:8080/my/app -> goes to 404.
But with my docker-compose up -d, I can get into the application.
Am I missing anything?
Figured it out. I needed to have the load balancer listening on port 80 with the destination port set to 8080.
That's the only change I made and things started working fine.
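In Service terms that means exposing port 80 while targeting the container's 8080. One way to apply it to the existing Service is a patch; a sketch (equivalently, set port: 80 and targetPort: 8080 in the Service section of the YAML and re-apply it):
kubectl patch svc myapp --type merge --patch '{"spec":{"ports":[{"port":80,"targetPort":8080}]}}'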
Thanks!
