Deploy Azure Functions as IoT Edge modules

I have been trying to deploy an IoT Edge module to my IoT Edge device using the following link:
https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-deploy-function
Everything appears to work fine; however, when I right-click the deployment.template.json file and select Build IoT Edge Solution, I get the following output:
PS C:\Users\Carlton\Documents\OnAzureFunction\EdgeSolutionAF> docker build --rm -f "c:\Users\Carlton\Documents\OnAzureFunction\EdgeSolutionAF\modules\edgeonAzureF\Dockerfile.amd64" -t carlscontainer.azurecr.io/edgeonazuref:0.0.1-amd64 "c:\Users\Carlton\Documents\OnAzureFunction\EdgeSolutionAF\modules\edgeonAzureF" ; if ($?) { docker push carlscontainer.azurecr.io/edgeonazuref:0.0.1-amd64 }
Sending build context to Docker daemon 12.29kB
Step 1/3 : FROM mcr.microsoft.com/azureiotedge-functions-binding:1.0.0-linux-amd64
1.0.0-linux-amd64: Pulling from azureiotedge-functions-binding
image operating system "linux" cannot be used on this platform
PS C:\Users\Carlton\Documents\OnAzureFunction\EdgeSolutionAF>
As you can see, Step 1/3 fails with an error.
What should happen is that Visual Studio Code first takes the information in the deployment template and generates a deployment.json file in a new config folder. Then it runs two commands in the integrated terminal: docker build and docker push. These two commands build your code, containerize the functions, and then push the image to the container registry that you specified when you initialized the solution. However, as you can see, docker push never sends the image to the container registry.

Which container mode is the Docker host running in: Linux containers or Windows containers?
The message image operating system "linux" cannot be used on this platform means the daemon is currently in Windows-container mode. You should switch to Linux containers (right-click the Docker icon in the task bar -> Switch to Linux containers).
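A quick way to confirm which mode the daemon is in (a minimal check using the standard Docker CLI; not from the original answer):
# Prints "windows" or "linux" depending on the daemon's current container mode
docker version --format '{{.Server.Os}}'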

Related

Pushing custom images to Azure ACR fails with "Use of closed network connection"

While learning how to use Azure Container Registry with the official tutorial (https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal?tabs=azure-cli), I tried to push images to my registry. The Hello World image from the tutorial works fine, but when I try to use my own images it fails. It also fails when I pull images from Docker Hub and try to push them to my Azure registry.
Of course, the images are correctly tagged and the CLI connection works fine.
I'm also following another Azure course in which I build the image with GitHub Actions (https://learn.microsoft.com/en-us/azure/aks/kubernetes-action). It also works great on the repo of that course, but once I try it with my own projects, it fails. This time the error is about the URL / the credentials.
After investigating, I'm sure that the credentials are correct, but the URL may be wrong because it never gets created. That's why I was trying to push manually in the first place.
EDIT: I managed to make it work by switching to a different Wi-Fi network, but I still don't understand how this is possible, why it doesn't work in GitHub Actions, and what I should change in my configuration to make it work with the original Wi-Fi again.
I tried to reproduce the same issue in my environment and got the output below.
I created a Dockerfile with the following content:
vi Dockerfile
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
I built the image using the command below:
docker build -t my-apache2 .
I ran the image (using the image ID from docker images) with the command below:
docker run -d -p 80:80 image_id
Then I created the container registry.
After creating the registry, enable the admin user under Access keys; otherwise you will not be able to pull the image into Container Instances.
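The same setting can also be enabled from the CLI (a hedged sketch; myregistry is a placeholder for your registry name):
# Enable the admin user so the registry's access keys can be used for docker login
az acr update --name myregistry --admin-enabled true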
I logged in to the registry server:
docker login login_server
Username:XXXX
password:XXXXX
After the login succeeded, I tagged the image and pushed it into the container registry:
docker tag image_name login_server/image_name
docker push login_server/image_name
The pushed image can now be found under Repositories in the registry.
Finally, I created the container instance; while creating it, you have to set the image source to Azure Container Registry, otherwise the pushed image will not be available.
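The container instance can also be created from the CLI (a hedged sketch using the same placeholders as above; the credentials come from the registry's Access keys blade):
# Create a container instance that pulls the pushed image from the registry
az container create --resource-group myRG --name mycontainer --image login_server/image_name --registry-username XXXX --registry-password XXXXX --ports 80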

Azure ACR Tasks API? Have an application running in a Docker container that needs to build and push images to ACR

The application was using the Docker CLI to build and then push an image to Azure Container Registry. This used to work fine on Kubernetes using a Python module and docker.sock, but since the cluster was upgraded, the Docker daemon is gone. I'm guessing the Kubernetes backend no longer uses Docker or no longer has it installed. Also, since Docker support is going away in Kubernetes (I think it was version 1.24), I want to get away from depending on Docker for the build.
When it was working, the application was a Python application running in a Docker container. It would take the Dockerfile, build it, and push the result to Azure Container Registry. There are files that get copied into the image via the Dockerfile, and they all live in the same directory as the Dockerfile.
Anyone know of different methods to achieve this?
I've been looking at Azure ACR Tasks, but I'm not really sure how all the files get copied over to a task, and I have not been able to find any examples.
I can confirm that running an Azure ACR Task (multi-step task or quick task) will copy the files over when the command is executed. We're using Azure ACR quick tasks to achieve something similar. If you're just trying to do the equivalent of docker build and docker push, quick tasks should work fine for you too.
For simplicity I'm going to describe the quick task, because that's what I've used most. Try the following steps from your local machine to see how it works. The same steps should also work from any other environment, provided the machine is authenticated properly.
First make sure you are in the Dockerfile directory and then:
Authenticate to the Azure CLI using az login
Authenticate to your ACR using az acr login --name myacr.
Replace the values accordingly and run az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 .
Your terminal should already show all of the steps that the Dockerfile is executing. Alternatively, you can head over to your ACR and look under Services > Tasks > Runs; you should see every line of the Docker build task appear there.
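The same run history is available from the CLI (a hedged sketch; myacr is the placeholder registry name used above):
# List recent ACR task and quick-build runs with their status
az acr task list-runs --registry myacr --output table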
Note: If you're running this task in an automated fashion and also require access to internal/private resources during the image build, you should consider creating a dedicated agent pool and deploying it in your VNet/subnet instead of using the shared/public agent pools.
In my case, I'm using Terraform to run the az acr build command, and you can see that the Dockerfile executes the COPY commands without any issues.

Self-hosted agent is not working in a Docker container

Problem:
We are trying to run a self-hosted agent on my Windows 10 (Enterprise) machine using the Docker-container approach explained in the article. We can create the Docker image successfully (for Windows) as explained there, but when executing the image with the run command we get the error below. We tried to Google it but didn't find any resolution.
Error:
Determining matching Azure Pipelines agent...
Invoke-RestMethod : The remote name could not be resolved: 'dev.azure.com'
Steps Followed:
Installed Docker Engine on my Windows 10 laptop.
Followed the instructions in the aforementioned article and was able to create the Docker image with docker build.
But while running the command below to start the created image, we get the above error.
docker run -e AZP_URL="https://dev.azure.com/MyOrg/" -e AZP_TOKEN="XXXXXXXXXXXXXXXXXXXXXXXXXX" -e AZP_AGENT_NAME="LocalSelfHostTest1" -e AZP_POOL="LocalSelfHostTest" dockeragent:latest
XXXXXXXXX – PAT generated for my project.
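The error itself is a DNS failure inside the container, so a first check is whether the container can resolve dev.azure.com at all (a hypothetical diagnostic, not from the original post; dockeragent:latest is the image built above):
# Try resolving dev.azure.com from inside the Windows container
docker run --rm --entrypoint powershell dockeragent:latest -Command "Resolve-DnsName dev.azure.com"
# If resolution fails, try pinning an explicit DNS server for the container
docker run --rm --dns 8.8.8.8 --entrypoint powershell dockeragent:latest -Command "Resolve-DnsName dev.azure.com"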
We’ll appreciate your help.
Regards
arvind

Using the Docker base images of Azure Functions

I'm new to both Docker and Azure Functions, so this may be a silly question...
You can pull the images of Azure Functions from Docker Hub, like:
docker pull mcr.microsoft.com/azure-functions/node:3.0-node12
Now I have pulled the image for a specific runtime of Azure Functions, but what exactly can I do with it?
At first I thought I would find Azure Functions Core Tools inside the container; instead I found the azure-functions-host directory with a bunch of files, but I'm not sure what it is.
docker exec -it "TheContainerMadeOfAzureFunctionsImage" bash
-> FuncExtensionBundles azure-functions-host bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
Thank you in advance.
You can install the Remote Development extension pack for VS Code and the Azure Functions extension.
Create your local folder, then, using the remote development tools, open that folder inside a container from the command palette by selecting 'Reopen in Container'.
Then select your definition.
This actually uses those base images you mentioned.
It will create a hidden .devcontainer directory in your repo where it stores the container information, which saves you having to install the Functions Core Tools/npm or anything else on your local machine.
It automatically forwards the required ports for local debugging, and you can push the devcontainer definitions to source control so that others can use your definition with the project.
Last week I solved it myself: I found the exact image on Docker Hub, ran docker pull mcr.microsoft.com/azure-functions/node:3.0-node12-core-tools, and that was it.
You can find a full list of available tags for each runtime.
In the container you can run both Azure Functions Core Tools and a language runtime (such as Node.js or Python), and of course you can create function apps.
With port forwarding, as in docker run -it -p 8080:7071 --name container1 mcr.microsoft.com/azure-functions/node:3.0-node12-core-tools bash, you can debug functions running inside the container (the Functions host listens on port 7071) from your local machine by sending HTTP requests to localhost:8080. This is somewhat brute force, but I'm happy with it.
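As a concrete example of that workflow (a hedged sketch; myapp and HttpExample are hypothetical names):
# Inside the container: scaffold and start a function app with Core Tools
func init myapp --worker-runtime node
cd myapp
func new --name HttpExample --template "HTTP trigger"
func start
# From the host: call the function through the forwarded port
curl http://localhost:8080/api/HttpExample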

spring-boot-kube-deployment-port80-3467990654-5c8nl 0/1 CrashLoopBackOff

Steps followed during the rolling update:
Create an image for the v2 version of the application with some changes.
Re-build the Docker image with Maven (pom.xml). Run this command in SSH or Cloud Shell:
docker build -t gcr.io/satworks-1/springio/gs-spring-boot-docker:v2 .
Push the newly updated Docker image to the Google Container Registry. Run this command in SSH or Cloud Shell:
gcloud docker -- push gcr.io/satworks-1/springio/gs-spring-boot-docker:v2
Apply a rolling update to the existing deployment with an image update. Run this command in SSH or Cloud Shell:
kubectl set image deployment/spring-boot-kube-deployment-port80 spring-boot-kube-deployment-port80=gcr.io/satworks-1/springio/gs-spring-boot-docker:v2
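To watch the rollout progress before revalidating (standard kubectl, not part of the original steps):
# Block until the rolling update completes or times out
kubectl rollout status deployment/spring-boot-kube-deployment-port80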
Revalidate the application again through curl or the browser:
curl 35.227.108.89
and observe that the changes take effect.
When does the "CrashLoopBackOff" error occur, and how can this issue be resolved? Does it happen at the application level or at the Kubernetes pod level?
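CrashLoopBackOff is reported at the Kubernetes pod level, but the root cause is usually at the application level: the container exits shortly after starting, and Kubernetes waits with an increasing back-off before restarting it. A common first diagnostic (standard kubectl; the pod name is taken from the title):
# Show events and the last state of the failing pod
kubectl describe pod spring-boot-kube-deployment-port80-3467990654-5c8nl
# Show the logs of the previous, crashed container instance
kubectl logs spring-boot-kube-deployment-port80-3467990654-5c8nl --previous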
