I have been trying really hard to create a Docker image to be used in the Fargate task so it can serve as runners for us, but I haven't been able to get it working.
I followed this guide: https://docs.gitlab.com/runner/configuration/runner_autoscale_aws_fargate/
But when I create my own Docker image (to replace the example in the guide),
the manager EC2 runner is simply unable to connect to the task, no matter what I have tried.
Please assist
I think my problem is that I am not using the correct mechanism to create the SSH keys.
What I did was create the SSH keys on the manager EC2 instance and pass the public key to the Fargate task
through an environment variable on the task.
The registry.gitlab.com/tmaczukin-test-projects/fargate-driver-debian:latest image is somehow creating that environment variable by itself,
and I have no idea how, or how to do the same in my custom image.
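For reference, the usual pattern here (and, as far as I can tell, what the reference image does) is that the Fargate driver injects the public key into the task through an SSH_PUBLIC_KEY environment variable, and the image's entrypoint writes that key into authorized_keys before starting sshd. A minimal sketch of such an entrypoint, assuming the key really does arrive as SSH_PUBLIC_KEY (this is not the actual contents of the reference image):
#!/bin/sh
# docker-entrypoint.sh (sketch): append the injected public key and start sshd
mkdir -p /root/.ssh
echo "${SSH_PUBLIC_KEY}" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
exec /usr/sbin/sshd -D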
I found the problem.
First of all, I was building the image on an Apple chip, which is arm64, while Fargate runs on amd64, so I needed to run this command:
docker buildx build --platform linux/amd64 -f ./Dockerfile -t repo-name:tag .
So that was one issue.
The other issue was that I didn't give the task execution role permission to pull from the AWS ECR service.
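If it helps anyone: one way to grant that is to attach the AWS-managed task execution policy to the role (the role name below is an assumption; use whatever role your task definition references):
aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy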
Thanks for all the assistance.
Related
While learning how to use Azure Container Registry with the official tutorial (https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal?tabs=azure-cli),
I tried to push images to my registry. The Hello World image in the tutorial works fine, but when I try to use my own images it fails. It also fails when I pull images from Docker Hub and try to push them to my Azure registry.
Of course, the images are correctly tagged and the CLI connection works fine.
I'm also following another Azure course in which I build the image with GitHub Actions (https://learn.microsoft.com/en-us/azure/aks/kubernetes-action). It also works great on the repo of that course, but once I try it with my own projects, it fails. This time the error is about the URL / the credentials.
After investigating, I'm sure that the credentials are correct, but the URL may be wrong, because it never gets created. That's why I was trying to push manually in the first place.
EDIT: I managed to make it work by changing the Wi-Fi network I was using, but I still don't understand how that is possible, why it doesn't work on GitHub Actions, and what I should change in my configuration to make it work with the original Wi-Fi again.
I tried to reproduce the same issue in my environment and got the output below.
I created the Dockerfile and added the following:
vi Dockerfile
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
I built the image using the command below:
docker build -t my-apache2 .
I ran the image (by its ID) using the command below:
docker run -d -p 80:80 image_id
I created the container registry.
After creating the registry, we should enable the access keys; otherwise we will not be able to pull the image into container instances.
I logged in to the registry server:
docker login login_server
Username:XXXX
password:XXXXX
After that succeeded, I tagged the image and pushed it into the container registry:
docker tag image_name login_server/image_name
docker push login_server/image_name
Here we can find the pushed image under the registry's repositories.
I created the container instance. While creating it, we have to set the image source to the container registry; only then will we get the pushed image.
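For anyone doing this step from the CLI instead of the portal, a sketch of the equivalent command (the resource group, container name, and credentials below are placeholders matching the examples above):
az container create --resource-group my-rg --name my-apache2 --image login_server/image_name --registry-username XXXX --registry-password XXXXX --ports 80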
I've pushed an image (which is a version of R plus some libraries) to my private Azure Container Registry. How can I build an image starting from this image?
In other words, I want to do FROM registry/env:version, but I'm pretty sure that I need to use other settings to access my repository.
Thanks for the help!
You should log your Docker daemon in to your Azure Container Registry using the following command: docker login myregistry.azurecr.io --username $SP_APP_ID --password $SP_PASSWD
Then, using the fully qualified path for your image in the Dockerfile should work automatically, as long as the identity provided in the first step (the login) has the rights to pull this image.
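For example, a minimal Dockerfile on top of the private base image might look like this (the registry name, repository, and tag are placeholders, and I'm assuming R is on the PATH in that image):
FROM myregistry.azurecr.io/env:version
RUN R -e "install.packages('data.table')"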
Sorry, I'm trying to figure out your answer. I'm trying to pull a Docker image from my Azure Container Registry, build on top of it, and push it back to a new repository. I'm starting my Dockerfile as:
FROM xxxxxx.azurecr.io/php-7.4:latest AS compiled
How do I configure the Docker daemon in the Azure Pipelines world?
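For what it's worth, in Azure Pipelines the usual equivalent of docker login is the Docker@2 task with a service connection to the registry; a sketch, where the service connection name is a placeholder:
steps:
- task: Docker@2
  inputs:
    command: login
    containerRegistry: my-acr-service-connection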
The application was using the Docker CLI to build and then push an image to Azure Container Registry. This used to work fine on Kubernetes, using a Python module and docker.sock, but since the cluster was upgraded, the Docker daemon is gone. I'm guessing the K8s backend no longer uses Docker or has it installed. Also, since Docker support is going away in Kubernetes (I think it was 1.24), I want to get away from counting on Docker for the build.
So the application, when it was working, was a Python application running in a Docker container. It would take the Dockerfile, build it, and push the result to Azure Container Registry. There are files that get pushed into the image via the Dockerfile, and they all exist in the same directory as the Dockerfile.
Does anyone know of different methods to achieve this?
I've been looking at Azure ACR Tasks, but I'm not really sure how all the files get copied over to a task, and I have not been able to find any examples.
I can confirm that running an Azure ACR Task (multi-step task or quick task) will copy the files over when the command is executed. We're using Azure ACR quick tasks to achieve something similar. If you're just trying to do the equivalent of docker build and docker push, quick tasks should work fine for you too.
For simplicity I'm going to walk through the example for a quick task, because that's what I've used the most. Try the following steps from your local machine to see how it works. The same steps should also work from any other environment, provided the machine is authenticated properly.
First make sure you are in the Dockerfile's directory, and then:
Authenticate to the Azure CLI using az login
Authenticate to your ACR using az acr login --name myacr.
Replace the values accordingly and run az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 .
Your terminal should show all of the steps that the Dockerfile is executing. Alternatively, you can head over to your ACR and look under Services > Tasks > Runs; you should see every line of the Docker build task appear there.
Note: if you're running this task in an automated fashion and also require access to internal/private resources during the image build, you should consider creating a Dedicated Agent Pool and deploying it in your VNET/SNET, instead of using the shared/public Agent Pools.
In my case, I'm using Terraform to run the az acr build command, and I can see the Dockerfile execute its COPY commands without any issues.
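In case it's useful, a rough sketch of how that can be wired up in Terraform (the resource name is arbitrary, and this simply shells out to the Azure CLI, so az must already be authenticated on the machine running Terraform):
resource "null_resource" "acr_build" {
  provisioner "local-exec" {
    working_dir = path.module
    command     = "az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 ."
  }
}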
Problem:
We are trying to run a self-hosted agent on my Windows 10 (Enterprise) machine using the Docker container approach explained in the article. We can create the Docker image successfully (for Windows) as explained in the article, but when executing the created image with the run command we get the error below. We tried to Google it but didn't find any resolution.
Error:
Determining matching Azure Pipelines agent...
Invoke-RestMethod : The remote name could not be resolved: 'dev.azure.com'
Steps Followed:
Installed Docker Engine on my Windows 10 laptop
Followed the instructions in the aforementioned article and was able to create the Docker image with the docker build command
But when running the command below to run the created image, we get the above error.
docker run -e AZP_URL="https://dev.azure.com/MyOrg/" -e AZP_TOKEN="XXXXXXXXXXXXXXXXXXXXXXXXXX" -e AZP_AGENT_NAME="LocalSelfHostTest1" -e AZP_POOL="LocalSelfHostTest" dockeragent:latest
XXXXXXXXX is the PAT generated for my project.
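The error itself looks like a DNS failure inside the container: it cannot resolve dev.azure.com. A quick, hypothetical way to check name resolution from the same image, assuming its entrypoint can be overridden like this:
docker run --rm --entrypoint powershell dockeragent:latest -Command "Resolve-DnsName dev.azure.com"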
We’ll appreciate your help.
Regards
arvind
I had to perform these steps to deploy my Node.js/Angular site to AWS via DockerCloud:
Write Dockerfile
Build Docker images based on my Dockerfiles
Push those images to Docker Hub
Create Node Cluster on DockerCloud Account
Write Docker stack file on DockerCloud
Run the stack on DockerCloud
See the instance running in AWS, where I can see my site
Now suppose a small change requires a pull from my project repo,
but we have already deployed our Docker containers, as you know.
What is the best way to pull those changes into the Docker containers that are already deployed?
I hope we don't have to:
Rebuild our Docker Images
Re-push those images to Docker Hub
Re-create our Node Cluster on DockerCloud
Re-write our docker stack file on DockerCloud
Re-run the stack on DockerCloud
I was thinking:
SSH into a VM that has the Docker running
git pull
npm start
Am I on the right track?
You can use docker service update --image (see https://docs.docker.com/engine/reference/commandline/service_update/#options).
I have no experience with AWS, but I think you can build and update automatically.
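A sketch of what that looks like (the image and service names are placeholders); this rolls the running service over to the new image without recreating the stack:
docker service update --image myhubuser/myimage:newtag my_stack_my_service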
If you want to treat a Docker container as a VM, you totally can; however, I would strongly caution against this. Anything in a container is ephemeral: if you make changes to files in it and the container goes down, it will not come back up with those changes.
That said, if you have access to the server, you can exec into the container and execute whatever commands you want. This is usually helpful for dev, but it's applicable to any container.
This command will start an interactive bash session inside your desired container. See the docs for more info.
docker exec -it <container_name> bash
Best practice would probably be to update the docker image and redeploy it.