With a self-hosted agent, I'm getting the following error at the end of a VSTS build pipeline.
2018-10-04T12:11:10.3334402Z ##[error]unauthorized: authentication required
2018-10-04T12:11:10.3447576Z ##[error]/snap/bin/docker failed with return code: 1
The point is that I want to push a Docker image to Azure Container Registry using the Docker extension.
When I perform exactly the same steps on my Ubuntu machine, where the agent builds, just with plain Docker commands, the push succeeds: the build completes and the image is pushed to the registry.
How can I authenticate Docker for my build agent so that it can push?
Related
I have a dedicated server with a private Docker registry set up, so I can push and pull images. I can connect to this server via docker login <my_domain>. I need to build an image using Azure Pipelines and push it to my registry, but when I try to create a Docker connection, there is no way to access the private registry: only Docker Hub, Azure Container Registry, and "other", which still requires a Docker ID and password. Is there a way to connect Azure to my registry?
OK, I figured it out myself. It's not possible in that task, but you can change the task from "Docker build" to "Bash script", do docker login <your_domain>:<port> -u <your_username> -p <your_password>, and then run anything you want; here, specifically, docker build.
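A minimal sketch of such a script step, assuming hypothetical registry host, port, and image names (piping the password via --password-stdin keeps it out of the shell history and process list):

# Placeholder registry host/port and image name; adjust to your setup.
echo "$REGISTRY_PASSWORD" | docker login myregistry.example.com:5000 -u "$REGISTRY_USER" --password-stdin
# Once authenticated, run whatever you need, e.g. build and push.
docker build -t myregistry.example.com:5000/myapp:latest .
docker push myregistry.example.com:5000/myapp:latest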
I am following the tutorial located here. I am able to get a self-hosted agent running in a Docker container. After the agent is running, I am able to run jobs on it in a pipeline, but only while the container is running. I would like to keep this Docker container build agent running as a service, so I don't have to start it up each time I execute a pipeline. Any advice on how to configure a Docker container build agent to keep running continuously would be helpful.
I am able to run jobs on it in a pipeline only while the container is running.
The agent in Docker runs "as a service" by default; you just need to make sure the container keeps running, otherwise the agent will not run.
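A minimal sketch of keeping the container up, assuming you built the agent image per the tutorial (the image name azp-agent:linux and the agent name are placeholders; AZP_URL, AZP_TOKEN, and AZP_AGENT_NAME are the environment variables the tutorial's start script reads). Docker's restart policy restarts the container, and therefore the agent, after crashes and host reboots:

# Placeholder image and agent names; adjust to match your tutorial build.
docker run -d --restart unless-stopped \
  -e AZP_URL="https://dev.azure.com/yourorg" \
  -e AZP_TOKEN="$AZP_TOKEN" \
  -e AZP_AGENT_NAME="docker-agent-1" \
  --name azp-agent \
  azp-agent:linux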
I have a GitLab server from the company where the project and the pipeline are configured. By default, every time a commit is made, the pipeline executes on the GitLab server.
I have my own VM, which is completely separate from GitLab. I want the pipeline to be executed on my VM instead of on the GitLab server. What should I do so that the pipeline runs on the VM and not on the GitLab server?
I have configured the following runner in config.toml that is located in $MYPROJECT/:
[[runners]]
name = "Project-name"
url = "https://gitlab.server/"
token = "TOKEN ID"
executor = "shell"
shell = "bash"
There are things that I don't understand.
If I want to execute the pipeline on my VM, should I install GitLab Runner on the VM [1]?
Should I have the project source code on the VM so that it can read the config.toml file every time there is a commit?
If I register the runner with the token key on the GitLab server, how does the GitLab server know that the pipeline is to be executed on the VM and not on the server [2]?
Should I use the docker executor or the shell executor to execute the pipeline on the VM?
[1] https://docs.gitlab.com/runner/install/linux-manually.html
[2] https://docs.gitlab.com/runner/register/#registering-runners
To run a job on a machine, you need a GitLab Runner installed on that machine and registered with the GitLab server.
The project source code is fetched automatically before every run, so you don't need to keep it on the VM.
You can use a tag (e.g. "MyVM") when registering the runner. Then you can set the same tag on your job so that the job is only executed by this runner (see the registration sketch after this list). See: https://docs.gitlab.com/ee/ci/runners/configure_runners.html#use-tags-to-control-which-jobs-a-runner-can-run
You need the docker executor if you want your jobs to run in Docker containers on your VM (Docker has to be installed there first). Otherwise, use the shell executor.
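A minimal registration sketch for the VM, assuming a shell runner tagged "MyVM" (the URL, token, and description are placeholders matching the question's config.toml); in .gitlab-ci.yml you would then give the job tags: [MyVM] so that only this runner picks it up:

# Run on the VM after installing gitlab-runner; values are placeholders.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.server/" \
  --registration-token "TOKEN" \
  --executor "shell" \
  --tag-list "MyVM" \
  --description "Project-name"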
The application was using the Docker CLI to build and then push an image to Azure Container Registry. It used to work fine on Kubernetes, using a Python module and docker.sock, but since the cluster was upgraded the Docker daemon is gone; I'm guessing the Kubernetes backend no longer uses Docker or has it installed. Also, since Docker support is going away in Kubernetes (I think in 1.24), I want to get away from depending on Docker for the build.
When it worked, the application was a Python application running in a Docker container. It would take the Dockerfile, build it, and push the image to Azure Container Registry. The files that get copied into the image via the Dockerfile all exist in the same directory as the Dockerfile.
Anyone know of different methods to achieve this?
I've been looking at Azure ACR Tasks, but I'm not really sure how all the files get copied over to a task, and I haven't been able to find any examples.
I can confirm that running an Azure ACR Task (Multi-Task or Quick Task) will copy the files over when the command is executed. We're using Azure ACR Quick Tasks to achieve something similar. If you're just trying to do the equivalent of docker build and docker push, Quick Tasks should work fine for you too.
For simplicity, I'm going to list the example for a Quick Task, because that's what I've mostly used. Try the following steps from your local machine to see how it works. The same steps should also work from any other environment, provided the machine is authenticated properly.
First make sure you are in the Dockerfile directory and then:
Authenticate to the Azure CLI using az login
Authenticate to your ACR using az acr login --name myacr.
Replace the values accordingly and run az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 .
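Put together, a minimal sketch (myacr, myacr_rg, and the image name are placeholders). The trailing dot is the build context: az acr build packages this directory, honoring .dockerignore, and uploads it to ACR, which is how the files next to the Dockerfile become available to COPY/ADD inside the task:

# Run from the directory containing the Dockerfile.
az login                    # authenticate the Azure CLI
az acr login --name myacr   # authenticate against the registry
# The trailing "." uploads this directory as the build context.
az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 .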
Your terminal should already show all of the steps that the Dockerfile is executing. Alternatively, you can head over to your ACR and look under Services > Tasks > Runs. You should see every line of the Docker build task appear there.
Note: If you're running this task in an automated fashion and also require access to internal/private resources during the image build, you should consider creating a Dedicated Agent Pool and deploying it in your VNET/SNET, instead of using the shared/public Agent Pools.
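As a sketch, with hypothetical pool and subnet names, a dedicated agent pool can be created and targeted like this (az acr agentpool is a preview feature, so check your CLI version):

# Create a dedicated agent pool inside your own subnet (names are placeholders).
az acr agentpool create \
  --registry myacr \
  --name myagentpool \
  --tier S1 \
  --subnet-id "/subscriptions/<sub-id>/resourceGroups/myacr_rg/providers/Microsoft.Network/virtualNetworks/myvnet/subnets/mysubnet"
# Run the build on that pool instead of the shared/public pool.
az acr build --registry myacr --agent-pool myagentpool --image myimage:v1.0 .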
In my case, I'm using Terraform to run the az acr build command, and you can see that the Dockerfile executes the COPY commands without any issues.
This is the output I get by clicking "show complete raw", because the normal view is just one black line (no log inside).
I tried what https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/ci/ssh_keys/README.md says, but with no luck.
I have a server with two Docker containers: one with GitLab, the other with gitlab-runner. I have tested with a shared runner and with a specific runner (like the code I show below). In both cases the result is the same: it fails because it cannot fetch my repository.
It only works (with both shared and specific runners) if I set my repo to public (neither private nor internal), and I would like to work with a private repo.
Running with gitlab-runner 10.1.0 (c1ecf97f)
  on runner-myrepo-js (900c71a4)
Using Docker executor with image node:latest ...
Using docker image sha256:46da793b60ee30f8df451729e315f531f0acd24a92c8fba10401513530beff99 for predefined container...
Pulling docker image node:latest ...
Using docker image node:latest ID=sha256:cbea0ebe4f3cf07daecf22b196ba6b5d76d1fe1b6548098e5bc9dd9b0736f262 for build container...
Running on runner-900c71a4-project-32-concurrent-0 via a16b07f5f84b...
Cloning repository...
Cloning into '/builds/gitlab/myuser/myrepo-js'...
remote: HTTP Basic: Access denied
fatal: Authentication failed for 'https://example.com/gitlab/myuser/myrepo-js.git/'
ERROR: Job failed: exit code 1
What should I do?
I have both GitLab and gitlab-runner at version 10.
We were struggling with this today, and it was definitely an SSL problem:
Our frontend used SSL certificates, but the backend (GitLab) used plain HTTP (behind a proxy). This broke the real-time job logs and the runners' access to private repos.
Adding the certificates and forcing HTTPS on the backend solved the problem.
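For an Omnibus GitLab install, a minimal sketch of that change (the URL is a placeholder for your public HTTPS endpoint):

# In /etc/gitlab/gitlab.rb, set the public HTTPS URL that runners will use:
#   external_url "https://example.com/gitlab"
# Then apply the configuration:
sudo gitlab-ctl reconfigure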