What's the difference between the VSTS Docker build tasks?

What's the difference between the VSTS Docker build task marked "(preview)" and the one without?
The description says the 'red' ones can be used with Docker or Azure Container Registry; is that the only difference?
Or could they differ by Docker/Compose version or environment (e.g., one for Windows, one for Linux)?

Based on their source code, the difference between them is added support for Azure Container Registry; the Docker registry connection handling is the same.
You can set up a private build agent, add these tasks to a build definition, and queue a build with that agent; the tasks will then be downloaded to the agent's tasks folder (e.g. _work/_tasks), where you can inspect them.

Bitbucket Cloud: Can I use the self-hosted runner Docker image as a base and augment it?

NOTE: I'm an embedded programmer, so devops stuff is mildly mysterious to me and I might be using the wrong terms.
When creating my Bitbucket self-hosted runners, do I HAVE to use Docker-in-Docker, or can I take the self-hosted runner container image and add my required tools and licenses to it?
i.e. the docker command it gives me when I create a self-hosted runner references docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner. Can I just create my own Dockerfile that uses that image as a base, add my software packages, environment variables, etc., and invoke that instead of the original one?
Or do I necessarily need to do Docker-in-Docker?
As I mentioned at the beginning, a lot of the devops stuff is just what google/stackexchange tells me to do, and thus vaguely cargo-cultish. Getting credentials and other stuff from the self-hosted runner image into my Docker-in-Docker image (without building credentials into the image) seems like more work to me.
Thanks for any insight
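If layering on top of the runner image works for your setup, a minimal Dockerfile sketch might look like the following. Everything below the FROM line is an assumption for illustration: the package names are placeholders for an embedded toolchain, and whether the Atlassian base image uses apt at all is untested here.

```dockerfile
# Hypothetical: extend the Bitbucket runner image with your own tools.
FROM docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:latest

# Placeholder packages; assumes the base image is Debian/Ubuntu-based.
RUN apt-get update && apt-get install -y gcc-arm-none-eabi make \
    && rm -rf /var/lib/apt/lists/*

# Bake in non-secret configuration; keep credentials out of the image.
ENV MY_TOOL_LICENSE_PATH=/opt/licenses/tool.lic
COPY licenses/ /opt/licenses/
```

You would then start your derived image with the same runner arguments (account/runner UUIDs, OAuth credentials) that the original docker command passes, rather than the stock image.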

Azure DevOps deployment pipeline - how do I find value to use for Linux image name?

I have an Azure DevOps pipeline for a .NET Core app. The pipeline currently specifies ubuntu-latest as the Linux image name to use when building the Docker image. I want to change it to the latest version of Alpine. Is there a reference anywhere to the values that can be used? And what is that value actually referring to? Is it a reference name for a Linux image in the Docker Container Registry?
Actually, I'd misunderstood the purpose of vmImageName: I thought it was specifying the Linux image that would be used by the deployed container. It actually specifies a name from the following table, which determines the Linux version used by the Azure DevOps build agent VM. It's a way of specifying a Microsoft-hosted agent for the build, one that is automatically updated and maintained by Microsoft. The values come from the following table:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#use-a-microsoft-hosted-agent
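For context, the value appears in the pipeline YAML's pool section; a minimal fragment (the comment reflects the corrected understanding above):

```yaml
# azure-pipelines.yml (fragment)
pool:
  # OS of the Microsoft-hosted build agent VM, NOT the container being built
  vmImage: 'ubuntu-latest'
```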
To change the OS used by the Docker container when building the image, I had to change the tags for the .NET Core SDK and runtime images specified in my Dockerfile, as follows:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-alpine3.12 AS build
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine3.12 AS runtime
The full tag listings, describing how to specify which OS will be used by the .NET Core images, are found on both of the following pages under the heading "Full Tag Listing":
Runtime: https://hub.docker.com/_/microsoft-dotnet-aspnet
SDK: https://hub.docker.com/_/microsoft-dotnet-sdk
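In context, those two FROM lines sit in a multi-stage Dockerfile. A sketch for a hypothetical project (the project name, paths, and publish output are placeholders, not taken from the question):

```dockerfile
# Build stage: Alpine-based SDK image compiles and publishes the app.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-alpine3.12 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: the smaller ASP.NET Core runtime image runs the output.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine3.12 AS runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Changing both tags together keeps the SDK used for the build and the runtime shipped in the final image on the same OS.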

Deploying docker images

I have a Node.js server app and a separate React client app.
I have created Docker images for both, and a docker-compose file at the top level to build and run both.
I'm struggling to understand how I can deploy/host these somewhere.
Do I deploy both images separately to a Docker registry? Or is there a way of hosting this on its own as an entire Docker container?
If you've already built the Docker images locally, you can use Docker Hub for hosting them. If you're using GitHub Actions, this gist script can be helpful.
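Pushing locally built images to Docker Hub follows the usual tag-and-push flow; the image and account names below are placeholders:

```shell
# Tag the locally built images under your Docker Hub account (placeholder names).
docker tag server-app myuser/server-app:1.0
docker tag client-app myuser/client-app:1.0

# Log in and push; the images can then be pulled from any host.
docker login
docker push myuser/server-app:1.0
docker push myuser/client-app:1.0
```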
A Docker registry is storage for built images. Think of it as the location for compiled "binaries", if you compare it to regular software.
Typically you have some kind of CI for your source code, and when you trigger it, for example by committing to the 'master' branch, a new image is built on the CI. The CI can push it into a registry for long-term storage, or push it directly to your hosting server (or to a registry on your server).
You can configure your docker-compose file to pull the latest images from a private registry when you rerun it on your server.
Basically, hosting happens when you run docker-compose up on some server, once you have done the required configuration. It really depends on where you are going to host them.
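A docker-compose file on the hosting server that pulls prebuilt images from a registry might look like this; the registry host, image names, and ports are all hypothetical:

```yaml
# docker-compose.yml on the hosting server (names are placeholders)
version: "3"
services:
  server:
    image: registry.example.com/myuser/server-app:latest
    ports:
      - "3000:3000"
  client:
    image: registry.example.com/myuser/client-app:latest
    ports:
      - "80:80"
```

Running `docker-compose pull && docker-compose up -d` then fetches the newest tags and restarts the containers.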
Maybe helpful:
https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/
https://medium.com/@stoyanov.veseline/pushing-docker-images-to-a-private-registry-with-docker-compose-d2797097751

Configuring Node.JS container in Rancher

I am attempting to deploy my first workload with rancher.
I am trying to edit the existing default rancher workload, after getting the rancher hello world example working.
I changed the docker image to node:10 and the port to 8080. I am not sure if I am able to do this directly from rancher, or if I need to create a docker image with my user in docker hub to do this.
I would like to have a generic image, and then add some additional configuration to rancher, so I can reuse these settings for other node.js projects.
I would like a base node.js container, and then add a parameter to checkout a specific branch of a specific project whenever the container boots for example. I am planning on getting this integrated with teamcity to deploy to the rancher containers whenever teamcity detects a new commit.
Doing this in stages, I would like to get a node:10 container within rancher up and running. Can this be done by simply adding node:10 as the image and setting the default port in the add port section? If so, what is the default port to use?
I have tried the above, but I haven't been able to get the container to load; I get connection refused when I try to access it.
Yes, you can have different images; many projects use this pattern.
For example you can check this repo: https://github.com/rocker-org/rocker
The r-devel image is based on the r-base image:
https://github.com/rocker-org/rocker/blob/master/r-devel/Dockerfile#L4
This functionality is not specific to Rancher. Once you have your containers packaged according to your needs, you can use Rancher to run them.
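For the "generic image plus per-project configuration" idea from the question, one sketch is a base image whose entrypoint clones a branch named by environment variables, which Rancher can then set per workload. Everything here is a hypothetical illustration, not a Rancher feature:

```dockerfile
# Hypothetical base image: clones and starts a Node.js project at container boot.
FROM node:10

# Ensure git is available (node:10 is Debian-based).
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*

# Override these per workload, e.g. in Rancher's environment variable settings.
ENV REPO_URL=""
ENV REPO_BRANCH="master"

WORKDIR /app
ENTRYPOINT ["sh", "-c", "git clone --branch \"$REPO_BRANCH\" \"$REPO_URL\" src && cd src && npm install && npm start"]
```

Note that a bare node:10 container with no command exits immediately, so giving the image something long-running to do (like the entrypoint above, assuming the project defines an npm start script listening on your chosen port) is what keeps the workload up.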

What is gitlab runner

I think I'm fundamentally missing something. I'm new to CI/CD and trying to set up my first pipeline ever with gitlab.
The project is a pre-existing PHP project.
I don't want to clean it up just yet; at the moment I've pushed the whole thing into a Docker container, and it's running fine, talking to Google Cloud's MySQL databases etc. as it should, both locally and on a remote Google Cloud testing VM.
The dream is to be able to push to the development branch, then merge the dev branch into the test branch, which then TRIGGERS automated tests (easy part) and also causes the remote test VM (hosted on Google Cloud) to PULL the newest changes, rebuild the image from the latest Dockerfile (or pull the latest image from the GitLab image registry)... and then rebuild the container with the newest image.
I'm playing around with GitLab's runner, but I'm not understanding what it's actually for, despite looking through almost all the online content about it.
Do I just install it on the Google Cloud VM, and then, when I push to GitLab from my development machine, the repo will 'signal' the runner (which is running on the VM) to execute a bunch of scripts (which might include a git pull of the newest changes)?
Because I already pre-package my app into a container locally (and push the image to the image registry), do I need to use docker as my executor on the runner? Or can I just use shell and run the commands directly?
What am I missing?
TLDR and extra:
Questions:
What is the runner actually for?
Where is it meant to be installed?
Does it care which directory it is run in?
If it doesn't care which directory it's run in,
where does it execute its script commands? At root?
If I am locally building my own images and uploading them to GitLab's registry,
do I need to set my executor to docker? Shouldn't I just set it to shell, pull the image, and build it? (Assuming the runner is running on the remote VM.)
What is the runner actually for?
You have your project along with a .gitlab-ci.yml file. .gitlab-ci.yml defines what stages your CI/CD pipeline has and what to do in each stage. This typically consists of build, test and deploy stages. Within each stage you can define multiple jobs. For example, in the build stage you may have 3 jobs to build on Debian, CentOS and Windows (in GitLab glossary: build:debian, build:centos, build:windows). A GitLab runner clones the project, reads the .gitlab-ci.yml file, and does what it is instructed to do. So basically a GitLab runner is a Go process that executes the instructed tasks.
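A minimal .gitlab-ci.yml along those lines might look like this; the job names and script commands are illustrative, not taken from the project in the question:

```yaml
# .gitlab-ci.yml (illustrative)
stages:
  - build
  - test
  - deploy

build:debian:
  stage: build
  script:
    - make build        # placeholder build command

test:unit:
  stage: test
  script:
    - make test         # placeholder test command

deploy:prod:
  stage: deploy
  script:
    - ./deploy.sh       # placeholder deploy script
  only:
    - master
```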
where is it meant to be installed?
You can install a runner in your desired environment; the supported options are listed here: https://docs.gitlab.com/runner/install/
or
you can use a shared runner that is already installed on GitLab's infrastructure.
Does it care which directory it is run in?
Yes. Every task executed by the runner runs relative to CI_PROJECT_DIR, defined in https://gitlab.com/help/ci/variables/README. But you can alter this behaviour.
where does it execute it's script commands? At root?
Do I need to set my executor to docker? Shouldn't I just set it to shell, pull the image, and build it?
A runner can have multiple executors, such as docker, shell, virtualbox etc., with docker being the most common one. If you use docker as the executor you can pull any image from Docker Hub or your configured registry, and you can do loads of stuff with docker images. In a docker environment, commands normally run as the root user.
https://docs.gitlab.com/runner/executors/README.html
See the GitLab access logs: the runner is constantly polling the server.