How would you set up a default Dockerfile per project with an Azure pipeline to handle them?

As of now I have a simple solution structure (WebAPI projects, which are going to be microservices) with the default generated Dockerfiles for each project in the solution, like:
Solution X
| Project A
| | Dockerfile
| Project B
| | Dockerfile
| Project C
| | Dockerfile
| Project D
| | Dockerfile
| azure-pipeline.yml
From the development and debugging point of view everything works (using "Docker" as the launcher), but after creating the first pipeline for "Project A" with the Azure wizard, my build always fails at a COPY instruction in the build step:
COPY ["Project A/ProjectA.csproj", "Project A/"]
The error from the pipeline run is:
COPY failed: stat /var/lib/docker/tmp/docker-builder196561826/Project A/ProjectA.csproj: no such file or directory
##[error]COPY failed: stat /var/lib/docker/tmp/docker-builder196561826/Project A/ProjectA.csproj: no such file or directory
I'm not an expert in Docker or Azure, but I guess I'm setting up this solution the wrong way to accomplish this.
What could be a better setup or fix?

no such file or directory
This is a very common error people encounter after migrating a Docker project from Visual Studio to Azure DevOps, even when the Docker build succeeds locally.
It is caused by the different working directories used by Visual Studio (locally) and Azure DevOps. Locally, Docker runs at the repository/solution level. In Azure DevOps CI, however, Docker runs in the directory where the Dockerfile lives, i.e. at the project level. The relative paths that work fine locally therefore no longer resolve in Azure DevOps.
I guess you may not want to change your Dockerfile, so you just need to specify the build context in the Docker task:
Set the Docker 2.* task's Build context argument to $(Build.Repository.LocalPath).
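In the YAML view that could look roughly like the following sketch (the image name and tag are placeholders; the important bit is the buildContext input pointing at the repository root):
# sketch only: adjust names/tags to your pipeline
- task: Docker@2
  displayName: Build Project A image
  inputs:
    command: build
    repository: projecta
    Dockerfile: 'Project A/Dockerfile'
    buildContext: '$(Build.Repository.LocalPath)'
    tags: '$(Build.BuildId)'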
Check my previous answer.

Related

Want Jenkins pipeline script to create docker container with test database, test against it, destroy container

I've created a git repo for application (A) that contains a Dockerfile and docker-compose.yml that stands up a postgres database and creates and populates some tables. I use this as a support app for testing purposes during development as a disposable database.
I'd like to use this docker app in a Jenkins pipeline for testing my main application (B), which is a NodeJS app that reads and writes to the database. Application B is also in git and I want to use a Jenkins pipeline to run its tests (written in Mocha). So my overall pipeline logic would be something like this:
Triggering Event:
Code for application B is pushed to some branch (feature or master) in git.
Pipeline:
git checkout code for Application B (implicit)
git checkout code for Application A (explicitly)
cd to Application A directory:
docker-compose up -d // start postgres container
cd to Application B directory:
npm install
npm run test (kicks off my Mocha tests, which expect a Postgres DB at localhost:5432)
cd to Application A directory
docker-compose down // destroy postgres container
// if tests pass, deploy application B
I'm trying to figure out the best way to structure this. I'm really checking out code from two repos: The one I want to test and build, and another repo that contains a "support" application for testing, essentially mocking my real database.
Would I use a scripted or a declarative pipeline?
The pipeline operates in a workspace directory for application B, which is implicitly checked out when the pipeline is triggered. Do I just check out the code for Application A within this workspace and run the docker commands on it?
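One way to picture it: a declarative pipeline along these lines (this is a rough sketch, not a definitive answer; the repository URL, branch, and directory names are placeholders, and the checkout of Application B is left implicit):
// sketch of a declarative Jenkinsfile for Application B
pipeline {
    agent any
    stages {
        stage('Checkout support app') {
            steps {
                // clone Application A next to the implicitly checked-out Application B
                dir('app-a') {
                    git url: 'https://example.com/your-org/application-a.git', branch: 'master'
                }
            }
        }
        stage('Start test database') {
            steps {
                dir('app-a') {
                    sh 'docker-compose up -d'
                }
            }
        }
        stage('Run tests') {
            steps {
                sh 'npm install'
                sh 'npm run test'
            }
        }
    }
    post {
        always {
            // tear the database down even if the tests fail
            dir('app-a') {
                sh 'docker-compose down'
            }
        }
    }
}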

Gitlab CI Web Deployment

We are currently moving away from our deployment provider, Beanstalk, which is great, but we are on the top tier and keep running out of space or hitting our repository limits. We are moving away, so please do not suggest another SaaS provider.
I personally use GitLab for my own projects and a few company projects, and it's amazing. We use a self-hosted version on a local server in our company building.
We have CI set up and are currently using the following deployment code (I have trimmed it down to just the development deployment). It uses the shell executor for deploying, as we deploy to an existing Linux server.
variables:
  HOSTNAME: '<hostname>'
  USERNAME: '<username>'
  PASSWORD: '<password>'
  PATH_DEV: '/path/to/www'

# Define the stages (we can add as many as we want)
stages:
  # - build
  - deploy

# The code for development deployment
deploy_dev:
  stage: deploy
  script:
    - echo "Deploying to development environment..."
    - rm .gitlab-ci.yml
    - rsync -urltvz --filter=':- .gitignore' --exclude=".git" -e "sshpass -p"$PASSWORD" ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" * $USERNAME@$HOSTNAME:$PATH_DEV
    - echo "Finished deploying."
  environment:
    name: Development
    url: http://dev.domain.com
  only:
    - envdev
The Problem:
When we use the above code to deploy, it's perfect and works really well; it deploys all the code after optimisation etc. But we have found a little bug.
When you delete a file, the rsync command will not delete it from the server. I did some searching and found the --delete flag you can add, and it worked, but it deleted all the user-uploaded content as well. So I added the .gitignore into the filtering, so rsync would ignore the files listed there (which are usually user-generated), configuration files, and/or libraries (npm, etc.). This was fine until users started uploading files through the media manager in our framework, which stores them in a folder that is not in the .gitignore file and can't be, because that folder also contains other files we add ourselves so they're editable by the user. So now I am unsure how to manage this.
What we are looking for is a CI setup that uploads only file changes to the server: it would look through the latest commits, find the files that have changed, and push only those files up. Of course I would still like to do this with GitLab CI, so any ideas, examples or tutorials would be amazing.
Thanks in advance.
~ Danny
Maybe this helps: https://github.com/banago/PHPloy
It looks like this tool is designed for PHP projects, but I think it can be used for other web deployments too.
how it works:
PHPloy stores a file called .revision on your server. This file contains the hash of the commit that you have deployed to that server. When you run phploy, it downloads that file and compares the commit reference in it with the commit you are trying to deploy to find out which files to upload. PHPloy also stores a .revision file for each submodule in your repository.
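If you want to stay with plain GitLab CI instead, here is a hedged sketch of the same idea in your .gitlab-ci.yml: keep a .revision marker on the server and rsync only the files git reports as changed. It reuses the variables from your config, assumes the runner's clone still contains the last deployed commit, and deleted files would still need separate handling:
deploy_dev:
  stage: deploy
  script:
    # read the last deployed commit from the server
    - export LAST_DEPLOYED=$(sshpass -p"$PASSWORD" ssh -o StrictHostKeyChecking=no $USERNAME@$HOSTNAME "cat $PATH_DEV/.revision")
    # list files changed between that commit and the current one
    - git diff --name-only "$LAST_DEPLOYED" "$CI_COMMIT_SHA" > changed_files.txt
    # upload only those files
    - rsync -urltvz --files-from=changed_files.txt -e "sshpass -p"$PASSWORD" ssh -o StrictHostKeyChecking=no" . $USERNAME@$HOSTNAME:$PATH_DEV
    # record the newly deployed commit on the server
    - sshpass -p"$PASSWORD" ssh -o StrictHostKeyChecking=no $USERNAME@$HOSTNAME "echo $CI_COMMIT_SHA > $PATH_DEV/.revision"
  only:
    - envdev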

Having nested pipelines in the same repository

I am working on a microservices project. Each service has its own pipeline because it gets deployed to its own server, and each project lives in its own repository on GitLab with its own .gitlab-ci.yml. But I want to collect all of these services in a single repository to make them easier to maintain and to trigger a deployment of all the services when a commit is pushed.
The issue is that I don't want one big fat YAML file that contains the build & deployment process for every service; instead I want to keep the YAML files in the service folders and have a YAML file at the root that references them, i.e.:
| service1
| | service1-code
| | .gitlab-ci.yaml   << build process for service1
| service2
| | service2-code
| | .gitlab-ci.yaml   << build process for service2
| .gitlab-ci.yaml     << reference to service1/yaml & service2/yaml
Is that doable?
There is currently no way for GitLab to do this, and there is an open issue to add this feature for monorepos.
(...) keep the yaml files in the services folders and have a yaml file on the root that references them
Just found this comment on GitLab by robindegen.
We create separate repositories, and a parent main repository that has them as submodules. This works just fine for doing CI on subsets. When we push to master on the main repo (to update the submodules), a full CI run is done on everything, including integration tests.
I reckon a CI clone includes submodules, so this would just work. So if you already have a repo per project: have your cake and eat it too!
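A rough sketch of what the parent repository's .gitlab-ci.yml could look like with that submodule setup (the build.sh calls are placeholders for whatever each service's own pipeline would run):
variables:
  GIT_SUBMODULE_STRATEGY: recursive   # have the runner clone the submodules as well

stages:
  - build

build_service1:
  stage: build
  script:
    - cd service1
    - ./build.sh   # placeholder for service1's real build/deploy steps

build_service2:
  stage: build
  script:
    - cd service2
    - ./build.sh   # placeholder for service2's real build/deploy steps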

GitLab CI/CD pull code from repository before building ASP.NET Core

I have GitLab running on computer A, a development environment (Visual Studio Pro) on computer B, and Windows Server on computer C.
I set up GitLab-Runner on computer C (Windows server). I also set up .gitlab-ci.yml file to perform build and run tests for ASP.NET Core application on every commit.
I don't know how I can get the code onto computer C (the Windows server) so I can build it (dotnet msbuild /p:Configuration=Release "%SOLUTION%"). It bothers me that not a single example .gitlab-ci.yml I found on the net pulls code from GitLab before building the application. Why?
Is this the correct way to set up CI/CD:
User creates a pull request (a new branch is created)
User writes code
User commits code to the branch from computer B.
GitLab runner is started on computer C.
The runner needs to pull the code for the current branch (CI_COMMIT_REF_NAME)
Build, test, deploy ...
Should I use a common git command to get the code, or is this something the GitLab runner already does? Where is the code?
Why does no one pull code from GitLab in .gitlab-ci.yml?
Edited:
I got the error:
'"git"' is not recognized as an internal or external command
The solution in my case was to restart GitLab-Runner (source).
@MilanVidakovic explained that the source is downloaded automatically (which I didn't know).
I just have one remaining problem: how to get the correct path to my .sln file.
Here is my complete .gitlab-ci.yml file:
variables:
  SOLUTION: missing_path_to_solution #TODO

before_script:
  - dotnet restore

stages:
  - build

build:
  stage: build
  script:
    - echo "Building %CI_COMMIT_REF_NAME% branch."
    - dotnet msbuild /p:Configuration=Release "%SOLUTION%"
  except:
    - tags
I need to set the correct value for the SOLUTION variable. My directory (where GitLab-Runner is located) currently holds these folders/files:
- config.toml
- gitlab-runner.exe
- builds/
  - 7cab42e4/
    - 0/
      - web/        # I think this is the project group in GitLab
        - test/     # I think this is the project name in GitLab
          - .sln
          - AND ALL OTHER PROJECT FILES  # based on a first look
- testm.tmp
So, what are 7cab42e4 and 0? Or better, how do I get the correct path to my project structure? Is there a predefined variable?
Edited2:
The answer is CI_PROJECT_DIR.
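For reference, a minimal sketch of how that could be plugged in (the solution file name here is a placeholder):
variables:
  SOLUTION: '$CI_PROJECT_DIR/test.sln'   # CI_PROJECT_DIR points at the runner's checkout of the repository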
I'm not sure I follow completely.
On every commit, the GitLab runner fetches your repository into C:\gitlab-runner\builds.. on the local machine (computer C), and builds/deploys or does whatever you've specified as the action for that stage.
Also, I don't see the need to build the source code again. If you're using computer C for both the runner and the tests/acceptance, just let the runner do the building and add an artifacts item to your .gitlab-ci.yml. The paths defined under artifacts will retain your executables on computer C, which you can then use for whatever purpose.
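A hedged sketch of such an artifacts entry (the path is a placeholder for wherever your build actually drops its output):
build:
  stage: build
  script:
    - dotnet msbuild /p:Configuration=Release "%SOLUTION%"
  artifacts:
    paths:
      - bin/Release/   # placeholder output path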
Hope it helps.
Edit after comment:
When you push to the repository, GitLab CI/CD automatically checks your root folder for a .gitlab-ci.yml file. If it's there, the runner takes over, parses the file and starts executing the jobs/stages.
As long as the file itself is valid and contains proper jobs and stages, the runner fetches the latest commit (automatically) and does whatever the script items tell it to do.
To verify that everything works correctly, go to your GitLab -> CI/CD -> Pipelines and check what's going on; you should see your pipeline there with its stages and their status.
Maybe it would be best if you posted your .yml file; there could be a number of reasons why your runner is not picking up the code. For instance, maybe the tags in your .yml don't match the tags the runner was registered with.
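For example, roughly like this (the tag name is a placeholder):
build:
  tags:
    - windows-server   # must match a tag the runner was registered with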

"The directory name /app/Views/ is invalid" on ASP.NET Core deployment using docker

I followed this article to set up a performant ASP.NET Core deployment using Docker. This works until I try to start the container using docker run, which calls dotnet MyAppName.dll. There I get an exception that the Views path does not exist:
Unhandled Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentException: The directory name /app/Views/ is invalid.
And that's true, because the folder created by dotnet publish contains only DLL files and no Views folder:
user#server:/etc/jenkins/workspace/App# ll publish-output | grep View
-rwxr--r-- 1 root root 237K Aug 31 19:26 Microsoft.AspNetCore.Mvc.ViewFeatures.dll
I can't understand this, because the Views folder is included in publishOptions, just like wwwroot, which is also missing:
"publishOptions": {
"include": [
"wwwroot",
"Views/**/*.cshtml",
"Areas/**/*.cshtml"
]
},
I also tried "Views" instead of "Views/**/*.cshtml" but not working. In my understanding, those publishOptions should result in copying those folders to the publishing-directory when using dotnet publish.
What am I doing wrong?
I'm using the microsoft/aspnetcore-build:1.0.1 image for building and microsoft/aspnetcore:1.0.1 for starting the app, as recommended as best practice in the article.
UPDATE
This seems to be a problem on Linux only. My Windows 10 development machine works fine; there, the view folders from the main app and the areas are published as expected.
UPDATE #2 Using the examples from the aspnetcore-build repo on Docker Hub, it's not working either.
UPDATE #3 I created a new ASP.NET Core MVC project on my Windows 10 development machine using Visual Studio, then transferred it to the Linux box: not working, the views are missing.
UPDATE #4 Created a new app using dotnet new -t web directly on the Linux box: works as expected!
UPDATE #5 I ran dotnet new -t web on the Windows machine and moved the created folder to the Linux server: not working. Strange...
The problem was a missing space in the documentation before the dot, which should refer to the current folder.
Wrong (1:1 copy from the description of the Docker image):
RUN dotnet publish --output /out/. --configuration Release
Correct:
RUN dotnet publish --output /out/ . --configuration Release
This is where the space was missing: --output /out/{space}.
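For context, here is a hedged multi-stage Dockerfile sketch around that corrected line. The multi-stage layout is my own simplification, not necessarily the article's exact setup; the image tags and the DLL name are taken from the question:
FROM microsoft/aspnetcore-build:1.0.1 AS build
WORKDIR /src
COPY . .
RUN dotnet restore
# the trailing " ." (note the space) makes dotnet publish the project in the current folder
RUN dotnet publish --output /out/ . --configuration Release

FROM microsoft/aspnetcore:1.0.1
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyAppName.dll"]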
