How to read TAP report from another server in Jenkins? - node.js

I've set up Jenkins to run unit tests on Node.js and deploy to other servers if the test coverage meets my condition.
I use AWS with 2 instances to host Jenkins and the apps. Below are the steps I follow:
Setup Jenkins on instance 1.
Launch Jenkins and configure the build step
In the build step, I ssh to instance 2.
cd to src folder at instance 2 and git pull my repository.
Run the unit tests using Istanbul and export the results to test.tap; test.tap is now on instance 2.
Back in Jenkins on instance 1, I configure Publish TAP result in Post-build Actions.
My concern is right here: how can I get the test.tap file from instance 2 so Jenkins can read the report and display it?
Please help me.
Thank you.
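One common approach (a sketch, assuming SSH key auth from instance 1 to instance 2; the host name and remote path below are placeholders for your setup) is to copy the report back into the Jenkins workspace at the end of the build step, so the Publish TAP result post-build action can find it:

```shell
# In the Jenkins "Execute shell" build step, after running the tests on
# instance 2 over ssh, copy the report back into the workspace on instance 1.
# INSTANCE2_HOST and the remote path are assumptions -- adjust to your setup.
INSTANCE2_HOST=ubuntu@instance-2
scp "$INSTANCE2_HOST:/home/ubuntu/src/test.tap" "$WORKSPACE/test.tap"
```

"Publish TAP result" can then be pointed at test.tap, which now sits in the workspace root on instance 1.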

Google Cloud Run Second Flask Application - requirements.txt issue

I have a Google Cloud Run Flask application named "HelloWorld1" already up and running; however, I need to create a second Flask application. I followed the steps below as per the documentation:
1- In "Cloud Shell Editor" I clicked "<> Cloud Code" --> "New Application" --> "Cloud Run Application" --> "Basic Cloud Run Application" --> "Python (Flask): Cloud Run", provided a new folder, and the application was created.
2- When I try to run it using "Run on Cloud Run Emulator" I get the following error:
Starting to run the app using configuration 'Cloud Run: Run/Debug Locally' from .vscode/launch.json...
To view more detailed logs, go to Output channel : "Cloud Run: Run/Debug Locally - Detailed"
Dependency check started
Dependency check succeeded
Starting minikube, this may take a while...................................
minikube successfully started
The minikube profile 'cloud-run-dev-internal' has been scheduled to stop automatically after exiting Cloud Code. To disable this on future deployments, set autoStop to false in your launch configuration /home/mian/newapp/.vscode/launch.json
Update initiated
Update failed with error code DEVINIT_REGISTER_BUILD_DEPS
listing files: file pattern [requirements.txt] must match at least one file
Skaffold exited with code 1.
Cleaning up...
Finished clean up.
I tried the following:
1- Tried to create a different type of application, e.g. Django instead of Flask, but I always get the same error.
2- Tried to give the full path of [requirements.txt] in the Docker settings, no luck.
Could someone please help me understand why I am not able to run a second Cloud Run Flask app due to this error?
It's likely that your Dockerfile references the 'requirements.txt' file, but that file is not in your local directory. So, it gives the error that it's missing:
listing files: file pattern [requirements.txt] must match at least one file
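A minimal way to check, assuming the generated Dockerfile has a COPY line for requirements.txt: make sure a requirements.txt actually exists next to the Dockerfile in the new application's folder. For example (the Flask version pin here is only an illustration, not taken from the template):

```shell
# From the new application's folder: create a minimal requirements.txt
# so the Skaffold file pattern [requirements.txt] matches at least one file.
printf 'Flask==2.0.3\n' > requirements.txt
ls requirements.txt
```

After that, "Run on Cloud Run Emulator" should get past the DEVINIT_REGISTER_BUILD_DEPS dependency step.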

How to handle test tags/karate options from ci/cd gitlab?

I was trying to run the tests from the GitLab CI/CD runner file, but it causes an issue when executing from GitLab.
I have successfully executed the tests locally using the Karate options.
Working fine in a local run:
mvn test -Dkarate.env=stg -Dkarate.options="--tags #Ui" -Dtest.run.mode=localtest -Dtest.run.group=OKCUtest -Dtest=OKCUtest -Dtest.gitlabRunner=false -DbuildDirectory=stg-target/OKCUtest -Dtest.run.testSource=localtest
There are 5 test feature files that were executed using the #Api tag. I have now identified that one of them should be #Ui, changed the respective feature file, created a new pipeline OKCU-UI, and updated the command-line syntax to address the #Ui tests.
Can you try this command?
mvn test -Dkarate.options="--tags ~#Ui"
If that still doesn't work, try the same command with version 0.9.6.RC3.
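For the GitLab side, one detail worth checking: in YAML, an unquoted # that follows whitespace starts a comment, so an unquoted --tags #Ui would be silently truncated in .gitlab-ci.yml. A minimal job sketch (the image and job names are placeholders; note the whole mvn line is quoted):

```yaml
karate-ui-tests:
  image: maven:3.8-openjdk-11   # placeholder image
  stage: test
  script:
    - 'mvn test -Dkarate.env=stg -Dkarate.options="--tags #Ui"'
```

This is a sketch, not the asker's actual pipeline file; the other -D flags from the local run would be appended to the same quoted line.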

Shopware Administation build hangs

Shopware version: 6.4.12.0
I want to build the administration via the ./bin/build-administration.sh command. On my local setup it works great in the Docker container, but when I build it on the server, the build hangs forever on this message:
Calling reporter service for single check.
I already tried to set this in the ./bin/build-administration.sh:
export DISABLE_ADMIN_COMPILATION_TYPECHECK=1
but then it hangs after the injection of the plugins.
Any ideas? Thanks!
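For reference, instead of editing the script itself, the variable can also be set inline for a single invocation (a sketch, assuming it is run from the Shopware project root):

```shell
# Disable the admin TypeScript check for this one build only
DISABLE_ADMIN_COMPILATION_TYPECHECK=1 ./bin/build-administration.sh
```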

Want Jenkins pipeline script to create a docker container with a test database, test against it, destroy container

I've created a git repo for an application (A) that contains a Dockerfile and docker-compose.yml that stand up a Postgres database and create and populate some tables. I use this as a support app for testing purposes during development, as a disposable database.
I'd like to use this docker app in a Jenkins pipeline for testing my main application (B), which is a NodeJS app that reads and writes to the database. Application B is also in git and I want to use a Jenkins pipeline to run its tests (written in Mocha). So my overall pipeline logic would be something like this:
Triggering Event:
Code for application B is pushed to some branch (feature or master) to git.
Pipeline:
git checkout code for Application B (implicit)
git checkout code for Application A (explicitly)
cd to Application A directory:
docker-compose up -d // start postgres container
cd to Application B directory:
npm install
npm run test (kicks off my Mocha tests that expect postgres db with localhost:5432 url)
cd to Application A directory
docker-compose down // destroy postgres container
// if tests pass, deploy application B
I'm trying to figure out the best way to structure this. I'm really checking out code from two repos: The one I want to test and build, and another repo that contains a "support" application for testing, essentially mocking my real database.
Would I use a script or declarative pipeline?
The pipeline operates in a workspace directory for application B that is implicitly checked out when the pipeline is triggered. Do I just checkout the code for Application A within this workspace and run docker commands on it?
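A declarative pipeline fits this shape well: check out Application A into a subdirectory of Application B's workspace, and use a post block so the container is destroyed whether the tests pass or fail. A sketch (the repo URL and directory name are placeholders, and it assumes docker-compose and npm are available on the agent):

```groovy
// Jenkinsfile (declarative) in Application B's repo; names/URLs are placeholders
pipeline {
    agent any
    stages {
        stage('Checkout support app') {
            steps {
                // Application B is checked out implicitly; fetch Application A
                // into a subdirectory of the same workspace.
                dir('app-a') {
                    git url: 'https://example.com/your-org/app-a.git', branch: 'master'
                }
            }
        }
        stage('Start test database') {
            steps {
                dir('app-a') { sh 'docker-compose up -d' }
            }
        }
        stage('Test') {
            steps {
                sh 'npm install'
                sh 'npm run test'   // Mocha tests expect postgres at localhost:5432
            }
        }
    }
    post {
        always {
            // Destroy the postgres container whether tests passed or failed
            dir('app-a') { sh 'docker-compose down' }
        }
    }
}
```

A deploy stage for Application B can be added after 'Test'; declarative stages only run if the previous ones succeeded, which matches the "if tests pass, deploy" step above.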

GitLab CI/CD pull code from repository before building ASP.NET Core

I have GitLab running on computer A, development environment (Visual studio Pro) on computer B and Windows Server on computer C.
I set up GitLab-Runner on computer C (Windows server). I also set up .gitlab-ci.yml file to perform build and run tests for ASP.NET Core application on every commit.
I don't know how to get the code onto computer C (Windows server) so I can build it (dotnet msbuild /p:Configuration=Release "%SOLUTION%"). It bothers me that not a single example .gitlab-ci.yml I found on the net pulls code from GitLab before building the application. Why?
Is this the correct way to set up CI/CD:
User creates a pull request (a new branch is created)
User writes code
User commits code to the branch from computer B.
The GitLab runner is started on computer C.
It needs to pull the code from the current branch (CI_COMMIT_REF_NAME)
Build, test, deploy ...
Should I use common git commands to get the code, or is this something the GitLab runner already does? Where is the code?
Why does no one pull code from GitLab in .gitlab-ci.yml?
Edited:
I get the error:
'"git"' is not recognized as an internal or external command
The solution in my case was to restart GitLab-Runner. Source.
#MilanVidakovic explained that the source is downloaded automatically (which I didn't know).
I just have one remaining problem: how to get the correct path to my .sln file.
Here is my complete .gitlab-ci.yml file:
variables:
  SOLUTION: missing_path_to_solution #TODO
before_script:
  - dotnet restore
stages:
  - build
build:
  stage: build
  script:
    - echo "Building %CI_COMMIT_REF_NAME% branch."
    - dotnet msbuild /p:Configuration=Release "%SOLUTION%"
  except:
    - tags
I need to set the correct value for SOLUTION. My dir (where GitLab-Runner is located) currently holds these folders/files:
- config.toml
- gitlab-runner.exe
- builds/
  - 7cab42e4/
    - 0/
      - web/ # I think this is the project group in GitLab
        - test/ # I think this is the project name in GitLab
          - .sln
          - AND ALL OTHER PROJECT FILES # based on a first look
- testm.tmp
So, what are 7cab42e4 and 0? Or better: how do I get the correct path to my project structure? Is there a predefined variable?
Edited2:
The answer is CI_PROJECT_DIR.
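With that predefined variable, SOLUTION can be anchored to the runner's checkout directory. A sketch (MyApp.sln is a hypothetical name; substitute your actual solution file):

```yaml
variables:
  SOLUTION: $CI_PROJECT_DIR/MyApp.sln   # hypothetical .sln name
```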
I'm not sure I follow completely.
On every commit, the GitLab runner fetches your repository to C:\gitlab-runner\builds.. on the local machine (computer C), and builds/deploys or does whatever you've specified as an action for the stage.
Also, I don't see the need for building the source code again. If you're using computer C for both the runner and tests/acceptance, just let the runner do the building and add an artifacts item in your .gitlab-ci.yml. The path defined in artifacts will retain your executables on computer C, which you can then use for whatever purposes.
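An artifacts item of the kind mentioned might look like this (a sketch; the output path is a placeholder for wherever your Release build drops its binaries):

```yaml
build:
  stage: build
  script:
    - dotnet msbuild /p:Configuration=Release "%SOLUTION%"
  artifacts:
    paths:
      - bin/Release/   # placeholder output path
```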
Hope it helps.
Edit after comment:
When you push to the repository, GitLab CI/CD automatically checks your root folder for a .gitlab-ci.yml file. If it's there, the runner takes over, parses the file and starts executing jobs/stages.
As long as the file itself is valid and contains proper jobs and stages, the runner fetches the latest commit (automatically) and does whatever the script item tells it to do.
To verify that everything works correctly, go to your GitLab -> CI / CD -> Pipelines and check what's going on.
Maybe it would be best if you posted your .yml file; there could be a number of reasons your runner is not picking up the code. For instance, maybe your .yml tags don't match what the runner is set up to pick up, etc.
