A centralized pipeline collects jest-allure artifacts from other Jenkins pipelines, aggregates them, and generates an Allure report page.
A single artifact is an allure-results folder containing various XML files generated by running Jest.
The centralized pipeline takes the content of each individual artifact and copies it into a single allure-results folder.
The issue is that the final aggregated Allure page cannot show where each result originally came from.
I'm trying to implement Allure categories so I can sort the results based on where they were originally generated.
How would you implement that?
I'm trying to build an ML pipeline using Alteryx. I'll be pulling data via an API and building an automated workflow. But first, until I get a license and confirm the approach, I'd like to use flat files to build the pipeline. The data in the flat files and the data from the API (batch) would essentially be the same. Can I develop the full pipeline first and then swap the ingestion portion out for the API connector later?
I have searched online but haven't found an answer to this.
I have a number of solutions, each of which has a mixture of applications and libraries. Generally speaking, the applications get built and deployed, and the libraries get published as NuGet packages to our internal packages feed. I'll call these "apps" and "nugets."
In my Classic Pipelines, I would have one build for the apps, and one for the nugets. With path filters, I would indicate folders that contain the nuget material, and only trigger the nuget build if those folders had changes. Likewise, the app build would have path filters to detect if any app code had changed. As a result, depending on what was changed in a branch, the app build might run, the nuget build might run, or both might run.
Now I'm trying to convert these to YAML. It seems we can only have one pipeline set up for CI, so I've combined the stages/jobs/steps for nugets and apps into this single pipeline. However, I can't seem to figure out a good way to only trigger the nuget tasks if the nuget path filters are satisfied and only the app tasks if the app path filters are satisfied.
I am hoping someone knows a way to do something similar to one of the following (or anything else that would solve the issue):
Have two different CI pipelines with their own sets of triggers and branch/path filters such that one or both might run on a given branch change
Set some variables based on which paths have changes so that I could later trigger the appropriate tasks using conditions
Make a pipeline always trigger, but only do tasks if a path filter is satisfied (so that the nuget build could always run, but not necessarily do anything, and then the app build could be triggered by the nuget build completing, and itself only do stuff if path filters are satisfied).
It seems we can only have one pipeline set up for CI
My issue was that this was an erroneous conclusion. It appeared to me that, out of the box, a Pipeline is created for a repo with a YAML file in it, and that you can change which file the Pipeline uses, but you can't add a list of files for it to use. I did not realize I could create an additional Pipeline in the UI, and then associate it to a different YAML file in the repo.
Basically, my inexperience with this topic is showing. To future people who might find this, note that you can create as many Pipelines in the UI as you want, and associate each one to a different YAML file.
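For illustration, here is a minimal sketch of what the two trigger sections might look like - the file names (nuget-pipeline.yml, app-pipeline.yml) and the folder layout (src/Packages, src/Apps) are placeholders, and each file would be registered as its own Pipeline in the UI:

# nuget-pipeline.yml - only runs when library code changes
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - src/Packages

steps:
  - script: dotnet pack --configuration Release
    displayName: Pack the NuGet packages

# app-pipeline.yml - a second Pipeline in the UI pointing at this file
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - src/Apps

steps:
  - script: dotnet build --configuration Release
    displayName: Build the applications

With this layout, a change that touches both folders triggers both Pipelines, mirroring the Classic behaviour described above.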
I am migrating from Jenkins CI to GitLab CI. In Jenkins I was able to parse some extra output files - for example a my_results.xml file with a few lines of XML that we could parse into a custom visualisation, such as a graph of warnings and errors per build over time.
My XML might have some simple lines like:
<summary>
<warnings>10</warnings>
<errors>2</errors>
</summary>
This would be displayed as a graph over time. Is it possible to write a custom parser / visualiser in GitLab CI?
There is no GitLab equivalent of Jenkins plugins.
But you could - for instance - develop a side application that interacts with GitLab through its APIs.
There you'll be able to do whatever you want - for instance, download job artifacts, store them in a time-series database, and display them in a dashboard.
If the application is a pure web client, it could even be hosted on GitLab Pages.
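As a minimal sketch, the job that produces my_results.xml could simply keep it as an artifact so the side application can fetch it over the API (the job name and script are made up):

# .gitlab-ci.yml - keep the XML summary as a downloadable artifact
checks:
  stage: test
  script:
    - ./run_checks.sh          # hypothetical script that writes my_results.xml
  artifacts:
    paths:
      - my_results.xml
    expire_in: 4 weeks

# The side application can then download the file per job, e.g. via
# GET /projects/:id/jobs/:job_id/artifacts/my_results.xml
# parse the <warnings>/<errors> counts and plot them over time.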
I am trying to use ARM-TTK to do unit testing on my ARM templates and ensure that the templates are uniform. I am only running a few tests.
We are using Azure Repos as our VCS.
I have incorporated this into my AZDO pipeline as a pre-PR-merge task in the form of a branch policy, so that before a PR is merged, these tests run and validate all the templates being pushed to the main branch.
But the problem is, the tests are returning false positives even though there are no issues with the JSON files.
According to the ARM-TTK documentation, it seems there has to be one azuredeploy.json or maintemplate.json, and all the other files are tested as linked templates.
I have JSON files with other names pertaining to the function of the template, like win_vm_deploy.json, function_app-deploy.json, etc.
It is not possible for me to have all the files as linked templates to the azuredeploy.json or maintemplate.json as mentioned in the URL.
I would also like to run the selected tests against the files loaded in the repo automatically and not specify a particular file to run the tests against.
So does that mean that in my situation I won't be able to use ARM-TTK and utilize the unit tests?
What is the best way to check the templates in my folder and utilize the ARM-TTK tests I choose, without having to have a main template with the other templates as linked templates?
Appreciate any help
When several people are working together to create a complex deployment, it is recommended to use separate JSON files linked to an azureDeploy.json or a mainTemplate.json file. But it's not mandatory to do this in every case.
To test one file in that folder, add the -File parameter. However, the folder must still have a main template named azuredeploy.json or maintemplate.json. In your case, all files need to be specified in a script; there is no shortcut available for automation.
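A rough sketch of such a script, wrapped in an Azure Pipelines step, might look like the following - the templates/ folder, the selected test name, and the module path are assumptions you would adjust:

steps:
  - pwsh: |
      # Assumes the arm-ttk repository has been cloned into the workspace
      Import-Module ./arm-ttk/arm-ttk/arm-ttk.psd1
      $failed = $false
      Get-ChildItem -Path ./templates -Filter *.json | ForEach-Object {
        # Run only the selected tests against each template file individually
        $results = Test-AzTemplate -TemplatePath $_.FullName -Test 'Parameters Must Be Referenced'
        $results | Where-Object { -not $_.Passed } | ForEach-Object {
          Write-Host "##vso[task.logissue type=error]$($_.Name) failed"
          $failed = $true
        }
      }
      if ($failed) { exit 1 }
    displayName: Run selected ARM-TTK tests per template file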
You can customize the default tests or even create your own. You can implement your own set of rules by authoring custom tests. A custom test needs to be placed in the correct directory:
/arm-ttk/testcases/deploymentTemplate
You can check this documentation for more information.
Also try these tasks for integration with Azure Pipelines.
I understand that GitLab has support for Jenkins CI, but what I need is a lot less than that.
I have a Rails application and get the coverage from the tests using simplecov. It generates HTML output in a directory by running a rake task. I would like to see the current coverage through GitLab. Is there a simple way to integrate this report with GitLab?
I fear there is still no easy way to integrate code coverage reports, but GitLab now supports build jobs for your code (integrated since version 8.0). Unfortunately you have to implement your solution by writing a custom .gitlab-ci.yml that runs your coverage tests. For viewing the reports, you can expose the generated output as "artifacts" or publish it on GitLab Pages.
For more information, see here: https://about.gitlab.com/gitlab-ci/
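A rough .gitlab-ci.yml sketch could look like this (the rake task name is an assumption, and coverage/ is simplecov's default output directory):

test:
  script:
    - bundle install
    - bundle exec rake test        # assumed rake task that runs the suite with simplecov
  artifacts:
    paths:
      - coverage/                  # simplecov's HTML report

pages:
  stage: deploy
  dependencies:
    - test
  script:
    - mkdir -p public
    - cp -r coverage public/       # serve the report via GitLab Pages
  artifacts:
    paths:
      - public
  only:
    - master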
Additionally you can parse a text output to display a short code coverage report:
(Enable builds and output test coverage)
Go to "Project Settings" -> Builds
Add to "Test coverage parsing" a regular expression (examples below, simplecov included)
See Publish Code Coverage Report with GitLab Pages
The short answer: Unfortunately there is no easy way to do this.
The longer answer:
GitLab does not yet have Jenkins support.
What you basically need is a service like GitLab CI or Jenkins CI, which runs simplecov and posts the output back to GitLab. Unfortunately GitLab does not offer such functionality yet.
But I know of other organizations that run a Jenkins service for GitLab which automatically comments on git pushes with the Jenkins result.
You now (June 2020, GitLab 13.1) have code coverage history, in addition to Test coverage parsing.
Graph code coverage changes over time for a project
All too often, a project has a code coverage target, but development teams might not have much visibility into which direction that value is trending over time.
There needs to be an easier way to track changes in code coverage over time without that extra hassle.
The Code Coverage graph now provides better visibility into how code coverage is trending over time.
It displays a simple graph of the coverage value(s) calculated in pipelines.
See Documentation and Issue
With GitLab 13.6 (November 2020), you also have (though not in the free tier):
Display code coverage data for selected projects
In 13.4, we released the first iteration of Code Coverage data for Groups, which enables you to compare the coverage of multiple projects and download the data in a single file from a single screen. However, to analyze the data, you had to open the file, check it manually, and probably import it into a spreadsheet for further analysis.
In GitLab 13.6, you can now select specific projects in a group to see their latest coverage values directly in GitLab itself without needing to download a file or waste development time accessing code coverage data. We welcome feedback on the functionality and possible iterations for this feature in our feedback issue.
See Documentation and Issue.