API for gathering gitlab CI job execution statistics? - gitlab

I have a self-hosted GitLab Enterprise server (11.2.3-ee) and run CI builds for C++ in Docker images. There is a pool of runners on different physical hosts, and for the most part any job can be scheduled on any runner. The physical hosts are not identical -- some are faster than others.
The admin area on the server (at https://gitlab.example.com/admin/jobs) shows statistics at the job level, but I cannot find anything equivalent in the API. I've tried looking in the source, but I am not familiar enough with its organization to have found anything yet.
Is there API access to job stats across the entire instance (not just at the project level), or am I constrained to scraping the admin screen?
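The closest workaround I've found so far is to aggregate the documented per-project endpoints myself (GET /projects and GET /projects/:id/jobs; each job object includes duration and runner fields). A rough sketch, assuming a token with sufficient read access (the aggregation itself is just illustrative):

```python
# Aggregate job durations per runner across all visible projects,
# using the documented per-project jobs API.
import requests

GITLAB = "https://gitlab.example.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "..."}  # placeholder token with read access

def paged(url, params=None):
    """Yield items from a paginated GitLab API endpoint."""
    params = dict(params or {}, per_page=100, page=1)
    while True:
        resp = requests.get(url, headers=HEADERS, params=params)
        resp.raise_for_status()
        items = resp.json()
        if not items:
            return
        yield from items
        params["page"] += 1

durations = {}  # runner description -> list of job durations in seconds
for project in paged(f"{GITLAB}/projects"):
    for job in paged(f"{GITLAB}/projects/{project['id']}/jobs"):
        runner = (job.get("runner") or {}).get("description", "unknown")
        if job.get("duration") is not None:
            durations.setdefault(runner, []).append(job["duration"])

for runner, ds in sorted(durations.items()):
    print(f"{runner}: n={len(ds)} avg={sum(ds) / len(ds):.1f}s")
```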

Related

How would one create an isolated Jenkins build node (without access to secrets)?

As the Top 10 CI/CD Security Risks SEC-04 states:
Ensure that pipelines running unreviewed code are executed on isolated nodes, not exposed to secrets and sensitive environments.
The above statement seems especially true when the code (or the pipeline code itself) is in a pull request that has not yet been reviewed, approved, or merged, but where, from a developer's perspective, you still want to know whether it builds successfully in the first place. Running code that nobody has laid eyes on, while it has access to build secrets, is definitely a security risk.
I'm wondering whether isolation is achievable with Jenkins build nodes, as I cannot find any specific options for this.
My assumption is that dynamically provisioned containerized agents are best suited for isolated environments; I'm just not sure how to prevent their access to secrets from the Jenkins controller.

Why am I getting long docker build times in Azure pipeline?

I'm having trouble troubleshooting long build times in my ADO pipeline. Build steps are taking much longer than expected. We're using self-hosted agents -- I suspect the problem may lie there, but I wanted input on any other directions I should take in investigating.
From your description, do you mean that the same pipeline takes much less time when run on a Microsoft-hosted agent?
It's suggested that you compare the pipeline debug logs from the Microsoft-hosted agent and the self-hosted agent to see which tasks and steps run at different speeds. You could also share the logs with us for investigation.
You could also configure another self-hosted agent on another local machine and test again. The time the same pipeline takes usually varies with agent performance.
For more context, a pipeline run on a self-hosted agent is affected by multiple factors, such as hardware specifications, network transmission, and so on.
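For reference, a minimal sketch of turning on those debug logs by setting the predefined system.debug variable in the pipeline YAML (the build step shown is just a placeholder):

```yaml
# azure-pipelines.yml -- enable verbose debug logging so per-step
# timings and task internals appear in the logs.
variables:
  system.debug: 'true'

steps:
  - script: docker build -t myapp .   # placeholder step to compare timings on
    displayName: Docker build
```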

Is it possible to configure GitLab.com's default shared runners?

I am facing an out-of-memory error in one of my CI/CD pipelines, so I would like to customise the configuration of GitLab's shared runners, for example via a config.toml file. I would also like to avoid self-hosting a GitLab Runner instance, if possible.
Is there a way of doing that?
As far as I know, there is no way of changing the config.
However, according to this doc, I can pick from 3 machine sizes with up to 16 GB RAM, and add the appropriate tag at the job level in my .gitlab-ci.yml.
Note that this will impact the CI/CD minutes cost factor.
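As a sketch, selecting one of those machine sizes is just a job-level tag (the tag names below are the documented GitLab SaaS Linux runner tags; the job and script are placeholders):

```yaml
# .gitlab-ci.yml -- request a larger SaaS Linux machine for one job.
build:
  tags:
    - saas-linux-large-amd64   # also: saas-linux-medium-amd64; default is small
  script:
    - make build               # placeholder build command
```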
For GitLab Premium and Ultimate (so, not the Free tier), there is GitLab 15.4 (September 2022), which comes with:
More powerful Linux machine types for GitLab SaaS runners
When you run jobs on GitLab SaaS Linux runners, you now have access to more powerful machine types: medium and large. With these two machine types, you have more choices for your GitLab SaaS CI/CD jobs. And with 100% job isolation on an ephemeral virtual machine, and security and autoscaling fully managed by GitLab, you can confidently run your critical CI/CD jobs on GitLab SaaS.
See Documentation and Issue.

How often does Gitlab Runner connect to gitlab.com?

I'm new to CI/CD.
I installed Gitlab Runner on my VPS for my project.
The first pipeline passed successfully after a push to the master branch.
Questions:
Gitlab Runner does not listen to any port. Instead, it connects to the Gitlab server itself and checks if there are tasks for it. Is that true?
If so, how often does Gitlab Runner connect to gitlab.com?
Is it dangerous to keep Gitlab Runner and the project on the same VPS?
Gitlab Runner does not listen to any port. Instead, it connects to the Gitlab server itself and checks if there are tasks for it. Is that true?
Yes. It is exclusively a "pull" mechanism as far as jobs go, though the runner may open ports for additional functionality, such as the session server.
If so, how often does Gitlab Runner connect to gitlab.com?
By default, every 3 seconds to check for new jobs (assuming the concurrency limit has not been reached). This can be configured through the check_interval setting in config.toml.
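For illustration, check_interval is a global setting in the runner's config.toml (a minimal sketch; the [[runners]] section is abridged and the values are placeholders):

```toml
# /etc/gitlab-runner/config.toml -- poll the GitLab server every 10 seconds
# instead of the 3-second default.
concurrent = 1
check_interval = 10

[[runners]]
  name = "my-runner"            # placeholder
  url = "https://gitlab.com"
  executor = "docker"
```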
Is it dangerous to keep Gitlab Runner and the project on the same VPS?
It can be, so this should be avoided where possible. The degree of danger depends somewhat on which executor you use and how you configure your server.
The file system of your GitLab server is extremely sensitive; it contains many critical secrets and operational components that should not be exposed to anyone (including CI jobs). Beyond that, jobs also have the potential to cause performance problems: if a job uses too much memory, CPU, or I/O, it can cause resource contention and degrade or crash the other GitLab components on the server.
You might be able to get away with a single VPS if you deploy the server and the runner as separate containers (for example, a Docker or Helm deployment of GitLab plus the runner on the VPS host), using separate logical Docker volumes and setting resource constraints via Docker. But there is still some additional risk in such a scenario, and configuration mistakes (which may be easier to make than one might think) can have serious consequences.
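A rough sketch of that containerized layout (the images are the official ones; hostnames, volume names, and resource limits are illustrative assumptions):

```yaml
# docker-compose.yml -- GitLab and the runner as separate containers with
# their own volumes and resource limits, so a greedy CI job cannot starve
# the GitLab server itself.
services:
  gitlab:
    image: gitlab/gitlab-ee:latest
    hostname: gitlab.example.com                   # placeholder hostname
    ports: ["443:443", "80:80", "22:22"]
    volumes:
      - gitlab-config:/etc/gitlab
      - gitlab-data:/var/opt/gitlab
    mem_limit: 8g                                  # illustrative limits
    cpus: 4
  runner:
    image: gitlab/gitlab-runner:latest
    volumes:
      - runner-config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock  # Docker executor; widens the attack surface
    mem_limit: 2g
    cpus: 2
volumes:
  gitlab-config:
  gitlab-data:
  runner-config:
```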

Azure web app deployment using vscode is faster than devops pipeline

Currently, I am working on a Django-based project that is deployed to Azure App Service. There are two options for deploying to App Service: via DevOps, and via a VS Code plugin. Both scenarios work fine, but strangely, deploying via DevOps is slower than deploying via VS Code. Via DevOps it usually takes around 17-18 minutes, whereas via VS Code it takes less than 14 minutes.
Is there any reason behind this?
Assuming you're using Microsoft-hosted build agents, the following statements are true:
With Microsoft-hosted agents, maintenance and upgrades are taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded after one use.
and
Parallel jobs represents the number of jobs you can run at the same time in your organization. If your organization has a single parallel job, you can run a single job at a time in your organization, with any additional concurrent jobs being queued until the first job completes. To run two jobs at the same time, you need two parallel jobs.
Microsoft provides a free tier of service by default in every organization that includes at least one parallel job. Depending on the number of concurrent pipelines you need to run, you might need more parallel jobs to use multiple Microsoft-hosted or self-hosted agents at the same time.
The first statement might cause an Azure Pipeline to be slower because the agent does not have any cached information about your project. If you're only talking about deploying, the pipeline first needs to download (and extract?) an artifact before it can deploy it. If you're also building, it might need to bring in the entire source code and/or external packages before being able to build.
The second statement might make it slower because there might be less parallelization possible than on the local machine.
Besides these two possible reasons, the agents most probably do not have the specs of your development machine, causing them to run tasks more slowly than on your local machine.
You could look into hosting your own agents to eliminate these possible reasons.
Do self-hosted agents have any performance advantages over Microsoft-hosted agents?
In many cases, yes. Specifically:
If you use a self-hosted agent, you can run incremental builds. For example, if you define a pipeline that does not clean the repo and does not perform a clean build, your builds will typically run faster. When you use a Microsoft-hosted agent, you don't get these benefits because the agent is destroyed after the build or release pipeline is completed.
A Microsoft-hosted agent can take longer to start your build. While it often takes just a few seconds for your job to be assigned to a Microsoft-hosted agent, it can sometimes take several minutes for an agent to be allocated depending on the load on our system.
More information: Azure Pipelines Agents
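For example, the incremental-build behavior mentioned above comes down to not cleaning the working directory between runs; a minimal sketch on a self-hosted agent (the pool name and build step are placeholders):

```yaml
# azure-pipelines.yml -- keep the working directory between runs on a
# self-hosted agent so builds can be incremental.
pool:
  name: MySelfHostedPool       # placeholder agent pool
steps:
  - checkout: self
    clean: false               # do not wipe the repo before each run
  - script: echo "build here"  # placeholder build step
```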
When you deploy via a DevOps pipeline, you go through many more steps. See below:
Process the pipeline --> Request agents (wait for an available agent to be allocated to run the jobs) --> Download all the tasks needed to run the job --> Run each step in the job (download source code, restore, build, publish, deploy, etc.).
If you deploy the project in a release pipeline, the above process has to be repeated in the release pipeline.
You can check the document Pipeline run sequence for more information.
However, when you deploy via the VS Code plugin, your project is restored and built on your local machine and then deployed to the Azure web app directly from your machine. So deploying via the VS Code plugin is faster, since far fewer steps are needed.
