Built-in feature of Puppet Enterprise - puppet

Is there any built-in feature of Puppet Enterprise that we can use to have a push-based mechanism, apart from MCollective and Bolt?

Assuming by 'push based' you mean triggering Puppet agent runs, you can trigger a run in the PE console directly, or (as you mentioned) use Bolt in the PE console to run Puppet as part of a 'plan' or 'task'.
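For illustration, pushing a run with Bolt could look like the following YAML plan. This is a minimal sketch, assuming the puppetlabs-puppet_agent module (which provides the puppet_agent::run task) is installed; the module name, plan name, and file layout are hypothetical:

# plans/push_run.yaml in a hypothetical 'ops' module;
# assumes the puppetlabs-puppet_agent module's run task is available
parameters:
  targets:
    type: TargetSpec
steps:
  - name: trigger_agent
    task: puppet_agent::run
    targets: $targets
return: $trigger_agent

You could then push a run on demand with something like bolt plan run ops::push_run --targets web01,web02.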

Related

Installing compilers on Azure Pipelines

I have a C project that I'd like to be tested on multiple different C compilers. I'm currently testing it using Azure Pipelines, but I'm not sure of the best way to add more compilers to my workflow.
Currently, I just use a script to sudo apt install a few other things I need for testing, but Azure warns me not to do this. I also run into a problem where the latest version of TCC isn't available through apt install, so I currently can't test that through my current method.
Is there a proper way to do this? I'm thinking maybe specify a VM for Azure to use, onto which I've already installed whatever software I need. I have no idea if this is possible or how to do it though. Looking through the Azure Pipelines documentation hasn't been very helpful either since I don't know what I'm looking for.
(Please let me know if anything is not clear, I'm not 100% sure of the proper terminology surrounding this.)
EDIT: I basically want to be able to add something like this to my azure-pipelines.yml:
- job:
  displayName: "C TCC Ubuntu"
  pool:
    vmImage: 'ubuntu-latest'
  steps:
    - script: |
        set -e
        cmake -DCMAKE_C_COMPILER=tcc .
        make
      displayName: "Compile"
    - script: ./tests
      displayName: "Run Tests"
except with the vmImage being a custom one onto which I've already installed tcc. In case that's not possible, any other sort of work-around is also appreciated.
Azure DevOps Pipelines has two models for agents: self-hosted and hosted. You could run a self-hosted agent on which you preinstall your toolchain. That brings with it the management of that server and the cost of it sitting idle. To go self-hosted, here are the docs that walk you through the installation.
I would encourage you to use the hosted agents, as they give you the most flexibility and don't limit you to just one operating system to execute your build against, if you so desire. With that said, the common pattern with the hosted agents is to install your tools in a task, like you said you are doing. The Azure DevOps Extension Marketplace has several examples of people creating extensions to install tools. Here is an example for Rust; notice the installer screenshot.
If you don't want to incur the penalty of installing your compiler on every build, you could also leverage the ability of the hosted agents to use a container to build your software. You could then prebuild a container image that has your compiler and other tools installed and instruct Azure DevOps to use that in the hosted agent to do your compilation. Here is that documentation.
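To sketch that container option concretely (a hedged example; 'myorg/tcc-build:latest' is a placeholder for an image you would prebuild with tcc, cmake, and make installed):

- job: build_tcc
  displayName: "C TCC Ubuntu (container)"
  pool:
    vmImage: 'ubuntu-latest'
  # hypothetical prebuilt image containing tcc and the rest of the toolchain
  container: 'myorg/tcc-build:latest'
  steps:
    - script: |
        set -e
        cmake -DCMAKE_C_COMPILER=tcc .
        make
      displayName: "Compile"
    - script: ./tests
      displayName: "Run Tests"

The steps run inside the container, so nothing needs to be installed at build time.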

Is it possible to script the flow/stages/steps in Azure Pipelines?

I'm trying to set up Azure Pipelines for a CI setup and I'm using the YAML syntax to get started. However, I was wondering if it is possible to script the flow at "runtime", like you can do in a Jenkins script: spawn builds, etc.
Depending on the commit I want to have a vastly different flow.
This is because I currently have a mono-repo setup with Conan libraries and I want to rebuild the libraries that are necessary depending on the commit, thus the build-flow is not the same for each commit. I want to spawn jobs so I can take advantage of parallel building on several agents.
For your issue, do you mean triggering builds based on specific commits? If so, you can trigger builds by adding a tag trigger in YAML. You can create tags on the commits; if a created tag meets the trigger condition of the tag trigger in the YAML, then the build will be triggered.
trigger:
  tags:
    include:
      - v2.*
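As a sketch going one step beyond the tag trigger itself (this uses Azure Pipelines job conditions, which is an assumption on my part rather than part of the answer above), you could vary the flow per tag by gating jobs on the tag name:

# run this job only for builds triggered by a v2.* tag;
# Build.SourceBranch is 'refs/tags/<tag>' for tag-triggered builds
- job: build_v2_libraries
  condition: startsWith(variables['Build.SourceBranch'], 'refs/tags/v2.')
  steps:
    - script: echo "rebuild only the v2 libraries here"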

Automate adding capabilities to Azure DevOps self-hosted agents

As far as I know, Azure DevOps agents are capable of automatically detecting their own capabilities. Based on the documentation, as long as I restart the host once a new piece of software has been installed, the capability should be registered automatically.
What I am having trouble with right now is getting the agent to detect the presence of Yarn on a self-hosted agent on a Windows host. Looking at the PATH environment variable shows the existence of the Yarn executable, but it is not listed as a capability despite my having restarted the host. My current workaround is to manually add Yarn to the capability list and set its value to true.
As a side note, Yarn was installed via Ansible using the win_chocolatey module. The install was successful, with no errors.
I am wondering a few things
1) Am I missing something which is causing this issue?
2) Is this an inherent issue with Yarn? If this is an inherent issue with Yarn, is there a way to automate the process of manually adding yarn as a capability?
Capabilities for a Windows agent come from the environment variables.
If you want to set a value, you add a line that adds an entry to the machine:
[System.Environment]::SetEnvironmentVariable("CAPABILITYNAME", "value", "Machine")
When you start the service, it then picks this up.
I am currently trying to do something similar for a set of Linux agents...
The interesting thing about capabilities is that they are not paths. For example, an agent might show it has MSBuild for 2019 and 2017, but I have not been able to use those as pipeline variables.
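Once a capability is registered, a pipeline can demand it so the job only lands on agents that advertise it. A minimal sketch, assuming a self-hosted pool named 'Default' and a capability named 'yarn':

# route this job to an agent whose capabilities include 'yarn'
pool:
  name: Default
  demands:
    - yarn
steps:
  - script: yarn --version
    displayName: "Check Yarn"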

VSTS CI/CD definition as scripts

We are building a microservices-based architecture and have 50-odd CI and 50-odd CD pipelines. Is there a way to script the CI/CD build and release definitions? We want this to be a repeatable process and do not want to leave it to our DevOps engineer(s), as it is prone to errors. Please note that I am not talking about ARM (which is already being used by us). Is there a way to do the above?
For builds, you can use YAML builds, which are currently in preview.
For releases, there's nothing equivalent yet.
You could always use the REST APIs to extract the build and release definitions as JSON, source control them, and then create a continuous delivery pipeline to update them when the definitions in source control change.
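As a rough sketch of that extraction step, run from a pipeline (ORG, PROJECT, the definition IDs, and the secret variable name are placeholders, and the api-version may differ for your server):

steps:
  - script: |
      # export one build definition and one release definition as JSON
      curl -s -u ":$PAT" \
        "https://dev.azure.com/ORG/PROJECT/_apis/build/definitions/42?api-version=5.0" \
        -o build-def-42.json
      curl -s -u ":$PAT" \
        "https://vsrm.dev.azure.com/ORG/PROJECT/_apis/release/definitions/7?api-version=5.0" \
        -o release-def-7.json
    displayName: "Export definitions as JSON"
    env:
      PAT: $(personalAccessToken)  # a secret pipeline variable (placeholder name)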

Configure GitLab to build the source code on another machine

We have two servers in our organisation.
1) A server with GitLab
2) A build server
I would like to have an automated build happen on the second machine (the build server) for the source code on the GitLab server.
How can I achieve this using GitLab?
Thanks,
siva
If you are moving from a "pull" continuous integration system (e.g. one using a kind of crontab that regularly checks whether the source code on the versioning system has changed and starts the configure/build/test/deploy stages if it has), then know that GitLab has a much better way of doing this.
GitLab's approach is to configure a "push" system: every time the code is updated (in any branch) on the Git repository, the script defined in your .gitlab-ci.yml is read to see if continuous integration jobs have to be launched. Jobs are sent to your configured GitLab runners. GitLab runners are defined on your build server(s) and pick up jobs as they come in.
The definition of what to do is also described in the .gitlab-ci.yml.
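A minimal .gitlab-ci.yml sketch (the commands are placeholders for your actual build; the jobs are picked up by whatever runner you register on your build server):

stages:
  - build
  - test

build_job:
  stage: build
  script:
    - ./configure
    - make

test_job:
  stage: test
  script:
    - make test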
Here is a list of documentation to start learning about GitLab CI:
The official documentation can be helpful.
A general introduction to GitLab CI using Docker can be found in this blog article (the first slides are great). If your build server or your intended build is on Linux, I would recommend using the "docker executor" (i.e. GitLab runners execute each job inside a Docker container on your build server). It is easy and quick to set up.
Hope this helps you get started...
