Puppet Run Stages with sub modules

I am learning Puppet and have taken their "Getting Started with Puppet" class but it did not cover Run Stages, and their documentation on Run Stages is thin.
I need to make sure that two things happen before anything else that Puppet does. I have been advised by the instructor of my "Getting Started with Puppet" class to look at Run Stages.
In my investigation of Run Stages, I have learned that the puppetlabs-stdlib module sets up some "standard" run stages, one of them being "setup". As shown in the snippet below, I have implemented stage => 'setup' as per https://puppet.com/docs/puppet/5.5/lang_run_stages.html. However, I am getting errors from Puppet:
root@server:~# puppet agent -t
Info: Using configured environment 'dev_branch'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER:
Server Error: Evaluation Error: Error while evaluating a Resource Statement, Could not find stage setup specified by
Class[Vpn::Roles::Vpn::Client] (file:
/etc/puppetlabs/code/environments/wal_prod1910_dev/modules/bh/manifests/roles/snaplocker.pp, line: 5, column: 3) on node server
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Looking at the error message and the Puppet documentation, I have added quotation marks around the various string values and replaced my initial -> with the correct =>, but I still get the same error.
class bh::roles::snaplocker()
{
  # stage => setup takes advantage of the setup run stage introduced by
  # puppetlabs-stdlib which is pulled in by puppet-control-bh/Puppetfile
  class { 'vpn::roles::vpn::client': stage => 'setup' }
  class { 'bh::profiles::move_archives': stage => 'setup' }
  #...
}
Looking more closely at the error message, I believe the cause is that puppetlabs-stdlib is introduced by the Puppetfile of the project that calls the module I am working on. I have deliberately avoided pulling puppetlabs-stdlib into the class I am working on, to avoid duplication, but apparently I need it... The module I am working on does not have a Puppetfile. Do I need to somehow include puppetlabs-stdlib in my sub module? If so, how should I do that? If not, how do I tell my sub module to use the instance declared in the parent module's Puppetfile?

Usually, you don't need any stage if you have correct class/resource dependencies.
From the "Run stages" documentation:
CAUTION: Due to these limitations, use stages with the simplest of classes, and only when absolutely necessary. A valid use case is mass dependencies like package repositories.
In your case, if you really want stages, you should add include stdlib::stages or explicitly declare the stage, e.g. stage { 'setup': }
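For reference, a minimal sketch of the second option, declaring the stage yourself; the class names are the ones from the question, and the ordering relative to the main stage is the usual pattern rather than anything mandated by stdlib:

# Typically in site.pp: stages must be declared before they are used
stage { 'setup':
  before => Stage['main'],
}

class bh::roles::snaplocker() {
  # Assign these classes to the 'setup' stage so they are applied first
  class { 'vpn::roles::vpn::client': stage => 'setup' }
  class { 'bh::profiles::move_archives': stage => 'setup' }
}

If you use include stdlib::stages instead, the setup stage (and several others) is declared for you, but puppetlabs-stdlib still has to be on the environment's modulepath, which is exactly what the control repository's Puppetfile takes care of; the sub module itself does not need its own Puppetfile.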


Conditionally Set Environment Azure DevOps

I am working with an Azure Pipeline in which I need to conditionally set the environment property. I am invoking the pipeline from a REST API call by passing Parameters in the body, as documented here. When I try to access that parameter at compile time to set the environment conditionally, though, the variable comes through as empty (I assume it is not accessible at compile time?)
Does anybody know a good way to solve for this via the pipeline or the API call?
After some digging I have found the answer to my question, and I hope it helps someone else in the future.
As it turns out, the Build REST API does support template parameters that can be used at compile time; the documentation just doesn't explicitly tell you. This is also supported by the Runs endpoint.
My payload for my request ended up looking like:
{
  "Parameters": "{\"Env\":\"QA\"}",
  "templateParameters": { "SkipApproval": "Y" },
  "Definition": {
    "Id": 123
  },
  "SourceBranch": "main"
}
and my pipeline consumed those values at compile time via the following abbreviated version of my pipeline:
parameters:
- name: SkipApproval
  default: ''
  type: string
...
${{ if eq(parameters.SkipApproval, 'Y') }}:
  environment: NoApproval-All
${{ if ne(parameters.SkipApproval, 'Y') }}:
  environment: digitalCloud-qa
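As mentioned above, the Runs endpoint supports template parameters too; a sketch of an equivalent request is below (the api-version and the refs/heads/main ref are examples, not taken from the original request):

POST https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=6.0-preview.1

{
  "resources": { "repositories": { "self": { "refName": "refs/heads/main" } } },
  "templateParameters": { "SkipApproval": "Y" }
}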
This is a common area of confusion for YAML pipelines. Runtime variables need to be accessed using a different syntax:
$[ variables.varName ]
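To illustrate the two syntaxes (the SkipApproval parameter is the one from the answer above; the approvalMode and isManual variables are made up for this example):

parameters:
- name: SkipApproval
  type: string
  default: ''

variables:
  # ${{ }} is resolved during template expansion (compile time)
  approvalMode: ${{ parameters.SkipApproval }}
  # $[ ] is evaluated at runtime, once the run has started
  isManual: $[ eq(variables['Build.Reason'], 'Manual') ]

steps:
- script: echo "approvalMode=$(approvalMode) isManual=$(isManual)"

Note that $[ ] expressions are only valid where the pipeline expects a runtime value, such as variable definitions and conditions, while ${{ }} can appear anywhere in the YAML because it is replaced before the pipeline runs.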
YAML pipelines go through several phases.
Compilation - This is where all of the YAML documents (templates, etc) comprising the final pipelines are compiled into a single document. Final values for parameters and variables using ${{}} syntax are inserted into the document.
Runtime - Run-time variables using the $[] syntax are plugged in.
Execution - The final pipeline is run by the agents.
This is a simplification; another explanation from Microsoft is a bit better:
- First, expand templates and evaluate template expressions.
- Next, evaluate dependencies at the stage level to pick the first stage(s) to run.
- For each stage selected to run, two things happen:
  - All resources used in all jobs are gathered up and validated for authorization to run.
  - Evaluate dependencies at the job level to pick the first job(s) to run.
- For each job selected to run, expand multi-configs (strategy: matrix or strategy: parallel in YAML) into multiple runtime jobs.
- For each runtime job, evaluate conditions to decide whether that job is eligible to run.
- Request an agent for each eligible runtime job.
...
This ordering helps answer a common question: why can't I use certain variables in my template parameters? Step 1, template expansion, operates solely on the text of the YAML document. Runtime variables don't exist during that step. After step 1, template parameters have been resolved and no longer exist.
[ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/runs?view=azure-devops]

Is additional context configuration required when upgrading cucumber-jvm from version 4 to version 6?

I am using cucumber-jvm to perform some functional tests in Kotlin.
I have the standard empty runner class:
@RunWith(Cucumber::class)
@CucumberOptions(features = [foo],
    glue = [bar],
    plugin = [baz],
    strict = true,
    monochrome = true)
class Whatever
The actual steps are defined in another class with the @ContextConfiguration Spring Framework annotation.
This class also uses other Spring features like @Autowired or @Qualifier:
@ContextConfiguration(locations = ["x/y/z/config.xml"])
class MyClass {
    ...
    @Before
    ...
    @Given("some feature file stuff")
    ...
    // etc
}
This all works fine with cucumber version 4.2.0; however, upgrading to version 6.3.0 breaks things. After updating the imports to match the new cucumber project layout, the tests now fail with this error:
io.cucumber.core.backend.CucumberBackendException: Please annotate a glue class with some context configuration.
It provides examples of what it means...
For example:
@CucumberContextConfiguration
@SpringBootTest(classes = TestConfig.class)
public class CucumberSpringConfiguration {}
Or:
@CucumberContextConfiguration
@ContextConfiguration( ... )
public class CucumberSpringConfiguration {}
It looks like it's telling me I can just add @CucumberContextConfiguration to MyClass.
But why?
I get the point of @CucumberContextConfiguration; it's explained well here. But why do I need it now with version 6 when version 4 got on fine without it? I can't see any feature that was deprecated and replaced by this.
Any help would be appreciated :)
Since the error matches exactly the error I was getting when running Cucumber tests with Spring Boot, I am sharing my fix.
One probable reason is that Cucumber can't find the CucumberSpringConfiguration class in the glue path.
Solution 1:
Move the CucumberSpringConfiguration class inside the glue path (which in my case was the steps package).
Solution 2:
Add the CucumberSpringConfiguration package path to the glue path.
In my project structure, the CucumberSpringConfiguration class was under a configurations package, so when I tried to run the feature file from the command prompt (mvn clean test) it threw the error:
"Please annotate a glue class with some context configuration."
So I applied solution 2, i.e. added the configurations package to the glue path in my runner class annotation, as sketched below.
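A rough sketch of what that runner annotation can look like in the question's Kotlin setup (the package and feature paths here are invented for illustration):

import io.cucumber.junit.Cucumber
import io.cucumber.junit.CucumberOptions
import org.junit.runner.RunWith

@RunWith(Cucumber::class)
@CucumberOptions(
    features = ["src/test/resources/features"],
    // include both the steps package and the package holding CucumberSpringConfiguration
    glue = ["com.example.steps", "com.example.configurations"]
)
class RunCucumberTest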
(The original answer also showed the contents of the CucumberSpringConfiguration class in a screenshot.)
One extra piece of info: to run the tests from the command prompt we need to include a suitable plugin in pom.xml (see the sketch below).
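That plugin was only shown as a screenshot in the original answer; for mvn clean test it is most likely the Maven Surefire plugin, so an assumed minimal configuration would be:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <!-- example version; use whatever matches your build -->
  <version>2.22.2</version>
</plugin>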
https://github.com/cucumber/cucumber-jvm/pull/1959 removed the context configuration auto-discovery. The author concluded that it hid user errors and removing it would provide more clarity and reduce complexity. It also listed the scenarios where the context configuration auto-discovery used to apply.
Note that it was introduced after https://github.com/cucumber/cucumber-jvm/pull/1911, which you had mentioned.
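In other words, with version 6 the configuration class has to be declared explicitly. A minimal sketch in the question's Kotlin setup (the XML location is the one from the question; the class must live in, or be added to, the glue path):

import io.cucumber.spring.CucumberContextConfiguration
import org.springframework.test.context.ContextConfiguration

@CucumberContextConfiguration
@ContextConfiguration(locations = ["x/y/z/config.xml"])
class CucumberSpringConfiguration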
Had the same error, but while running Cucumber tests from a jar built with Gradle.
The solution was to add a rule to the jar task that merges all files named "META-INF/services/io.cucumber.core.backend.BackendProviderService" (there can be several of them across the Cucumber libraries - cucumber-java, cucumber-spring).
For Gradle it is:
shadowJar {
    ....
    transform(AppendingTransformer) {
        resource = 'META-INF/services/io.cucumber.core.backend.BackendProviderService'
    }
}
For Maven something like this:
<transformers>
  <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
    <resource>META-INF/services/io.cucumber.core.backend.BackendProviderService</resource>
  </transformer>
</transformers>
A bit more explanation can be found in this answer.

Puppet exception handling?

I was wondering how one would do try/catch/throw style exception handling in a Puppet manifest. Here's how I wish Puppet would work ...
class simple {
  unless ( package { 'simple': ensure => present } ) {
    file { '/tmp/simple.txt':
      content => template( 'simple/simple.erb' ),
    }
  }
}
Thanks
I don't think there is exception handling of the programmatic kind you would like in Puppet. If you declare a resource, Puppet is expected to bring your machine to that state (package installed), and if it cannot, the run fails automatically.
One thing that you can do (and I don't recommend it), which is not the "Puppet way", is the following; a rough sketch of the approach is shown after the list:
- Create a custom fact (not a custom function, since functions are executed on the puppet master and you want this Ruby code to run on the puppet agent).
- Since a fact is plain Ruby code, you can use exception handling and all the usual programmatic things. You can install the package with a Unix command from your Puppet code and have some logic which, if it is not installed, exposes that as a fact value.
- Use that fact value to decide whether you want to create the file.
Also, if it is easier, you can write a bash script which does this logic and execute it from Puppet using an exec resource.
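A rough sketch of that fact-based approach, with every name invented for illustration (this is not a recommended pattern, just what the workaround above could look like):

# modules/simple/lib/facter/simple_package_installed.rb - custom fact, runs on the agent
Facter.add(:simple_package_installed) do
  setcode do
    # true if the package is already present
    system('dpkg -s simple > /dev/null 2>&1')
  end
end

# modules/simple/manifests/init.pp - use the fact instead of a try/catch
class simple {
  if ! $facts['simple_package_installed'] {
    file { '/tmp/simple.txt':
      content => template('simple/simple.erb'),
    }
  }
}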
Hope it helps.

How to get a Gradle detached configuration to use a resolution strategy from the dependency-lock plugin?

We are using the gradle-dependency-lock-plugin. A global.lock file is generated, containing a list of all the dependencies used by our project.
In one of our Gradle tasks, a detached configuration is created and used to resolve an artifact. What I noticed is that it resolves it to the latest version in Nexus, and not to the version in the global.lock file.
For example, global.lock contains some-library-10.0.0-ci.3 but the resolved artifact is at some-library-10.0.0-ci.5.
This appears to be a known problem with detached configurations - they don't use the resolution strategy - as discussed here.
I was reading the source code of the dependency-lock plugin, and it appears to set the resolutionStrategy for all configurations in DependencyLockPlugin.groovy's applyLock method:
project.configurations.all {
    resolutionStrategy {
        force lockForces.toArray()
    }
}
I was hoping to set the resolution strategy of the detached configuration doing this:
def dep = dependencies.create( elastic( "$notation:$version" ) )
def detachedConf = configurations.detachedConfiguration( dep ).setTransitive( false )
detachedConf.resolutionStrategy {
    configurations.all.resolutionStrategy
}
def resolvedArtifacts = detachedConf.resolvedConfiguration.resolvedArtifacts
assert resolvedArtifacts.size() == 1 : 'Only one artifact should be present'
def resolvedArtifact
resolvedArtifacts.each { resolvedArtifact = it }
However, Gradle complains with:
groovy.lang.MissingPropertyException: Could not find property 'resolutionStrategy' on configuration container.
I switched it to use configurations.default.resolutionStrategy and configurations.compile.resolutionStrategy, but in either case it continued to access the latest version from Nexus.
How do I properly set the resolution strategy of the detached configuration so that it uses the same resolution strategy as set by the dependency-lock plugin?
The correct solution entailed setting the forcedModules of the detached configuration's resolution strategy to those of one of the other configurations:
detachedConf.resolutionStrategy.forcedModules = configurations.install.resolutionStrategy.forcedModules
install is the configuration we use for accessing resolved artifacts, but it could be any of the other configurations since they all use the same resolution strategy.
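Put together with the snippet from the question, the working version looks roughly like this (elastic, notation and version are the question's own placeholders, and install is assumed to be a configuration the dependency-lock plugin has already processed):

def dep = dependencies.create( elastic( "$notation:$version" ) )
def detachedConf = configurations.detachedConfiguration( dep ).setTransitive( false )

// copy the forced versions the dependency-lock plugin applied to the regular configurations
detachedConf.resolutionStrategy.forcedModules = configurations.install.resolutionStrategy.forcedModules

def resolvedArtifacts = detachedConf.resolvedConfiguration.resolvedArtifacts
assert resolvedArtifacts.size() == 1 : 'Only one artifact should be present'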

drools linkage error with multiple threads

I have a JUnit test case which creates two threads to run an application. In this application there is a method updateStatus which is used to fire Drools rules, and in the rules file I have some functions like the following:
function Object updateItem() {
    .....
}
function boolean isNullOrEmpty(Object obj) {
    ...
}
function Object getValueFromFact(Object obj) {
    ....
}
I restarted Tomcat and ran this unit test, but one thread failed with this error:
org.apache.cxf.interceptor.Fault: loader (instance of org/drools/rule/JavaDialectRuntimeData$PackageClassLoader): attempted duplicate class definition for name: "com/icil/sofs/booking/rules/GetValueFromFact"
'getValueFromFact' is a function defined in the rules file.
Running the test again without restarting Tomcat produces no error, and a third run without restarting Tomcat is also fine.
After more attempts, I found that the 'duplicate class' error only happens on the first run after restarting Tomcat.
I also found that on the first run the two threads execute knowledgeSession.execute() at the "same time", whereas on the 2nd and 3rd runs the two threads run knowledgeSession.execute() in sequence.
So why does the 'duplicate class definition' error always happen on the first run after restarting Tomcat?
And why does the error name the function 'getValueFromFact' (the 3rd function in the rules file) and not the first function, 'updateItem'?
Thanks in advance!
The error is thrown because multiple threads run execute() and load the functions at the 'same time'.
When addKnowledgePackages() is called, Drools loads the rules, but the functions may not be loaded at that point; they may only be loaded when execute() is called.
After debugging, I found that Drools loads the functions during addKnowledgePackages() if the rule conditions use MVEL expressions instead of Java expressions.
I am using Drools 5.5.0.Final and cannot upgrade to 6.0 at present because 6.0 is very different, but this problem needed to be fixed as soon as possible, so my workaround is:
Make sure addKnowledgePackages() runs when Tomcat starts, e.g. put it in a static block (a rough sketch is shown below), and change your rules to use MVEL expressions rather than Java expressions.
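A rough sketch of the static-block idea, assuming the Drools 5.x knowledge-api imports and an invented classpath location for the rules file; the only point is that the knowledge base (and with it the compiled DRL functions) is built exactly once, before any thread calls execute():

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;

public final class RuleEngineHolder {

    // built once when the class is loaded, e.g. triggered at Tomcat startup
    public static final KnowledgeBase KNOWLEDGE_BASE;

    static {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        // "rules/booking.drl" is a placeholder path for the rules file
        kbuilder.add(ResourceFactory.newClassPathResource("rules/booking.drl"), ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }
        KNOWLEDGE_BASE = KnowledgeBaseFactory.newKnowledgeBase();
        KNOWLEDGE_BASE.addKnowledgePackages(kbuilder.getKnowledgePackages());
    }

    private RuleEngineHolder() {}
}

Request threads then only create sessions from the already-built knowledge base, for example KNOWLEDGE_BASE.newStatelessKnowledgeSession().execute(facts), so no class definitions are generated concurrently.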
