I would like to set my crontab in a convergent way using Chef. That is, I'd like to specify a list of cronjobs in my cookbook, and have Chef modify my crontab so that it includes only those entries, creating and deleting lines from the crontab as necessary.
How can I do this?
The built-in cron resource doesn't seem fit for the task: its resources are individual cron jobs, and each takes either a :create or a :delete action. I can't have it automatically remove entries from the crontab when I remove them from my cookbook unless I explicitly include a :delete action, and I don't want to have to list :delete actions for every cron job I've ever removed from my cookbook.
The cron cookbook from the Chef Supermarket seems unlikely to solve this problem either, since it claims to support the same interface as the built-in cron resource.
This distinction is not explicitly named, but there are two general schools of thought in Chef resource design: "managed resources" vs. "managed collections". With a managed collection, you convergently define the entire state of the collection rather than a single object within it. This collection approach seems to be the one you are looking for, but it is generally avoided by the Chef community (and all core code) because it is extremely error-prone. There are a lot of reasons an object might not be visible within the Chef run (partial runs, composite runs, etc.), and as the saying goes, "absence of evidence is not evidence of absence". That said, some users (Facebook) have used the collections pattern to great effect thanks to heavy code review and training about the pitfalls. Check out the zap cookbook (https://github.com/nvwls/zap) for an implementation of a zap_crontab resource that might fit your needs.
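For illustration, a recipe using that pattern might look like the following sketch. The resource name zap_crontab comes from the cookbook's README; the job names and command are illustrative, so verify the exact interface against the cookbook before relying on it:

```ruby
# Declare the cron entries that should exist, using the normal cron resource.
cron 'nightly-backup' do
  minute '0'
  hour   '2'
  command '/usr/local/bin/backup.sh'
end

# zap_crontab then removes any entries in root's crontab that were NOT
# declared via cron resources in this Chef run -- the "managed collection".
zap_crontab 'root'
```

Note that this inherits the pitfalls described above: a partial run that skips a recipe declaring cron resources will make those entries look unmanaged, and zap will delete them.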
In Bitbucket Pipelines, for manually run pipelines (i.e. "custom pipelines") where you use the web UI to set variables, is it possible to insert any documentation? For example, so that the UI presents a description above or alongside the input form for a variable? (Or are you limited only to being able to name the pipeline and optionally give the variables each a default value and a set of allowed values?)
I don't want other users of the same pipeline (from the web UI) to misinterpret what a variable expects, or indeed what the pipeline will do, and I doubt they will always refer to the source code itself to find comments.
Not that I know of.
The documentation describes only the name, default, and allowed-values attributes: https://support.atlassian.com/bitbucket-cloud/docs/configure-bitbucket-pipelinesyml/#variables
But I think the root of your issue boils down to a general programming problem: naming variables adequately. Using comments to make up for bad variable names is among the best-known programming bad practices.
Variable names should always be unambiguous and self-explanatory for anyone reading them.
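To make that concrete, descriptive names plus constrained values are the main tools you have. A custom pipeline definition might look like this sketch (the pipeline name, variable name, and script are illustrative):

```yaml
pipelines:
  custom:
    deploy-to-environment:            # this name is what the "Run pipeline" UI shows
      - variables:
          - name: TARGET_ENVIRONMENT  # self-explanatory; needs no extra documentation
            default: staging
            allowed-values:           # constrains what the UI will accept
              - staging
              - production
      - step:
          script:
            - ./deploy.sh "$TARGET_ENVIRONMENT"
```

A well-chosen name and a tight allowed-values list together remove most of the ambiguity a description field would otherwise have to explain.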
I have an issue on GitLab, #1. When this issue was created, it contained two tasks.
Is there a way to mention/reference/close one of the two tasks in a git commit?
Not yet, but at least you can define tasks in an issue now.
This comes with GitLab 15.3 (Aug. 2022)
Create tasks in issues
Tasks provide a robust way to refine an issue into smaller, discrete work units.
Previously in GitLab, you could break down an issue into smaller parts using markdown checklists within the description.
However, these checklist items could not be easily assigned, labeled, or managed anywhere outside of the description field.
You can now create tasks within issues from the Child Items widget.
Then, you can open the task directly within the issue to quickly update the title, set the weight, or add a description.
Tasks break down work within projects for GitLab Free and increase the planning hierarchy for our GitLab Premium customers to three levels (epic, issue, and task).
In our next iteration, you will be able to add labels, milestones, and iterations to each task.
Tasks represent our first step toward evolving issues, epics, incidents, requirements, and test cases to work items.
If you have feedback or suggestions about tasks, please comment on this issue.
See Documentation and Epic.
I am reading a lot about Gherkin, and I had read that it is not good to repeat steps, and that the "Background" keyword exists for this purpose. But in the example on this page, the same "Given" is repeated again and again. Am I doing something wrong? I'd like to know your opinion about it:
Like with several things, this is a topic that will generate different opinions. In this particular example I would have moved the "Given that I select the post" to the Background section, as it seems to be a prerequisite to all scenarios in this feature. Of course this would leave the scenarios in the feature without an actual Given section, but those steps would be incorporated from the Background section on execution.
I have also seen cases where the decision to move steps to the Background is a trade-off between having more or fewer feature files and how these are structured. For example, if there are 10 scenarios for a particular feature with a lot of similar steps between them, but 1 or 2 scenarios do not require a particular step, then those 1 or 2 scenarios would have to be moved into a new feature file in order to have the exact same steps in the Background section of the original feature.
Of course it is correct to keep the scenarios like this. From a tester's perspective, scenarios/test cases should run independently; therefore, you can keep these tests separate for each piece of functionality.
But if you are doing integration testing, then some of these test cases can be merged, so you can cover multiple test cases in one scenario.
And since the "Given" statement is repeated, you can put it in the Background, so you don't have to call it in each scenario.
Note: these separate scenarios will be handy when you run the scripts selectively with annotation tags, when you just have to check a specific functionality or a bug fix.
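To illustrate, the repeated "Given" from the example can be hoisted into a Background like this (a sketch; the feature and step names are illustrative, based on the "Given that I select the post" step mentioned above):

```gherkin
Feature: Post actions

  Background:
    Given that I select the post

  Scenario: Comment on the post
    When I write a comment
    Then the comment appears under the post

  Scenario: Share the post
    When I share the post
    Then my followers can see the post
```

Each scenario still runs independently; the Background steps are simply executed before every scenario in the feature.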
I feel like this is a very common scenario, yet I haven't found a solution that meets my needs. I want to create something that works like the below. (Unfortunately, set_variable_only_if_file_was_changed doesn't exist.)
file { 'file1':
  path   => 'fo/file1',
  ensure => present,
  source => 'puppet:///modules/bar/file1',
  # pseudocode -- this attribute does not exist:
  set_variable_only_if_file_was_changed => $file_change = true,
}

if $file_change {
  file { 'new file1':
    ...
  }
  exec { 'do something1':
    ...
  }
  package { 'my package':
    ...
  }
}
I have seen how an exec can be suppressed using refreshonly => true, but with files it seems harder. I would like to avoid putting the code into a .sh file and executing it through an exec.
Puppet does not have the feature you describe. Resources that respond to refresh events do so in a manner appropriate to their type; there is no mechanism to define custom refresh behavior for existing resources. There is certainly no way to direct which resources will be included in a node's catalog based on the results of applying other resources -- as your pseudocode seems to suggest -- because the catalog is completely built before anything in it is applied.
Furthermore, that is not nearly as constraining as you might suppose if you're coming to Puppet with a substantial background in scripting, as many people do. People who think in terms of scripting need to make a mental adjustment to Puppet, because Puppet's DSL is definitely not a scripting language. Rather, it is a language for describing the construction of a machine state description (a catalog). It is important to grasp that such catalogs describe details of the machine's target state, not the steps required to take it from its current state to some other state. (Even Execs, a possible exception, are most effective when used in that way.) It is very rare to be unable to determine a machine's intended state from its identity and initial state. It is uncommon even for it to be difficult to do so.
How you might achieve the specific configuration you want depends on the details. Often, the answer lies in user-defined facts. For example, instead of capturing whether a given file was changed during catalog application, you provide a fact that you can interrogate during catalog building to evaluate whether that file will be changed, and adjust the catalog details appropriately. Sometimes you don't need such tests in the first place, because Puppet takes action on a given resource only when changes to it are needed.
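As a sketch of that fact-based approach (every name, path, and checksum here is hypothetical), a custom fact could report during fact collection whether the file is out of date, so the decision is available while the catalog is being built:

```ruby
# lib/facter/file1_needs_update.rb -- hypothetical custom fact
require 'digest'

Facter.add(:file1_needs_update) do
  setcode do
    path     = '/etc/foo/file1'                        # illustrative path
    expected = 'd41d8cd98f00b204e9800998ecf8427e'      # checksum of the desired content
    # True when the file is missing or differs from the desired content,
    # i.e. when catalog application will change it.
    !File.exist?(path) || Digest::MD5.file(path).hexdigest != expected
  end
end
```

The manifest can then branch during catalog building with something like `if $facts['file1_needs_update'] { ... }`, declaring the extra file, exec, and package resources only when the change is actually pending.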
I'd like to have my testers be able to organize their Given (or When or Then) steps in any order. This means the Given steps will be accumulating actions to take (database insertions, page visits, etc). Before the When steps execute, I'd like to execute the accumulation of actions to take from the Given steps. Is there a hook to do that?
I don't know of a hook to achieve what you want, but I believe the underlying problem is that you are not cuking your scenarios properly.
It sounds as though you (it would have helped if you'd included an example scenario!) are writing imperative instead of declarative scenarios. See here for examples of imperative and declarative scenarios.
Also scenarios should be written in a technology-agnostic way so that anyone in the business can understand them, hence you should not include steps which detail "database insertion" actions.
If you were to write your scenario in a declarative fashion (i.e. detailing what action you want to execute without detailing exactly how that action will be executed) then there would be no need to execute an "accumulation of actions".
Another benefit of declarative scenarios is that they are more explicit in stating what the scenario is trying to achieve, e.g. with the following:
When I enter "email@domain.com" in "email"
And I enter "password1" in "password"
And I tap "login"
A reader has to deduce what the purpose of these steps is, whereas with:
Given I login using valid credentials
it's clear what the steps' intent is.
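To make that concrete, the imperative details can live in a single step definition behind the declarative step. This is a sketch using Capybara-style helpers; the field names, button label, and credentials are illustrative:

```ruby
# features/step_definitions/login_steps.rb -- hypothetical step definition
Given('I login using valid credentials') do
  fill_in 'email',    with: 'user@example.com'  # illustrative credentials
  fill_in 'password', with: 'password1'
  click_button 'login'
end
```

The feature file stays technology-agnostic and readable by the business, while the "how" is encapsulated where testers can change it without touching the scenarios.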