In Bitbucket Pipelines, can manual pipelines present descriptions in the web interface?

In Bitbucket Pipelines, for manually run pipelines (i.e. "custom pipelines") where you use the web UI to set variables, is it possible to insert any documentation, for example so that the UI presents a description above or alongside the input form for a variable? (Or are you limited to naming the pipeline and optionally giving each variable a default value and a set of allowed values?)
I don't want other users of the same pipeline (running it from the web UI) to misinterpret what a variable expects, or indeed what the pipeline will do, and I doubt they will always refer to the source code itself to find comments.

Not that I know of.
The documentation only describes the name, default, and allowed-values attributes: https://support.atlassian.com/bitbucket-cloud/docs/configure-bitbucket-pipelinesyml/#variables
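For reference, this is roughly everything the schema lets you attach to a custom-pipeline variable (the pipeline and variable names below are illustrative); note the absence of any description-style field:
pipelines:
  custom:
    deploy:
      - variables:
          - name: Environment
            default: staging
            allowed-values:
              - staging
              - production
      - step:
          script:
            - echo "Deploying to $Environment"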
But I think the root of your issue boils down to a general programming problem: adequately naming variables. Using comments to make up for bad variable names is among the top-ten programming bad practices.
Variable names should always be unambiguous and self-explanatory for anyone reading them.

Related

Is there any way to number the scenarios and their steps?

As we write feature files containing several scenarios, each made up of several closely worded step definitions with closely related meanings, I am thinking of numbering them, so that, for example, step 3 of scenario 2 would be named S23. I tried doing it like...
Scenario: This is my scenario
  Given S21the user has some thing
  When S22the user does some thing
  Then S23we can make sure some thing is anything.
This is supposed to help me quickly identify the corresponding step definition implementation methods, the console log messages linked to the step definitions, and so on.
But this resulted in the numbering S21, S22, S23, etc. getting treated as integer arguments in the auto-generated step definitions. How can I avoid that?
Cucumber is a communication tool, not a test scripting language. Would your business users be able to understand this notation? Would it help them make sense of the scenarios? This kind of approach defeats the purpose of Cucumber as a living documentation and communication tool, and should be avoided. If your step definitions are ambiguous, add some more context (in business readable language) to make them less so.
Your IDE should help you step between Gherkin scenarios and the source code; you shouldn't have to add extra information to the scenarios for this.
You also don't need to use the auto-generated step definitions - they are just there as a convenience and you can write your own.
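For example, with Cucumber-JVM (assuming that is your flavour; the idea carries over to other implementations) you could match the literal text yourself instead of accepting the generated snippet:
import io.cucumber.java.en.Given;

public class MyStepDefinitions {
    // Matches the step text literally, so "21" is no longer
    // captured as an {int} argument by a generated snippet.
    @Given("S21the user has some thing")
    public void s21TheUserHasSomeThing() {
        // implementation goes here
    }
}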

How can I split up resource creation into different modules with Terraform?

I would like to split up resource creation using different modules, but I have been unable to figure out how.
For this example I want to create a web app (I'm using the azurerm provider). I've done so by adding the following to a module:
resource "azurerm_app_service" "test" {
name = "${var.app_service_name}"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
app_service_plan_id = "${var.app_service_plan_id}"
app_settings {
SOME_SETTING = "${var.app_setting}"
}
}
It works, but I'd like to have a separate module for applying the app settings, so:
Module 1 creates the web app
Module 2 applies the app settings
Potential module 3 applies something else to the web app
Is this possible? I tried splitting it up (having two modules defining the web app, but only one of them containing the app settings), but I get an error stating that the web app already exists, so Terraform doesn't seem to understand that I am trying to manipulate the same resource.
Details on how it's going to be used:
I am going to provide a UI for the end user, who can choose a stack of needed resources and tick a range of options desired for their project, along with filling out required parameters for the infrastructure.
Once submitted, the parameters are applied to a Terraform template. It is not feasible to have a template for each permutation of options, so it will have to include different modules depending on the chosen options.
For example: if a user ticks web app, Cosmos DB and application insights the Terraform template will include these modules (using the count trick to create conditions). In this example I'll need to pass the instrumentation key from application insights to the web app's application settings and this is where my issue is.
If the user didn't choose Application Insights I don't want that setting on the web app, and that is why I need to gradually build up a Terraform resource. Also, depending on the type of database the user chose, different settings will be added to the web app's settings.
So my idea is to create a module to apply certain application settings. I don't know if this is possible (or whether a better way exists), hence my question.
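To illustrate, the wiring I am after looks roughly like this (module names and outputs are made up for the example):
module "application_insights" {
  source = "./modules/application_insights"
}

module "web_app" {
  source      = "./modules/web_app"
  app_setting = "${module.application_insights.instrumentation_key}"
}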
The best way for you to do this would be to wrap Terraform with a script in Bash or whatever scripting language you want (e.g. Python). Then create a template (in Bash, or Python with Jinja2) to generate the resource with whatever options the customer selected for the settings, run the template to generate your Terraform code, and then apply it.
I've done this with S3 buckets quite a bit. In Terraform 0.12, you can generate templates in Terraform itself.
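A minimal sketch of the wrapper idea (the render script and file names here are invented for illustration):
#!/usr/bin/env bash
# Render a Terraform configuration from the customer's selections,
# then apply it. render_template.py is a hypothetical Jinja2 wrapper.
python render_template.py --options options.json > main.tf
terraform init
terraform apply -auto-approve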
I know this is 5 months old, but I think part of the situation here is that the way you describe splitting it up does not exactly fit how modules are intended to be used.
For one thing, you cannot "build up a resource" dynamically, as Terraform is by design declarative. You can only define a resource and then dynamically provide its predefined inputs (including its count, to activate or deactivate it).
Secondly, modules are in no way necessary to turn portions of the stack on and off via configuration. Modules are simply a way of grouping sets of resources together for containment and reuse. Any dynamism available to you with a stack consisting of a complex hierarchy of modules would be equally available with essentially the same code all in a giant monolithic blob. The problem with the monolith is just that it would be a mess, and you wouldn't be able to reuse pieces of it for other things.
Finally, using a module to provide settings is not really what modules are for. Yes, theoretically you could create a module with a null_data_source and then use it purely as some kind of shim to provide settings, but that would likely be a convoluted, unnecessary approach to something better done by simply providing a variable the way you have already shown.
You probably will need to wrap this in some kind of Bash (etc.) script at the top level, as mentioned in other answers, and this is not a limitation of Terraform. For example, once you have your modules, you would want to keep all currently applied stacks (for each customer or whatever) in some kind of composition repository. How will you create the composition stacks for those customers after they fill out the setup form? You will have to do that with some file-creation automation, which is not what Terraform is there for. Terraform is there to execute stacks that exist. It's not a limitation of Terraform that you have to create the .tf files with an external text editor to begin with, and it's not a limitation that in a situation like this you would use some external scripting to dynamically create the composition stacks for the customer; it's just part of the way you would use automation to get things ready for Terraform to do its job of applying the stacks.
So, you cannot avoid this external script tooling, and you would probably use it to create the folders for the customer specific composition stacks (that refer to your modules), populate the folders with default files, and to create a .tfvars file based on the customers input from the form. Then you could go about this multiple ways:
You could have the .tfvars file be the only difference between customer composition stacks. Whatever modules you do or don't want to use would be activated or deactivated by the "count trick" you mention, given variables from .tfvars (see the sketch after these two options). This way has the advantage of being easy to reason about, as all customer composition stacks are the same thing, just configured differently.
You could have the tooling actually insert the module definitions that you want into the files of the composition stacks. This will create more concise composition stacks, with fewer janky "count tricks", but every customer's stack would be its own weird snowflake.
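To sketch the "count trick" from the first option (variable and resource names are illustrative), a resource inside a module can be switched on and off by a .tfvars-driven variable:
variable "enable_application_insights" {
  default = false
}

variable "location" {}
variable "resource_group_name" {}

resource "azurerm_application_insights" "main" {
  count               = "${var.enable_application_insights ? 1 : 0}"
  name                = "example-appinsights"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  application_type    = "web"
}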
As far as splitting up the modules goes, be aware that there is a whole art/science/philosophy about this. I refer you to this resource on IaC design patterns and this talk on the subject (the relevant bit starts at 19:15). Those give general context on the subject.
For your particular case, you would basically want to put all the smallest divisible functional chunks (that might be turned on or off) into their own modules, each to be referenced by higher-level consuming modules. You mention it not being feasible to have a template for every permutation; again, that is thinking about it wrong. You would aim for something that is more of a tree of modules and combinations. At the top level you would have (Bash etc.) tooling that creates the new customer's composition folder and their .tfvars file, and drops in the same composition stack that will be the top of the "module tree". Each module that represents an optional part of the customer's stack will take a count. Those modules will either have functional resources inside or will be intermediate modules, themselves instantiating a configurable set of alternative sub-modules containing resources.
But it will be up to you to sit down and think through the design of a "decision tree", implemented as a hierarchy of modules, which covers all the permutations that are not feasible to create separate monolithic stacks for.
Terraform 0.12's dynamic nested blocks help specifically with the one aspect of having or not having a declared block like app_settings. Since you cannot "build up a resource" dynamically, the only alternative in a case like that used to be an intermediate module that declares the resource in multiple ways (with and without the app_settings block), one of which is chosen via the "count trick" or other input configuration. That sort of thing is just not necessary now that dynamic blocks exist.
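For example (a sketch only; in the azurerm provider app_settings is actually a map argument, so the dynamic block is shown here on the nested ip_restriction block, and attribute names may vary by provider version):
variable "allowed_ips" {
  type    = list(string)
  default = []
}

resource "azurerm_app_service" "example" {
  name                = var.app_service_name
  location            = var.location
  resource_group_name = var.resource_group_name
  app_service_plan_id = var.app_service_plan_id

  site_config {
    # One ip_restriction block is emitted per element of
    # var.allowed_ips; none at all when the list is empty.
    dynamic "ip_restriction" {
      for_each = var.allowed_ips
      content {
        ip_address = ip_restriction.value
      }
    }
  }
}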

How to validate text box in gherkin language?

I need to validate a "text box" which does not allow special characters, numbers greater than 10000, or letters.
So my question is: how do I write this in Gherkin?
Gherkin is not a programming language and has no validations. You cannot inject variables into it. However, you can perform the validation in the step definition file and tie it to the Gherkin step.
Scenario: I verify the characters are not more than 100
  Given I see the text box
  And I verify, the text box does not contain characters more than "100"
In the step definition file, the check would be something like:
characters.length <= arg
where arg is the argument you pass inside the double quotes in the Gherkin step.
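A sketch of that step definition with Cucumber-JVM (the text-box lookup is a placeholder for whatever your environment provides, e.g. Selenium):
import io.cucumber.java.en.Then;
import static org.junit.Assert.assertTrue;

public class TextBoxSteps {
    // {string} captures the value passed inside the double quotes
    // in the Gherkin step.
    @Then("I verify, the text box does not contain characters more than {string}")
    public void verifyTextBoxLength(String max) {
        String characters = readTextBox();
        assertTrue(characters.length() <= Integer.parseInt(max));
    }

    private String readTextBox() {
        // Hypothetical: fetch the text box value from the application
        // under test, e.g. via a Selenium WebDriver lookup.
        return "";
    }
}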
Your task is to
Find the value in the text box
The way to do this varies from environment to environment, Selenium may be a good way to interact with your system if it is a web application
Save it in some variable
Validate it against some known value in a step
I have written a few blog posts on how to use Cucumber. This blog post from 2015 may be a reasonable start. The cucumber version is a bit outdated. The process of implementing steps is still valid.
Executable specifications (whether they are in Gherkin format or not) are meant to describe behaviour from the business people's perspective. I am pretty confident that not a single business person would talk about how a single text box should behave.
My advice is to see what the actual business value is about and write the scenario from that perspective. Then the actual testing on this particular text box might not be described in the scenario, but it can be part of the underlying steps implementation.
In other words, should the text box suddenly allow numbers up to a million, then the business value probably doesn't change. Therefore the scenario should not change, but the test code behind it might.

Dimensioning Family Instances.

I would like to dimension the family instances. What reference should I provide while creating the dimension?
I have tried several approaches, but the dimensions are not deleted when I delete the family instances. This means that the dimensions are not getting attached to the family instances.
Please help.
Have you found a way to achieve this manually through the user interface? That is usually the best place to start when tackling a Revit API task. If you can solve it through the UI, the chances are good it can also be automated. If no UI solution is found, automation is usually impossible as well.
I would analyse the exact differences caused in the Revit database on the elements involved and their parameters by executing the manual modification. Once you have discovered exactly what is changed by the manual UI interaction, you can probably replicate the same changes programmatically through the API. Here is a more exhaustive description of how to address a Revit API programming task:
http://thebuildingcoder.typepad.com/blog/2017/01/virtues-of-reproduction-research-mep-settings-ontology.html#3

run puppet resource types only when file has changed

I feel like this is a very common scenario but yet I haven't found a solution that meets my needs. I want to create something that works like below. (Unfortunately set_variable_only_if_file_was_changed doesn't exist)
file { 'file1':
  path   => "fo/file1",
  ensure => present,
  source => "puppet:///modules/bar/file1",
  set_variable_only_if_file_was_changed => $file_change = true
}

if $file_change {
  file { 'new file1':
    ...
  }
  exec { 'do something1':
    ...
  }
  package { 'my package':
    ...
  }
}
I have seen how an exec can be suppressed using refreshonly => true, but with files it seems harder. I would like to avoid putting the code into a .sh file and executing it through an exec.
Puppet does not have the feature you describe. Resources that respond to refresh events do so in a manner appropriate to their type; there is no mechanism to define custom refresh behavior for existing resources. There is certainly no way to direct which resources will be included in a node's catalog based on the results of applying other resources -- as your pseudocode seems to suggest -- because the catalog is completely built before anything in it is applied.
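For the exec part specifically, the closest supported pattern to your pseudocode is an ordinary refresh relationship (a sketch; the command and paths are illustrative):
file { 'file1':
  ensure => present,
  path   => '/foo/file1',
  source => 'puppet:///modules/bar/file1',
  notify => Exec['do something1'],
}

exec { 'do something1':
  command     => '/usr/local/bin/do-something',
  refreshonly => true,
}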
Furthermore, that is not nearly as constraining as you might suppose if you're coming to Puppet with a substantial background in scripting, as many people do. People who think in terms of scripting need to make a mental adjustment to Puppet, because Puppet's DSL is definitely not a scripting language. Rather, it is a language for describing the construction of a machine state description (a catalog). It is important to grasp that such catalogs describe details of the machine's target state, not the steps required to take it from its current state to some other state. (Even Execs, a possible exception, are most effective when used in that way.) It is very rare to be unable to determine a machine's intended state from its identity and initial state. It is uncommon even for it to be difficult to do so.
How you might achieve the specific configuration you want depends on the details. Often, the answer lies in user-defined facts. For example, instead of capturing whether a given file was changed during catalog application, you provide a fact that you can interrogate during catalog building to evaluate whether that file will be changed, and adjust the catalog details appropriately. Sometimes you don't need such tests in the first place, because Puppet takes action on a given resource only when changes to it are needed.
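As a sketch of that fact-based approach (the fact name is invented; it would be supplied by a custom or external fact evaluated on the node before the catalog is built):
# Interrogated during catalog building, not after File['file1'] is applied.
if $facts['bar_file1_needs_update'] {
  package { 'my package':
    ensure => installed,
  }
}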
