I've been trying to find an answer to this question about the equivalent of Terraform modules in Pulumi; the closest I've found is this blog post. Bear in mind that I'm also a beginner with Pulumi.
With Terraform, you can create a git repository containing all of your modules, version it, and pull it into various other git repositories using source = "git#github.com:xyz". Terraform also allows you to turn resources on and off based on conditionals such as region, account number, or environment (using the count meta-argument on modules and resources).
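For example, roughly like this (a hypothetical repo and variable; note that count on module blocks requires Terraform 0.13+):

# Only create the module's resources in production (hypothetical repo/module names)
module "logging" {
  source = "git::https://github.com/xyz/terraform-modules.git//logging?ref=v1.2.0"
  count  = var.environment == "prod" ? 1 : 0
}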
Apparently Pulumi does not have this concept; it looks like you need to duplicate your code in each repository or create a giant monolithic repository containing all of your code. I'm also wondering about the best practice for feature flags: turning resources on and off for each specific stack, and what sort of conditionals you would use for this.
Thanks in advance for your insights!
Broadly, you should create libraries in your language of choice and put reusable functions, classes, and components in there. For example, if you use TypeScript, you could create an NPM module (public or private) with any code that you want to reuse across projects and teams.
More specifically, if you are looking for a way to combine multiple resources into a reusable higher-level abstraction, you can implement it as a Component Resource. A component would accept inputs, instantiate a few resources in its constructor, and return output values. You can still package a component (or multiple components) as a reusable library.
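For illustration, here is a minimal sketch of such a component in TypeScript (the component name, the wrapped S3 bucket, and the output are assumptions made up for this example, not a prescribed pattern):

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// Hypothetical component that wraps a resource behind a small, reusable API.
export class StaticSite extends pulumi.ComponentResource {
    public readonly bucketName: pulumi.Output<string>;

    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("mycompany:web:StaticSite", name, {}, opts);

        // Child resources are parented to the component so they are grouped in the stack.
        const bucket = new aws.s3.Bucket(`${name}-bucket`, {}, { parent: this });

        this.bucketName = bucket.bucket;
        this.registerOutputs({ bucketName: this.bucketName });
    }
}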
Pulumi also allows you to create multi-language components, where you implement a component in one language and then publish it for use from all supported languages. You can ship a multi-language component as a package in the Pulumi Registry to simplify discovery and installation. Read more in Pulumi Packages and multi-language Components, and see other components in the Registry.
I wrote an article about implementing feature flags with Pulumi here. In summary, you should use the Pulumi configuration system. You can store configuration values in the stack settings file, which is automatically named Pulumi.<stack-name>.yaml. You should split resources into their own Python modules, which you can then optionally import based on the settings in the config files.
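A minimal sketch of that pattern (the config key and module name are made up for illustration):

# __main__.py
import pulumi

config = pulumi.Config()

# Feature flag read from Pulumi.<stack-name>.yaml, set with: pulumi config set enable_cdn true
if config.get_bool("enable_cdn"):
    # Importing the module declares its resources only for stacks that opt in.
    import cdn  # hypothetical module containing the CDN resources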
I want to create CI on GitHub Actions for QA automation, but multiple languages are used to install dependencies. Can I use Node.js and Golang in the same file?
I read the GitHub Actions documentation, but there is a configuration example for each language, not both. Any reference or idea I can use?
In short, you write a manifest file (in YAML) that tells the GitHub Actions build agent(s) to execute the commands you want automatically. Nothing in it is bound to a single programming language.
You see per-language samples/tutorials simply because that's how new users/developers get started with a CI/CD system, and it is easier to write up the necessary steps when focusing on the ecosystem of a single programming language.
The underlying GitHub Actions build machines (if managed by GitHub), however, have almost everything pre-installed, so of course you can use Node.js and Golang tools in the same manifest and you don't need any specific reference.
If you like, open the runner image pages to see what tools are preinstalled.
Try combining the steps for both languages into a single manifest, and you will see how it works out.
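For illustration, a minimal workflow sketch (the job name, tool versions, and test commands are assumptions; adjust them to your repository) that sets up both Node.js and Go in one job:

name: qa-automation
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Node.js toolchain
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      # Go toolchain
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      # Node.js dependencies and tests
      - run: npm ci
      - run: npm test
      # Go dependencies and tests
      - run: go mod download
      - run: go test ./...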
We have multiple AWS modules in git, and when we use a module in another project we specify the path of the module in git as the source, like this:
module "module_name" {
source = "git::https://gitlab_domain_name.com/terraform/modules/aws/module_name.git?ref=v1.0.0"
...
}
I want to know if there is a benefit to using a Terraform private registry to store our modules, similar to how, when developing in Java, we use a repository to store packaged JARs, or how we use a registry for Docker images.
Yes, there are benefits to a private registry. Namely, you can add a description, documentation, and examples, and you get a better visual representation of what the module does: its inputs, outputs, and resources.
Apart from that, in terms of functionality the module behaves the same way. A Java registry (e.g. Nexus), for example, makes sense because you do not want to force everyone to build the libraries themselves (maybe they cannot at all), so having a place where pre-built libraries are stored is valuable. That reasoning does not apply to Terraform modules, since nothing is compiled.
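To illustrate, the only thing that changes on the consumer side is the source reference (the registry host and namespace below are placeholders):

# Git source, as in the question
module "module_name" {
  source = "git::https://gitlab_domain_name.com/terraform/modules/aws/module_name.git?ref=v1.0.0"
}

# Private registry source: <host>/<namespace>/<name>/<provider>
module "module_name" {
  source  = "registry.example.com/terraform/module_name/aws"
  version = "~> 1.0"
}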
Custom providers are a whole different story; in that case you need a private registry to distribute the compiled Go binaries, but you can write one yourself without too much effort (since Terraform 0.13), as it is just an HTTP REST API.
Recently I found the Terraspace framework, which is wonderful. I followed the tutorial, but now I have a question about how to work with public modules. For example, I want to create a GCP compute instance using this module: https://registry.terraform.io/modules/terraform-google-modules/vm/google/4.0.0
Terraspace uses Terraform HCL, so you can use the module source keyword to include third-party modules, public or private. The #T.H. link provides a good intro to using the module source keyword.
Additionally, Terraspace provides a Terrafile concept that centralizes and automates the management of external modules. You can use any module you want, private or public. Introduction blog post: Terraspace Terrafile: Using Git and Terraform Registry Modules
It also has several examples showing how to use modules with Terraspace, including terraspace-google-vm.
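As a minimal sketch (assuming the Terrafile DSL described in the Terraspace docs; the local module name is arbitrary), a Terrafile entry pulling the registry module from the question could look like:

# Terrafile (project root): declares external modules to vendor locally
mod "vm", source: "terraform-google-modules/vm/google", version: "4.0.0"

Running terraspace bundle should then download the module so your stacks can reference it like a local module.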
Inside a custom endpoint in Kinvey, I see the modules parameter which exposes inbuilt modules like so:
function onRequest(request, response, modules) {
}
I can see from the documentation here that Kinvey has some existing built-in modules:
http://devcenter.kinvey.com/rest/reference/business-logic/reference.html#modules
My questions are:
Is it possible to have our own custom reusable modules defined somewhere in Kinvey and use them within the custom endpoint function above? If so, how?
Is it possible to define (similar to package.json) and use external npm packages within the above custom endpoint function?
Great to see that you show interest in using Kinvey!
Regarding your questions: yes, if I understood you correctly, both are possible. See below for further explanation.
You can implement Common Code, and use it to create reusable functions which can be used across your business logic scripts. Please refer to the following link for more information.
You can implement Kinvey Flex Services, which are low code, lightweight NodeJS micro-services that are used for data integrations and functional business logic. FlexServices utilize the Flex SDK and can consist of FlexData for data integrations, FlexFunctions for trigger-based data pre/post hooks or custom endpoints, and FlexAuth for custom authentication through Mobile Identity Connect (MIC). Please refer to the following link for more information.
I hope this helps.
No, this is not possible in the free tier; in Business Logic you are limited to the modules that are explicitly whitelisted.
There are options to run any node code (including any npm module you want) inside the platform in the paid "Business Edition".
I want to be able to build 30+ packages in SSIS and be able to test/develop them in isolation. I also want to be able to run these from a Master/Parent package.
When it comes to delivering the SSIS parent package I want to be able to change the connection string once and have this trickle down to all child packages. Other developers will be building and testing without using the master package and want to be able to develop these in isolation.
I've seen many articles on XML config/parameter mappings etc., but I've not seen a definitive guide on how this should be done and what best practice is.
The project we have created also only allows packages to be linked in the solution as an external reference rather than as project links (is this the legacy format?). I'm wondering if this type of project could hamper the ability to achieve shared connection strings.
Answering this myself for reference. Basically, there is no streamlined way of doing this in the Package Deployment model. It is much easier to achieve with the Project Deployment model, which is the default in VS2012. However, we don't have that luxury.
I had to create parent variables in the master package, which are set from the XML config. The child packages then have direct configuration links to those parent variables, with the target properties mapped to the connection string properties of the connection managers.