Terraspace using public modules - terraform

Recently I found the Terraspace framework, which is wonderful. I followed the tutorial, but now I'm unsure how to work with public modules. For example, I want to create a GCP compute instance using this module: https://registry.terraform.io/modules/terraform-google-modules/vm/google/4.0.0

Terraspace uses Terraform HCL, so you can use the module source argument to include third-party modules, public or private. The Terraform documentation on module sources provides a good introduction.
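For example, the module from the question can be referenced directly in a stack's main.tf (a minimal sketch; the submodule path and inputs are illustrative, so check the module's documented inputs):

module "instance_template" {
  source  = "terraform-google-modules/vm/google//modules/instance_template"
  version = "4.0.0"

  # supply whichever inputs the module documents as required
  project_id = var.project_id
}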
Additionally, Terraspace provides a Terrafile concept that centralizes and automates the management of external modules. You can use any module you want, private or public. Introduction blog post: Terraspace Terrafile: Using Git and Terraform Registry Modules
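A minimal Terrafile sketch pulling the same module from the Terraform Registry; running terraspace bundle then downloads it into vendor/modules:

# Terrafile
mod "vm", source: "terraform-google-modules/vm/google", version: "4.0.0"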
The Terraspace project also has several examples showing how to use modules with Terraspace, including terraspace-google-vm.

Related

What's the equivalent of Terraform modules in Pulumi

I've been trying to find the answer to this question about the equivalent of Terraform modules in Pulumi; the closest answer I've found is this blog post. Bear in mind that I'm a beginner with Pulumi too.
With Terraform, you can create a git repository containing all of your modules, version it, and pull it into various other git repositories by using source = "git@github.com:xyz". Terraform also allows you to turn resources on and off based on conditionals such as region, account number, or environment (using count on modules and resources).
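For example (the repository path and variable are illustrative; count on modules requires Terraform 0.13+):

module "vpc" {
  source = "git@github.com:xyz/modules.git//vpc?ref=v1.2.0"
  count  = var.environment == "prod" ? 1 : 0
}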
Apparently Pulumi does not have this concept; it looks like you need to duplicate your code in each repository or create one giant monolithic repository containing all of your code. I'm also wondering about the best practice for feature flags, turning resources on and off for each of your stacks, and what sort of conditionals you would use for this.
Thanks again for your highlights!
Broadly, you should create libraries in your language of choice and put reusable functions, classes, and components in there. For example, if you use TypeScript, you could create an NPM module (public or private) with any code that you want to reuse across projects and teams.
More specifically, if you are looking for a way to combine multiple resources into a reusable higher-level abstraction, you can implement it as a Component Resource. A component would accept inputs, instantiate a few resources in its constructor, and return output values. You can still package a component (or multiple components) as a reusable library.
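For instance, here is a minimal Component Resource sketch in Python (the resource choice and names are illustrative; the same pattern works in TypeScript):

import pulumi
from pulumi_aws import s3  # any provider works; AWS is only for illustration


class StaticSite(pulumi.ComponentResource):
    def __init__(self, name: str, opts: pulumi.ResourceOptions = None):
        super().__init__("acme:web:StaticSite", name, None, opts)
        # parenting child resources to the component groups them in previews
        self.bucket = s3.Bucket(
            f"{name}-bucket",
            opts=pulumi.ResourceOptions(parent=self),
        )
        self.register_outputs({"bucket_name": self.bucket.bucket})


site = StaticSite("docs")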
Pulumi also allows you to create multi-language components, where you implement a component in one language and then publish it in all the supported languages for everyone to use. You can ship a multi-language component as a package in the Pulumi Registry to simplify discovery and installation. Read more in Pulumi Packages and multi-language Components, and see other components in the Registry.
I wrote an article about implementing feature flags with Pulumi here. In summary, you should use the Pulumi configuration system. You can store configuration values in the stack settings file, which is automatically named Pulumi.<stack-name>.yaml. You should split resources into their own Python modules, which you can then optionally import based on the settings in the config files.
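A minimal sketch of that pattern (the flag name and the monitoring module are hypothetical):

# __main__.py
import pulumi

config = pulumi.Config()

# Pulumi.dev.yaml would contain, e.g.:
#   config:
#     my-project:enableMonitoring: "true"
if config.get_bool("enableMonitoring"):
    import monitoring  # local module that declares the optional resources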

Create Sphinx autodoc for a package loading pywin32 on Linux

I wrote a package that uses pywin32 to sync GitLab issues with Microsoft Projects.
I would like to use readthedocs to host the documentation.
The problem is that I can't install pywin32 there, since the build environment is Linux.
Any suggestion on how to get autodoc to build the documentation if a package is not available on the build machine?
The easiest way to solve this is to set autodoc_mock_imports to the modules your code imports, e.g. autodoc_mock_imports = ["win32com"]. Note that the entries are the importable module names (win32com, win32api, etc.), not the pywin32 distribution name. You only need to list the root module; any subsequent use your code makes of the library (importing or calling submodules, classes, etc.) will be mocked by Sphinx.
Notice the mocked library won't provide any functionality beyond allowing its components to be declared and imported. Your code should therefore avoid module-level execution that depends on pywin32 callables, because the return values of those callables will also be mocked.
I also found this guide, focused on Read the Docs builds, which suggests the same approach. In parallel, I found the extension sphinxcontrib-mockautodoc, which addresses building docs when a project targets several Python versions simultaneously.
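Putting it together, conf.py might contain something like this (a sketch; list whichever pywin32 modules your code actually imports):

# conf.py
extensions = ["sphinx.ext.autodoc"]

# names as they appear in import statements, not the PyPI distribution name
autodoc_mock_imports = ["win32com", "win32api", "pythoncom"]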
I found a somewhat ugly solution: mocking the modules and functions that are used.
It needs quite a bit of manual work.
Create a mocking folder in your project and add a stub module, class, or function for each class/function your code uses (see the sketch at the end of this answer).
After this, edit doc/conf.py and add:
import os
import sys

try:
    import win32com
except ImportError:
    # real package missing (e.g. on Linux/Read the Docs): use the stubs instead
    sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../mocking'))
This loads the mocks automatically when the real package is not available (and only then!).
Even though this solution is quite cumbersome, it has one advantage: it allows for static typing, which would not be possible otherwise.
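For illustration, a stub for win32com.client.Dispatch might look like this (a hypothetical layout; mirror whatever your code actually imports):

# mocking/win32com/__init__.py is left empty as a package marker.
# mocking/win32com/client.py:
def Dispatch(prog_id: str):
    """Stand-in for win32com.client.Dispatch; it only needs to be importable."""
    raise NotImplementedError("stub used only for documentation builds")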

Store terraform module in a private registry benefit?

We have multiple AWS modules in git, and when we use a module in another project we specify the module's git path as the source, like this:
module "module_name" {
source = "git::https://gitlab_domain_name.com/terraform/modules/aws/module_name.git?ref=v1.0.0"
...
}
I want to know whether there is a benefit to using a Terraform private registry to store our modules, similar to how we use a repository to store packaged JARs when developing in Java, or how we work with Docker images.
Yes, there are benefits to a private registry. Namely, you can add a description, documentation, and examples, and you get a better visual representation of what the module does, along with its inputs, outputs, and resources.
But apart from that, the module behaves exactly the same way. A Java artifact repository (e.g. Nexus), by contrast, makes sense because you do not want to force everyone to build the libraries themselves (maybe they can't at all), so having a place where pre-built libraries are stored is valuable. That reasoning does not apply to Terraform modules, since nothing is compiled.
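One concrete difference worth noting: registry sources support Terraform's version constraint syntax, whereas a git source pins exactly one ref. A hypothetical sketch, assuming the modules were published to a private registry on the same GitLab host:

module "module_name" {
  source  = "gitlab_domain_name.com/terraform/module_name/aws"
  version = "~> 1.0"  # constraint ranges only work with registry sources
}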
It is a whole different story for custom providers: there you do need a private registry to serve the compiled Go binaries. You can write one yourself without too much effort (since Terraform 0.13); it is just an HTTP REST API.

Custom Modules in Kinvey

Inside a custom endpoint in Kinvey, I see the modules parameter, which exposes built-in modules, like so:
function onRequest(request, response, modules) {
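  // "modules" exposes Kinvey's built-in modules, e.g. modules.logger and modules.collectionAccess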
}
I can see from the documentation here that Kinvey has some existing built-in modules:
http://devcenter.kinvey.com/rest/reference/business-logic/reference.html#modules
My questions are:
Is it possible to have our own custom reusable modules defined somewhere in Kinvey and use them within the custom endpoint function above? If so, how?
Is it possible to define (similar to package.json) and use external npm packages within the above custom endpoint function?
Great to see that you show interest in using Kinvey!
Regarding your questions: yes, if I understood you correctly, both are possible. See below for further explanation.
You can implement Common Code, and use it to create reusable functions which can be used across your business logic scripts. Please refer to the following link for more information.
You can implement Kinvey FlexServices, which are low-code, lightweight Node.js microservices used for data integrations and functional business logic. FlexServices use the Flex SDK and can consist of FlexData for data integrations, FlexFunctions for trigger-based data pre/post hooks or custom endpoints, and FlexAuth for custom authentication through Mobile Identity Connect (MIC). Please refer to the following link for more information.
I hope I have informed you well.
No, this is not possible in the free tier; in Business Logic you are limited to the modules that are explicitly whitelisted.
In the paid Business Edition, there are options to run any Node code (including any npm module you want) inside the platform.

How to generate node.js API docs using swagger

I have an application developed with Node.js/Express, and it's working fine. Now I need to generate API documentation using Swagger. There is a module, swagger-node. Do I need to rewrite the whole app using this module, or is there another way to adopt it? And what is the use of swagger-ui if I'm using swagger-node?
Not from what I can tell. You should be able to generate your Swagger project as described and then make sure the information in the YAML file points to the actual controllers and methods your code uses (swagger-node wires this up through the x-swagger-router-controller and operationId fields on each operation).
You can create a standalone YAML file that complies with Swagger/OpenAPI and can therefore be rendered into Swagger documentation. The Swagger Editor is useful for creating this YAML file, and Swagger UI renders it as interactive documentation. Swagger also offers various tools for testing APIs and generating code; to use these effectively, you will need a way to integrate the controller/model definitions in your YAML file with your existing codebase.
To achieve this integration, I typically expose my existing codebase as an API of controller functions and then import it as a module into the code generated from my documented API. This lets me trust my API documentation without the burden of porting my whole codebase into Swagger's required directory structure. I believe this is the best approach currently available, though it is not always worthwhile.
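As a sketch of that approach (the controller and the lib/app module are hypothetical; swagger-node routes requests via the x-swagger-router-controller and operationId fields in the YAML):

// api/controllers/users.js -- the generated project routes here when the YAML
// operation sets x-swagger-router-controller: users and operationId: list
const app = require('../../lib/app'); // hypothetical module exposing the existing codebase

module.exports.list = function list(req, res) {
  // delegate to the existing application logic instead of rewriting it
  app.listUsers().then(function (users) {
    res.json(users);
  });
};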
