I'm using a PHP app, BoxBilling. It takes orders from end users, and these orders need to be turned into actual nodes and containers.
I was planning on using Terraform as the provisioner for both: containers whenever there is room available on existing nodes, or new nodes whenever the existing ones are full.
Terraform would interface with my provider to create new nodes, and with Vagrant to configure containers.
Vagrant would interface with Kubernetes to provision the pods/containers.
Question is: Is there an inbound Terraform API that I can use to send orders to Terraform from the BoxBilling APP?
I've searched the documentation, examples and case studies but it's eluding me...
Thank you!
You could orchestrate the provisioning of infrastructure and/or configuration of nodes using an orchestration/CI tool such as Jenkins.
Jenkins has a Remote Access API which can be called to trigger a set of steps, which could include terraform plan, terraform apply, creation of new workspaces, etc., and then downstream steps for configuration, testing and anything else in your toolchain.
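As a sketch, the BoxBilling side could trigger a parameterized Jenkins job over that API with a plain HTTP POST. The host, job name and parameter below are hypothetical placeholders:

```shell
# Hypothetical Jenkins host and job; substitute your own.
JENKINS_URL="https://jenkins.example.com"
JOB="provision-order"    # a parameterized Jenkins job that runs terraform plan/apply
ORDER_ID="12345"         # passed through from the BoxBilling order

# Jenkins' Remote Access API triggers parameterized builds via
# POST /job/<name>/buildWithParameters
TRIGGER_URL="${JENKINS_URL}/job/${JOB}/buildWithParameters?ORDER_ID=${ORDER_ID}"
echo "${TRIGGER_URL}"

# With authentication this becomes, for example:
#   curl -X POST -u "user:api-token" "${TRIGGER_URL}"
```

BoxBilling can issue that POST from a PHP hook whenever an order is activated, and the Jenkins job takes it from there.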
We are using the Azure cloud platform and use Terraform to provision our resources via Azure DevOps pipelines. When we provisioned the resources we kept the state files per resource (e.g. APIM, App Service, AKS, Storage Accounts, etc.), and the state files were in sync with the actual resources.
But we have other ADO pipelines as part of our application releases which make changes to the previously Terraform-built resources (API creation and update, tag updates to resources, additional components created on the base resource, etc.). Those changes put our Terraform state out of sync with the actual resources, and when we trigger the pipeline to run terraform plan for those resources, a number of changes show up and some resources are marked for replacement.
So we need to bring the state files of our existing resources in sync with any pipeline/manual changes made from the portal, and we have to follow the practice of incrementally updating the state file.
Searching the internet, we found that we can achieve this using Terraformer, and we are planning to add a pipeline for a Terraformer task that will update those changes into the existing state files for each resource (planning to schedule this pipeline weekly).
Is it possible to use Terraformer to make the incremental changes, keeping both the state file and the already-used Terraform manifests in sync?
Your use of Terraform for IaC seems wrong. If you plan to deploy resources through Terraform, then those resources should not be modified by other external factors. If they are, you lose one of Terraform's key features, i.e. maintaining a state and updating the resources from it.
Terraformer is a completely different tool that is not suited to your use case. It is used to generate .tf files and state files for existing resources created via methods other than Terraform (e.g. the console).
My recommendation is to go through the basics of Terraform and IaC and restructure your pipelines/architecture.
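That said, if you know certain attributes will always be touched by your release pipelines (tags are a common case), Terraform can be told to tolerate that drift instead of reverting it. A minimal sketch, with illustrative resource names:

```hcl
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = "example-rg"
  location                 = "westeurope"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  # Let release pipelines manage tags without Terraform
  # trying to revert them on the next plan/apply.
  lifecycle {
    ignore_changes = [tags]
  }
}
```

This only papers over known, deliberate drift; anything else still belongs under Terraform's control.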
Below are some links that were helpful for me.
https://developer.hashicorp.com/terraform/language/state
https://www.terraform-best-practices.com/examples
https://developer.hashicorp.com/terraform/language/modules
I am creating some VMs in Azure using the Azure CLI. These VMs require different setups. For example, one machine needs to be set up as a domain controller, so its setup includes activities such as creating domain users, while the activities for other VMs include things like joining the domain, setting up file shares, etc. Currently, any activity on the individual VMs is performed manually. However, I would like to automate that process, starting from creating the VMs and then performing the setup on each individual VM. What would be the best way of doing this? Can this type of setup on individual VMs be performed remotely?
You will want to look at the Azure Desired State Configuration (DSC) extension. DSC is a declarative platform used for configuration, deployment, and management of systems. It consists of three primary components:
Configurations are declarative PowerShell scripts which define and configure instances of resources. Upon running the configuration, DSC (and the resources being called by the configuration) will simply "make it so", ensuring that the system exists in the state laid out by the configuration. DSC configurations are also idempotent: the Local Configuration Manager (LCM) will continue to ensure that machines are configured in whatever state the configuration declares.
Resources are the "make it so" part of DSC. They contain the code that puts and keeps the target of a configuration in the specified state. Resources reside in PowerShell modules and can be written to model something as generic as a file or a Windows process, or as specific as an IIS server or a VM running in Azure.
The Local Configuration Manager (LCM) is the engine by which DSC facilitates the interaction between resources and configurations. The LCM regularly polls the system using the control flow implemented by resources to ensure that the state defined by a configuration is maintained. If the system is out of state, the LCM makes calls to the code in the resources to "make it so" according to the configuration.
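As a sketch, a configuration for the member servers might look like the following, using only built-in DSC resources (the folder path and feature are illustrative; an actual domain join would additionally need a resource module such as ComputerManagementDsc):

```powershell
Configuration MemberServerSetup {
    # Built-in DSC resources; applied to each target node via the Azure DSC extension.
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost" {
        # Ensure a directory for the file share exists.
        File ShareFolder {
            Ensure          = "Present"
            Type            = "Directory"
            DestinationPath = "C:\Shares\Public"
        }

        # Ensure the file server role is installed.
        WindowsFeature FileServer {
            Ensure = "Present"
            Name   = "FS-FileServer"
        }
    }
}
```

The Azure DSC extension pushes such a configuration to the VM at (or after) deployment time, and the LCM keeps it enforced from then on.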
An example Azure ARM template that uses DSC to stand up a domain controller can be seen here:
https://github.com/Azure/azure-quickstart-templates/tree/master/active-directory-new-domain
Further Reading
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-overview
https://learn.microsoft.com/en-us/powershell/scripting/dsc/overview/overview?view=powershell-7.1
I would like to tweak some settings in an AKS node group with something like user data in AWS. Is this possible in AKS?
How about using the azurerm virtual_machine_scale_set_extension resource:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_scale_set_extension
The underlying Virtual Machine Scale Set (VMSS) is an implementation detail, and one you do not get to adjust outside of SKU and disk choice. Just as you cannot pick the image that goes on the VMSS, you also cannot use VM extensions on that scale set: any direct manipulation of those VMSSs (from an Azure resource provider perspective) behind your node pools puts you out of support. The only supported way to perform host (node)-level actions is to deploy your custom script work as a DaemonSet to the cluster. This is fully supported and gives you the ability to run (almost) anything you need at the host level, for example installing/executing custom security agents, FIM solutions, or anti-virus.
From the support FAQ:
Any modification done directly to the agent nodes using any of the IaaS APIs renders the cluster unsupportable. Any modification done to the agent nodes must be done using kubernetes-native mechanisms such as Daemon Sets.
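A minimal sketch of that pattern: a privileged DaemonSet that enters the host's namespaces to run a node-level command (the image and the command itself are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-setup
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-setup
  template:
    metadata:
      labels:
        app: node-setup
    spec:
      hostPID: true
      containers:
        - name: node-setup
          image: alpine:3.19
          securityContext:
            privileged: true
          # nsenter into the host's namespaces (PID 1) and run the
          # node-level setup, then stay alive so the pod is not restarted.
          command: ["nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid",
                    "--", "sh", "-c", "echo 'node-level setup goes here'; sleep infinity"]
```

Because it is a DaemonSet, the same setup runs on every node, including nodes added later by the autoscaler.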
CloudFormation doesn't provide tools for orchestrating the deployment of several/many stacks. For example, consider a microservice/layered architecture where many stacks need to be deployed together to replicate an environment. With CloudFormation you need to use a tool like stacker, or something homegrown, to solve the problem.
Does Terraform offer a multi-stack deployment orchestration solution?
Terraform operates on a directory level so you can simply define both stacks at the same place as a big group of resources or as modules.
In Terraform, if you need to deploy multiple resources together at the same time then you would typically use a module and then present a smaller surface area for configuring that module. This also extends to creating modules of modules.
So if you had one module that deployed a service containing a load balancer, a service of some form (such as an ECS task definition; a Kubernetes pod, service or deployment definition; an AMI) and a database, and another module that contained a queue and another service, you could then create an over-arching module that contains both of those modules. They are then deployed at the same time, with a smaller amount of configuration that may be shared between them.
Modules also allow you to define the source location as remote, such as a git location or a Terraform registry (either the public one or a private one), which means the Terraform code for the modules doesn't have to be stored in the same place or checked out/cloned into the same directory.
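For example, an over-arching module might compose two such service modules pulled from a git repository (the repository, module paths and variables here are hypothetical):

```hcl
module "frontend" {
  source = "git::https://github.com/example-org/terraform-modules.git//frontend?ref=v1.2.0"

  environment = var.environment
}

module "worker" {
  source = "git::https://github.com/example-org/terraform-modules.git//worker?ref=v1.2.0"

  environment = var.environment
  queue_name  = "orders"
}
```

A single terraform apply at this level then stands up both services together, which is the multi-stack orchestration CloudFormation leaves to external tools.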
I'm currently setting up a project where my Terraform state is split into multiple remote states (core, database, vpc, etc.), and all of these are split by environment as well. I store them all in an infrastructure git repo, where all of the Terraform state for my services and infrastructure is managed. This includes core building blocks, as well as separate states that are needed for particular services.
My question is: can I access the outputs from those remote states (core, database, etc.) without git-checking-out the projects from which those states were created, i.e. from the command line?
Here's an example structure.
git repos:
  service_1
    application code
    dockerfile
  infrastructure
    terraform
      global
      production
        vpc
        db
        service_1
      staging
        vpc
        db
        service_1
So if I'm writing glue scripts in my service_1 project and need access to something from the database remote state (which was created by Terraform in the infrastructure project), can I access that data without git-checking-out the infrastructure project, running terraform init, etc.?
EDIT: updating with more specific use-cases per the comments
Currently, I use Terraform to set up some basic infrastructure in the core module, which has its own remote state. This sets up Internet Gateways, VPCs, NAT Gateways, Subnets, etc. I then store these as outputs in the state.
Once the core is up, I use kops to set up Kubernetes clusters. To set up the clusters, I need the VPC IDs, Internet Gateways, Availability Zones, etc., which is why I want to access them as Terraform outputs.
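For reference, reading such outputs from another Terraform configuration is typically done with the terraform_remote_state data source; a sketch assuming an S3 backend (the bucket, key and output names are placeholders):

```hcl
data "terraform_remote_state" "core" {
  backend = "s3"

  config = {
    bucket = "example-infrastructure-state"          # placeholder backend settings
    key    = "production/core/terraform.tfstate"
    region = "us-east-1"
  }
}

# Outputs defined in the core state become available as attributes, e.g.:
#   data.terraform_remote_state.core.outputs.vpc_id
```

This only requires access to the state backend itself, not a checkout of the infrastructure repo.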