terraform - access remote state outputs from command line

I'm currently setting up a project where my Terraform state is split into multiple remote states (core, database, vpc, etc.), and all of these are split by environment as well. I store these all in an infrastructure git repo, where all of my Terraform state for all services and infrastructure is managed. This includes core building blocks, as well as separate states that are needed for particular services.
My question is: can I access the outputs from those remote states (core, database, etc.) without git-checking-out the projects from which those states were created? i.e., from the command line?
Here's an example structure.
git repos:
  service_1
    application code
    dockerfile
  infrastructure
    terraform
      global
      production
        vpc
        db
        service_1
      staging
        vpc
        db
        service_1
So if I'm writing glue scripts in my service_1 project and need access to something from the database remote state (which was created by Terraform in the infrastructure project), can I access that data without git-checking-out the infrastructure project, running terraform init, etc.?
EDIT: updating with more specific use-cases per the comments
Currently, I use Terraform to set up some basic infrastructure in the core module, which has its own remote state. This sets up Internet Gateways, VPCs, NAT Gateways, Subnets, etc. I then store these as outputs in the state.
Once the core is up, I use kops to set up Kubernetes clusters. To set up the clusters, I need the VPC IDs, Internet Gateways, Availability Zones, etc., which is why I want to access them from the Terraform outputs.
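One way to avoid checking out the infrastructure repo is a tiny, stand-alone read-only configuration that only declares a terraform_remote_state data source and re-exports the values you need. A minimal sketch, assuming the core state lives in an S3 bucket; the bucket name, key and output names here are placeholders, not the real ones from this setup:

    # read_core.tf -- stand-alone stub; run `terraform init` and `terraform apply`
    # once (the apply only reads the data source and records the outputs), then
    # `terraform output -json` prints the values for glue scripts to consume.
    data "terraform_remote_state" "core" {
      backend = "s3"

      config = {
        bucket = "my-terraform-state"                  # assumed bucket
        key    = "production/core/terraform.tfstate"   # assumed state key
        region = "us-east-1"
      }
    }

    # Re-export whatever the glue scripts need.
    output "vpc_id" {
      value = data.terraform_remote_state.core.outputs.vpc_id
    }

    output "availability_zones" {
      value = data.terraform_remote_state.core.outputs.availability_zones
    }

Because the stub is just a couple of files, it can live inside the service_1 repo itself; it never manages any of the core resources, it only reads them.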

Related

Is there a way to break down terraform (or atlantis) output into different sections?

My team uses terraform and atlantis to manage the infrastructure. We have created a repo that contains all projects' infra.
When starting a new microservice, we usually do it this way.
Draft the infra in the terraform git repo.
The infra contains:
  Backend
    RDS
    Memcached
    Redis
    EC2
  Frontend
    S3 bucket to host the static website
    CloudFront
Then, create a pull request in GitHub and let Atlantis run atlantis apply.
Then, Atlantis displays the output in the PR.
Then, I copy different parts of the terraform output and send them to the backend team and the frontend team.
Question
Does terraform output support categories or sections?
Or is there an easier way to manipulate the output of atlantis and send different sections to different teams?
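There is no built-in notion of sections in terraform output, but a common workaround is to group related values into one object output per audience, so each team only gets its own part. A rough sketch; the resource names and attributes are illustrative, not the asker's real configuration:

    # Group outputs per audience; `terraform output backend` (or
    # `terraform output -json backend`) then prints only that section.
    output "backend" {
      value = {
        rds_endpoint   = aws_db_instance.main.address
        redis_endpoint = aws_elasticache_cluster.redis.cache_nodes[0].address
        ec2_public_ip  = aws_instance.api.public_ip
      }
    }

    output "frontend" {
      value = {
        s3_website_endpoint = aws_s3_bucket.site.website_endpoint
        cloudfront_domain   = aws_cloudfront_distribution.cdn.domain_name
      }
    }

Atlantis simply relays what terraform prints, so the grouping should carry through to the PR comment, and each section can be copied out separately.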

What if Terraform recreates a resource that is referenced inside another state?

TL;DR: What if Terraform destroys and recreates a resource that is referenced by another resource located in another state file? Would it break the reference in the cloud and cause downtime until I run apply against the other state file?
Hi. I'm using Terraform and Google Cloud Platform. I keep the Terraform files related to my applications and microservices separately in each of the application repositories, and I also have some other repositories exclusively for infrastructure resources like the VPC, Load Balancer and IAM.
Let's say that the Cloud Functions service that runs one of my microservices is provisioned by Terraform files that are located in the specific microservice repository, which has its own isolated remote state.
In the Load Balancer repository, I have data blocks reading that microservice's remote state so I can reference its resources, like the Cloud Functions service I need when creating its Network Endpoint Groups and Load Balancer backends.
Given the example above, I wonder what would happen if I change something in the microservice files, causing some resource to be destroyed and recreated. Since the other resources that reference it are located in different repositories and state files, I suppose I would need to run terraform init and apply in both repos. Am I right, or am I overthinking it? What would you do to resolve this instead?
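For illustration, the cross-state reference described above might look roughly like this in the Load Balancer repo (the bucket, prefix and names are placeholders, not the asker's real values):

    # Load Balancer repo: read the microservice's remote state and point a
    # serverless NEG at its Cloud Function.
    data "terraform_remote_state" "microservice" {
      backend = "gcs"

      config = {
        bucket = "my-terraform-state"   # assumed bucket
        prefix = "microservice-a"       # assumed prefix
      }
    }

    resource "google_compute_region_network_endpoint_group" "function_neg" {
      name                  = "microservice-a-neg"
      region                = "us-central1"
      network_endpoint_type = "SERVERLESS"

      cloud_function {
        function = data.terraform_remote_state.microservice.outputs.function_name
      }
    }

Because the NEG stores only the function name, recreating the function under the same name generally leaves the reference intact (apart from the window while the function is being recreated); if the recreation changes the value that the remote state output exposes, the Load Balancer configuration won't see the new value until it is applied again.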
If your setup requires managing the same resource from two different TF state files, then you are doing it wrong. This is not how TF should be used, and of course it will lead to issues.
A given resource can only be managed by one TF state file.

How should I use terraform across multiple pods?

My needs are:
A distributed web/RPC server, deployed on several pods.
The service is used to control the terraform CLI, modifying environment variables or .tf files in order to dynamically create and destroy resources.
To keep the state consistent, I'm using etcd v3 as a backend for terraform. Now the question is:
The .tf files are inconsistent on different pods, which can lead to resources getting mixed up.
What should I do?
Eventually, I used etcd v3 as the backend to sync the tf state across the pods.
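For reference, a minimal sketch of the etcd v3 backend configuration (the endpoints are placeholders; note that the etcdv3 backend was removed from Terraform core in 1.3, so this only applies to older versions):

    terraform {
      backend "etcdv3" {
        # assumed cluster endpoints
        endpoints = ["http://etcd-0:2379", "http://etcd-1:2379", "http://etcd-2:2379"]
        prefix    = "terraform-state/"
        lock      = true   # state locking keeps concurrent pods from clobbering each other
      }
    }

The backend only synchronizes the state, though; the .tf files themselves still have to be distributed to each pod by some other mechanism (a shared volume, baked into the image, a git checkout, etc.).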

Using Terraform, how can I orchestrate deploy of many related/dependent stacks?

CloudFormation doesn't provide tools for orchestrating the deployment of several/many stacks. For example, consider a microservice/layered architecture where many stacks need to be deployed together to replicate an environment. With CloudFormation you need to use a tool like stacker or something home-grown to solve the problem.
Does Terraform offer a multi-stack deployment orchestration solution?
Terraform operates at the directory level, so you can simply define both stacks in the same place, either as one big group of resources or as modules.
In Terraform, if you need to deploy multiple resources together at the same time then you would typically use a module, and then present a smaller surface area for configuring that module. This also extends to creating modules of modules.
So if you had one module you created that deployed a service containing a load balancer, a service of some form (such as an ECS task definition; a Kubernetes pod, service or deployment definition; an AMI) and a database, and another module that contained a queue and another service, you could then create an over-arching module that contains both of those modules, so they are deployed at the same time with a smaller amount of configuration that may be shared between them.
Modules also allow you to define a remote source location, such as a git repository or a Terraform registry (either the public registry or a private one), which means the Terraform code for the modules doesn't have to be stored in the same place or checked out/cloned into the same directory.
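A rough sketch of that composition (the repository URLs, versions and variables are illustrative only):

    # Over-arching module that deploys both stacks together; the child
    # modules live in separate git repos and are fetched by Terraform,
    # so they never need to be cloned into this directory by hand.
    variable "environment" { type = string }
    variable "vpc_id"      { type = string }

    module "web_service" {
      source = "git::https://github.com/example-org/terraform-web-service.git?ref=v1.4.0"

      environment = var.environment
      vpc_id      = var.vpc_id
    }

    module "worker_service" {
      source = "git::https://github.com/example-org/terraform-worker-service.git?ref=v2.0.1"

      environment = var.environment
      queue_name  = "jobs-${var.environment}"
    }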

How can I interface with terraform from an external APP?

I'm using a PHP app, BoxBilling. It takes orders from end users; these orders need to be processed into actual nodes and containers.
I was planning on using Terraform as the provisioner for both: containers whenever there is room available in existing nodes, or new nodes whenever the existing ones are full.
Terraform would interface with my provider for creating new nodes and with Vagrant for configuring containers.
Vagrant would interface with Kubernetes to provision the pods/containers.
The question is: is there an inbound Terraform API that I can use to send orders to Terraform from the BoxBilling app?
I've searched the documentation, examples and case studies but it's eluding me...
Thank you!
You could orchestrate the provisioning of infrastructure and/or configuration of nodes using an orchestration/CI tool such as Jenkins.
Jenkins has a Remote Access API which could be called to trigger a set of steps, which could include terraform plan, apply, creation of new workspaces, etc., and then downstream configuration, testing and anything else in your toolchain.
