Dynamically generating etcd discovery URL with Terraform

I have a Terraform project that sets up a multi-node etcd cluster using the etcd discovery URL mechanism, with the following setup:
main.tf (supposed to dynamically generate the etcd discovery URL and pass the generated URL as an input variable to the module below)
etcd module
cloud-init sh script (receives the discovery URL variable and starts the etcd cluster nodes using this URL)
The problem: I want to generate a new etcd discovery URL dynamically every time I run "terraform apply" (based on an input variable, say numEtcdNodes) and pass the generated URL as a variable to the cloud-init sh scripts in the etcd module.
How can this be achieved?
I tried this option:
It doesn't work. The file() interpolation is evaluated first, before the new discovery URL is generated, so the cloud-init sh scripts get the old URL from the previous run and the etcd cluster never comes up.
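One way to keep the URL generation inside Terraform's dependency graph is the hashicorp/http data source: data sources are refreshed on every plan, so each terraform apply requests a brand-new URL from discovery.etcd.io. A minimal sketch, assuming Terraform 0.12+ syntax; the module path and variable names are placeholders for your actual project:
variable "num_etcd_nodes" {
  default = 3
}

# Refreshed on every plan/apply, so the URL is never left over from a previous run.
data "http" "etcd_discovery" {
  url = "https://discovery.etcd.io/new?size=${var.num_etcd_nodes}"
}

# Hypothetical module wiring -- adjust the source and variable names to your layout.
module "etcd" {
  source        = "./etcd"
  num_nodes     = var.num_etcd_nodes
  discovery_url = trimspace(data.http.etcd_discovery.body) # newer http provider versions call this response_body
}
Inside the module, render the cloud-init script with templatefile() (or a template_file data source) and inject discovery_url as a template variable instead of reading a pre-generated file with file(); that way the URL is resolved within the same plan rather than carried over from the previous apply. Because the data source is refreshed on every plan, the URL and anything derived from it will show as a change on each run, which is the behaviour asked for here but worth keeping in mind.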

Related

How to access the credentials across DAG tasks in airflow without using connections/variables

Consider that I have multiple DAGs in Airflow.
Every task in the DAGs executes Presto queries; I simply override the get_conn() method in Airflow, and on each call of get_conn() it fetches credentials from AWS Secrets Manager.
The maximum number of requests to Secrets Manager is 5000, so I need to cache my credentials somewhere (not Connections/Variables, a DB, or S3) so that they can be used across all tasks without calling Secrets Manager.
My question here is:
Is there any way to handle those credentials in our Python/Airflow code so that get_conn() only has to fetch them once?
You could write your own custom secrets backend (https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/secrets-backend/index.html#roll-your-own-secrets-backend), extending the AWS one and overriding its methods to read the credentials and store them somewhere (for example a local file or a DB) as a caching mechanism.
If you use the local filesystem, however, be aware that how much the cache gets reused depends on how your tasks are run. With the CeleryExecutor, such a local file is available to all processes running on the same worker (but not to Celery processes running on other workers). With the KubernetesExecutor, each task runs in its own pod, so you would have to mount some persistent or temporary storage into your pods to reuse it. You also have to solve the problem of concurrent processes writing to the cache, and of refreshing it periodically or when the credentials change.
You also have to be extra careful because this has security implications: such a local cache is available to all DAGs and Python code run in tasks, even those that do not use the connection (for example, the automated secret masking built into Airflow 2.1+ will not work in this case, so you have to be careful not to print the credentials to logs).

Explicitly define order of Terraform destroy operations (several providers) when destroying stack

I have a terraform configuration that looks like this:
With the aws provider, create an RDS database
With the https://github.com/cyrilgdn/terraform-provider-postgresql provider, create PostgreSQL databases etc.
The latter is done via a separate custom module.
Now, when calling terraform destroy, I end up in a state where the cluster is removed but the databases are not deleted, and Terraform complains with the error
Error: error detecting capabilities: error PostgreSQL version: dial tcp: lookup [host].eu-west-1.rds.amazonaws.com on [IP]:53: no such host
which clearly suggests the database objects were not removed prior to removing the cluster.
I would like to tell Terraform that the database objects must be removed before the cluster itself. How could I do that?
Try the depends_on meta-argument so that the PostgreSQL module depends on the RDS resources, as sketched below.
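A minimal sketch, assuming Terraform 0.13+ (required for depends_on on modules); the module and resource names here are placeholders for whatever your configuration actually uses:
module "postgres_databases" {
  source = "./modules/postgres-databases"

  # Hypothetical input -- pass whatever connection details your module expects.
  db_host = aws_db_instance.main.address

  # Explicit ordering: this module is created after the RDS instance
  # and, more importantly here, destroyed before it.
  depends_on = [aws_db_instance.main]
}
Referencing an attribute such as aws_db_instance.main.address already gives Terraform an implicit dependency, but depends_on makes the ordering explicit, and it is that ordering which ensures the PostgreSQL objects are dropped while the cluster they live on still exists.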

Store Terraform "data consul_keys ..." as file?

In a base.tf file I have:
data "consul_keys" "project_emails"{
datacenter = "mine1"
key {
name = "notification_list"
path = "project/notification_email_list"
}
}
I would like to use these Consul values in my Python code.
The way I'm thinking about this is by outputting them to a file (so not just another Terraform file using "${project_emails.notification_list.construct}", with either version 0.11 or 0.12).
How would I save all these keys to a file so that I can access them?
The general mechanism for exporting data from a Terraform configuration is Output Values.
You can define an output value that passes out the value read from Consul like this:
output "project_emails" {
value = data.consul_keys.project_emails.var.notification_list
}
After you've run terraform apply to perform the operations in your configuration, you can use the terraform output command to retrieve the output values from the root module. Because you want to read it from another program, you'll probably want to retrieve the outputs in JSON format:
terraform output -json
You can either arrange for your program to run that command itself, or redirect the output from that command to a static file on disk first and then have your program read that file.
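If you would rather have Terraform write the file itself, another option is the local_file resource from the hashicorp/local provider. A sketch, where the resource name and output path are placeholders:
resource "local_file" "consul_keys_export" {
  # Write the Consul values to a JSON file next to the configuration.
  filename = "${path.module}/consul_keys.json"

  content = jsonencode({
    notification_list = data.consul_keys.project_emails.var.notification_list
  })
}
Your Python code can then simply load that JSON file after each apply.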
The above assumes that the Python code you mention will run as part of your provisioning process on the same machine where you run Terraform. If instead you are asking about access to those settings from software running on a virtual machine provisioned by Terraform, you could use the mechanism provided by your cloud platform to pass user data to your instance. The details of that vary by provider.
For long-lived applications that consume data from Consul, a more common solution is to run consul-template on your virtual server and have it access Consul directly. An advantage of this approach is that if the data changes in Consul then consul-template can recognize that and update the template file immediately, restarting your program if necessary. Terraform can only read from Consul at the time you run terraform plan or terraform apply, so it cannot react automatically to changes like consul-template can.

CORE_PEER_ADDRESS in chaincode-docker-devmode

I am following the Chaincode for Developers tutorial, and in the section Testing Using dev mode, Terminal 2 sets the following environment variable:
CORE_PEER_ADDRESS=peer:7052
Could you please tell me what the purpose of this variable is, and why the port of the peer being used is 7052?
I couldn't find a container running on this port in the docker-compose file.
Generally, chaincode runs in a containerized environment, but for dev activities like code/test/deploy, fabric-samples ships a chaincode-docker-devmode folder with a stripped-down setup: a single orderer, peer and CLI. Normally the chaincode address is specified as 7052, 8052, ... and the chaincode containers are maintained by the peers (you can check these parameters in the docker-compose-base.yaml files). Here, however, in dev mode (--peer-chaincodedev), the chaincode is built and run by the user rather than by the peer, so those chaincode parameters are not present in the compose file and the variables have to be exported by the user.

Rundeck authentication error even added correct ssh_username and password

I created a project in Rundeck and added nodes as an Ansible inventory containing two remote nodes, with ssh_username and password set for them.
Those two nodes are now displayed in the "Nodes" area when filtering with "show all nodes".
I created a job with node filtering (selected one remote node based on tags) and added mkdir /etc/test2018 as a command.
When I run the job, I get the error below:
Failed: AuthenticationFailure: Authentication failure connecting to node: "114.12.14.*".
Make sure your resource definitions and credentials are up to date.
Note: I log into Rundeck as the admin user with the default password.
I am using AWS Linux servers.
Image: Rundeck Log Error Output
It seems that you have not configured access to your remote nodes yet. To configure it, check this:
https://www.youtube.com/watch?v=qOA-kWse22g
And This:
Add a remote node in rundeck
