Centralized access for config value management - Node.js

We are using Node.js for our codebase, and all our config values are stored as process.env.variable1.
Since our codebase is managed by AWS OpsWorks, it takes almost 10 minutes to deploy a config change on one machine, and we have 23 machines. Is there any way to store all config values in a centralised place so that the code can access them without latency? Is there also an auto-refresh mechanism, so that we pick up new config values in real time?
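One common approach (an assumption on my part, not something stated in the question) is to keep the values in a central store such as AWS SSM Parameter Store and have every machine poll it, so a change is picked up everywhere without a redeploy. A minimal sketch, assuming Parameter Store, the aws-sdk v2 package, and a hypothetical /myapp/ parameter path:

// Minimal sketch: cache config values from SSM Parameter Store in memory and
// refresh them on a timer. The parameter path and refresh interval are assumptions.
const AWS = require('aws-sdk');
const ssm = new AWS.SSM({ region: process.env.AWS_REGION });

const config = {}; // in-memory cache read by the rest of the app

async function refreshConfig() {
  const params = { Path: '/myapp/', Recursive: true, WithDecryption: true };
  let result;
  do {
    result = await ssm.getParametersByPath(params).promise();
    for (const p of result.Parameters) {
      // '/myapp/variable1' is exposed as config.variable1
      config[p.Name.replace('/myapp/', '')] = p.Value;
    }
    params.NextToken = result.NextToken; // page through all parameters
  } while (result.NextToken);
}

// Load once at startup, then refresh every 60 seconds; if a refresh fails,
// the app keeps the last values it successfully fetched.
refreshConfig().then(() => {
  setInterval(() => refreshConfig().catch(console.error), 60 * 1000);
});

module.exports = config;

The trade-off is that changes take up to one polling interval to appear, rather than being pushed in real time; for true push-style updates you would need something like AWS AppConfig or a pub/sub channel on top of this.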

Related

How to increase active connections in AWS RDS, or how to upgrade the current DB instance?

I have deployed my MERN stack app on AWS EC2 and set up clustering, but my RDS instance has 2 CPUs and 8 GB of RAM. With the increase in traffic, my DB instance now returns a maximum-connections error. How can I increase the connection limit or upgrade my RDS instance?
Do I have to reconfigure the RDS settings? My website is in production, so I don't want it to go down. Kindly guide me.
You haven't specified which DB engine you are using, so it's difficult to give a firm answer, but from the documentation:
The maximum number of simultaneous database connections varies by the DB engine type and the memory allocation for the DB instance class. The maximum number of connections is generally set in the parameter group associated with the DB instance. The exception is Microsoft SQL Server, where it is set in the server properties for the DB instance in SQL Server Management Studio (SSMS).
Assuming that you are not using MSSQL, you have a few different options:
1. Create a new parameter group for your RDS instance, specifying a new value for max_connections (or whatever the appropriate parameter is called).
2. Use a different instance class with more memory, as this will have a higher default max_connections value.
3. Add a read replica.
4. Make code changes to avoid opening so many connections.
Options 1 and 2 require a change to your database in a maintenance window, so there would be downtime. It sounds like you have a single RDS instance, so it is possible to upgrade without downtime: back up the DB -> restore it to a new instance -> upgrade the restored instance -> point the application at the restored instance (you will need to manage any writes made between the backup and the switchover yourself).
Option 3 is only relevant if the issue is that most of the connections are making SELECT queries. If that is the case, you would need to update connection strings to use the read replica.
Option 4 is a huge scope, but it's probably where I would start (e.g. could you use connection pooling, or cache data, to reduce the number of connections? A pooling sketch follows below).
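For option 4, here is a minimal connection-pooling sketch in Node.js, assuming a MySQL-compatible RDS engine and the mysql2 package (the engine and the environment variable names are assumptions, since neither was specified):

// Share one pool per process instead of opening a connection per request.
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  host: process.env.DB_HOST,          // hypothetical env var names
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  connectionLimit: 10,                // cap connections opened by this process
  waitForConnections: true,           // queue queries instead of failing
});

async function getUserById(id) {
  // execute() borrows a connection from the pool and returns it when done.
  const [rows] = await pool.execute('SELECT * FROM users WHERE id = ?', [id]);
  return rows[0];
}

With clustering, the per-process connectionLimit multiplied by the number of app processes should stay comfortably below the max_connections value of the RDS instance.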

How to access credentials across DAG tasks in Airflow without using Connections/Variables

Consider that I have multiple DAGs in Airflow.
Every task in the DAGs executes Presto queries; I just override the get_conn() method in Airflow. On each call of the get_conn() method, it fetches credentials from AWS Secrets Manager.
The maximum number of requests to Secrets Manager is 5,000. Because of this, I need to cache my credentials somewhere (not in Connections/Variables, a DB, or S3) so that they can be used across all tasks without calling Secrets Manager.
My question here is:
Is there any way we can handle those credentials in our code with Python/Airflow by calling get_conn() only once?
You could write your own custom secrets backend (https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/secrets-backend/index.html#roll-your-own-secrets-backend), extending the AWS one and overriding its methods to read the credentials and store them somewhere (for example, in a local file or a DB, as a caching mechanism).
If you are using the local filesystem, however, be aware that how much the cache gets reused depends on how your tasks are run. If you are using the CeleryExecutor, such a local file will be available to all processes running on the same worker (but not to Celery processes running on other workers). If you are using the KubernetesExecutor, each task runs in its own pod, so you would have to mount some persistent or temporary storage into your pods to reuse it. You also have to solve the problem of concurrent processes writing to the cache, and of refreshing it periodically or whenever the credentials change.
Also be extra careful, because this introduces security issues: such a local cache will be available to all DAGs and all Python code run in tasks, even those that don't use the connection (so, for example, the automated secret masking built into Airflow 2.1+ will not work in this case, and you have to be careful not to print the credentials to logs).

Change failover time for the AWS KCL

AWS recommends increasing the failover time for the KCL (Kinesis) if an application has connectivity issues:
https://docs.aws.amazon.com/streams/latest/dev/troubleshooting-consumers.html
But I can’t find how the failover time can be changed.
I’m looking for (one or all):
settings in the AWS console
settings for the Node.js KCL package
settings via Terraform
The failover time is a configuration option for the Kinesis Client Library. It is not a property of the stream, so you cannot change it in the AWS console.
Configuring the AWS Kinesis Client Library for Node.js is done using property files. I assume you already have a property file, otherwise you wouldn't be able to start your consumer application. What you need to do is add this to your property file:
# Fail over time in milliseconds.
failoverTimeMillis = 10000
See this sample property file provided by the library:
https://github.com/awslabs/amazon-kinesis-client-nodejs/blob/master/samples/basic_sample/consumer/sample.properties#L38
Also see this documentation for more detail on how to change the property file:
https://docs.aws.amazon.com/streams/latest/dev/kinesis-record-processor-implementation-app-nodejs.html#kinesis-record-processor-initialization-nodejs

Set Heroku environment variable without restarting app

Is it possible to set a Heroku environment variable without restarting the app?
My app connects out to different online services via OAuth2. For each service I connect to, I need to set an OAuth2 ID and secret. To keep these configuration values outside of my code, I'm using environment variables and reading them in via process.env (Node.js).
Each time I add a new service to my app, I need to add the corresponding environment variables for the ID and secret. I need to do this before pushing the latest code, so that when the app next starts up with the new service connection, the OAuth2 ID and secret variables are available.
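For context, a minimal sketch of the pattern described above, with hypothetical service and variable names; the values are read from process.env when the dyno boots, which is why they have to exist before the new code starts:

// config/oauth.js - hypothetical module; the variable names are illustrative.
module.exports = {
  exampleService: {
    clientId: process.env.EXAMPLESERVICE_ID,
    clientSecret: process.env.EXAMPLESERVICE_SECRET,
  },
};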
Currently my workflow is as follows:
Set the environment variables using the Heroku toolbelt: heroku config:set <SERVICE>_ID=foo <SERVICE>_SECRET=bar
Push the latest code: git push heroku master
Currently, both of these operations restart the app. I'd really prefer the first operation not to restart the app, as the changes to these config vars don't need to take effect until step 2. By restarting at step 1, my app experiences unnecessary downtime.
So, is there any way to prevent step 1 from restarting the app?
According to this article, it's stated pretty explicitly that:
Whenever you set or remove a config var, your app will be restarted.
Personally, I also wish there were a way to do what you're asking. On larger apps, a system-wide hard restart can be painful when you have many process types running. Many times I set environment variables that the app doesn't need to pick up immediately, such as ones for future functionality, or settings where the old value is fine for now but I want the new value to take effect in a rolling-restart fashion.
At present, it is not possible to avoid the app restart. But you can use the heroku config:edit command to edit your env vars all at once, or even paste in a whole new set of them, avoiding many separate restarts.
According to heroku config help:
(...)
COMMANDS
  config:edit    interactively edit config vars
  config:get     display a single config value for an app
  config:set     set one or more config vars
  config:unset   unset one or more config vars
So you can run
heroku config:edit
Additionally, you might want to take a look at this issue (proposal):
https://github.com/heroku/cli/issues/1570

Dynamically setting bucket time-to-live on Riak and Bitcask with riak-js

Is it possible to change the expiry_secs parameter on Bitcask buckets dynamically? Calling riak.saveBucket('bucket', {expiry_secs: 60}) causes subsequent calls to riak.getBucket('bucket') to report 60 as the key TTL, but keys never seem to expire.
Is there a separate setting that needs to be modified, or can expiry_secs only be set in Riak's app.config and not from a client application?
Unfortunately, no. Bitcask handles expiry at the backend level, not the bucket level. When Riak starts, each Bitcask backend reads the current expiry_secs value from the application environment and stores it in its internal state. While you can change that setting with the set_env function, the backends will not pick up the change until something causes them to restart.
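For reference, expiry_secs lives in the bitcask section of app.config; a sketch of that stanza is below (the data_root path is illustrative), and the node has to be restarted for a change here to take effect:

%% bitcask section of Riak's app.config
{bitcask, [
    {data_root, "/var/lib/riak/bitcask"},
    {expiry_secs, 60}   %% keys older than 60 seconds become eligible to expire
]}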
