Read-replica endpoint connection to ECS - amazon-rds

I have a Java application running on an ECS cluster that already uses an RDS PostgreSQL instance. Sometimes I need to scale RDS horizontally (add read replicas), so I created a Lambda function that adds/removes a load-dependent read replica.
How can I add the read replica endpoints to my application? There will be more than two endpoints here (the main instance plus the read replicas).
I've just been searching for an answer and am not really sure how to do this.
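One common pattern, sketched below with hypothetical class and endpoint names, is to keep one writer connection source and a list of reader connection sources driven by configuration that the Lambda refreshes whenever it adds or removes a replica, then route read-only work to a randomly chosen replica. This is a minimal sketch, not a drop-in implementation.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical helper: the writer/reader JDBC URLs would come from configuration
// (for example, environment variables updated when the Lambda changes the replica set).
public class ReplicaAwareConnections {

    private final String writerUrl;
    private final List<String> readerUrls;
    private final String user;
    private final String password;

    public ReplicaAwareConnections(String writerUrl, List<String> readerUrls,
                                   String user, String password) {
        this.writerUrl = writerUrl;
        this.readerUrls = readerUrls;
        this.user = user;
        this.password = password;
    }

    // All writes go to the primary instance endpoint.
    public Connection writeConnection() throws SQLException {
        return DriverManager.getConnection(writerUrl, user, password);
    }

    // Reads are spread across the replicas; fall back to the writer if no replica exists.
    public Connection readConnection() throws SQLException {
        if (readerUrls.isEmpty()) {
            return DriverManager.getConnection(writerUrl, user, password);
        }
        String url = readerUrls.get(ThreadLocalRandom.current().nextInt(readerUrls.size()));
        return DriverManager.getConnection(url, user, password);
    }
}

Recent versions of the PostgreSQL JDBC driver can also take several hosts in a single URL, for example jdbc:postgresql://replica1,replica2/mydb?targetServerType=preferSecondary&loadBalanceHosts=true, which pushes the read load balancing into the driver instead of your own code.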

Related

Aurora Serverless MySQL Read Replica

How can I separate read traffic from the RDS database? Serverless doesn't have read replicas. The production load is already too high and the database cannot handle it at the moment. I have to implement business reports. Can I use a clone? Any advice on this?
Since the Amazon Aurora Serverless option does not have separate replicas, you can manually scale the cluster up with more ACUs to get more resources for the workload.
A clone would initially have the same data, but as the data changes on the source cluster, the clone would diverge from it and might not serve the purpose.
If the workload is predictable and needs more replicas, I would recommend switching to an Aurora provisioned cluster, where you can create up to 15 replicas.
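For the manual ACU scaling mentioned above, Aurora Serverless (v1) exposes the ModifyCurrentDBClusterCapacity API. Below is a minimal sketch assuming the AWS SDK for Java v2; the cluster identifier and capacity values are placeholders.

import software.amazon.awssdk.services.rds.RdsClient;
import software.amazon.awssdk.services.rds.model.ModifyCurrentDbClusterCapacityRequest;

public class ScaleServerlessCluster {
    public static void main(String[] args) {
        // "reports-cluster" and the target ACU count are placeholders.
        try (RdsClient rds = RdsClient.create()) {
            rds.modifyCurrentDBClusterCapacity(ModifyCurrentDbClusterCapacityRequest.builder()
                    .dbClusterIdentifier("reports-cluster")
                    .capacity(16)                              // target ACUs
                    .secondsBeforeTimeout(300)                 // wait up to 5 minutes for a scaling point
                    .timeoutAction("ForceApplyCapacityChange") // or "RollbackCapacityChange"
                    .build());
        }
    }
}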

Amazon RDS Aurora clone/restore point-in-time API

When I use clone/restore to a point in time from the Amazon console, it clones the cluster as well as all the instances that belong to it. But when I use the same functionality through the Amazon API, it clones only the cluster.
Is there another API that clones the cluster along with its instances, security/parameter groups, and other settings?
The console adds a convenience layer in which it internally makes multiple API calls to improve the experience. Restoring from a snapshot or to a point in time is done in two steps:
RestoreDBClusterFromSnapshot or RestoreDBClusterToPointInTime API - To create a new cluster, backed by a new distributed Aurora volume. No DB instances are added when this API is issued.
CreateDBInstance API - To add instances to the cluster.
So in short, if you want to do it via the CLI, you need to issue both of these API calls. The same is true when creating a cluster with instances: the console creates a cluster and adds instances in the same UX workflow, but behind the scenes it is actually issuing a CreateDBCluster API call followed by one or more CreateDBInstance API call(s).
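As a concrete illustration of those two steps, here is a minimal sketch assuming the AWS SDK for Java v2. The cluster and instance identifiers, instance class, and engine are placeholders; parameter groups, security groups, and subnet groups can be passed on the same requests if the defaults are not what you want.

import software.amazon.awssdk.services.rds.RdsClient;
import software.amazon.awssdk.services.rds.model.CreateDbInstanceRequest;
import software.amazon.awssdk.services.rds.model.RestoreDbClusterToPointInTimeRequest;

public class CloneClusterWithInstance {
    public static void main(String[] args) {
        try (RdsClient rds = RdsClient.create()) {
            // Step 1: clone the source cluster (storage only, no DB instances yet).
            rds.restoreDBClusterToPointInTime(RestoreDbClusterToPointInTimeRequest.builder()
                    .sourceDBClusterIdentifier("prod-cluster")     // placeholder: cluster to clone
                    .dbClusterIdentifier("prod-cluster-clone")     // placeholder: new cluster name
                    .restoreType("copy-on-write")                  // clone semantics; "full-copy" for a plain restore
                    .useLatestRestorableTime(true)
                    .build());

            // Step 2: add at least one instance so the new cluster can serve queries.
            rds.createDBInstance(CreateDbInstanceRequest.builder()
                    .dbClusterIdentifier("prod-cluster-clone")
                    .dbInstanceIdentifier("prod-cluster-clone-instance-1")
                    .dbInstanceClass("db.r5.large")
                    .engine("aurora-mysql")
                    .build());
        }
    }
}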
Hope this helps.

Presto - how to manage Presto server stop/start/status actions

We installed the following Presto cluster on Linux Red Hat 7.2:
Presto latest version - 0.216
1 Presto coordinator
231 Presto workers
On each worker machine we can use the following command to verify the status:
/app/presto/presto-server-0.216/bin/launcher status
Running as 61824
and we can also stop/start it as follows:
/app/presto/presto-server-0.216/bin/launcher stop
/app/presto/presto-server-0.216/bin/launcher start
I also searched Google for a UI that can manage Presto status/stop/start,
but I haven't seen anything about this.
It's very strange that Presto doesn't come with a user interface that can show the cluster status and perform stop/start actions when we need to.
As everyone knows, the only user interface Presto has shows status; it doesn't offer actions such as stop/start.
In the example screen above we can see that only 5 of the 231 Presto workers are active, but this UI doesn't support stop/start actions and doesn't show on which workers Presto isn't active.
So what can we do about it?
It's a very bad idea to have to access each worker machine to see whether Presto is up or down.
Why doesn't Presto have a centralized UI that can perform stop/start actions?
An example of what we are expecting from the UI would be a list of all workers with their status and start/stop actions.
Presto currently uses a discovery service where workers announce themselves to join the cluster, so if a worker node is not registered there is no way for the coordinator or the discovery server to know about its presence and/or restart it.
At Qubole, we use an external service alongside the Presto master that tracks nodes which do not register with the discovery service within a certain interval. This service is responsible for removing such nodes from the cluster.
One more thing we do is run the monit service on each of the Presto worker nodes, which ensures that the Presto server is restarted whenever it goes down.
You may have to do something similar for cluster management, as Presto does not provide it right now.
In my opinion and experience managing PrestoSQL clusters, this comes down to the service discovery patterns used in the architecture.
So far, the open source releases of prestodb/prestosql use the following patterns:
Server-side service discovery - a client app such as the Presto CLI, or any app using a Presto SDK, just needs to reach the coordinator, without awareness of the worker nodes.
Service registry - a place to keep track of available instances.
Self-registration - a service instance is responsible for registering itself with the service registry. This is the key part, because it forces several behaviors:
Service instances must be registered with the service registry on startup and unregistered on shutdown
Service instances that crash must be unregistered from the service registry
Service instances that are running but incapable of handling requests must be unregistered from the service registry
So the life-cycle management of each Presto worker is left to the instance itself.
So what can we do about it?
The Presto cluster itself provides some observability, such as the HTTP APIs /v1/node and /v1/service/presto, to see instance status. Personally, I recommend using a cluster manager like k8s or Nomad to manage the Presto cluster members.
It's a very bad idea to have to access each worker machine to see whether Presto is up or down.
Why doesn't Presto have a centralized UI that can perform stop/start actions?
No opinion on good/bad. Take k8s, for example: you can manage all Presto workers as one k8s Deployment, with each Presto worker in its own pod. It can use liveness, readiness, and startup probes to automate the instance lifecycle with a few lines of YAML; see, for example, the livenessProbe design in the stable/presto Helm chart. A cluster manager like k8s also provides a web UI, so you can touch resources and act like an admin. Or you can choose to write more Java code to extend Presto.
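To see which of the 231 workers are missing without logging in to each machine, you can poll the coordinator's HTTP API mentioned above and compare the announced nodes against your host inventory. A minimal sketch in Java; the coordinator host and port are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PrestoNodeCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // /v1/node lists the workers the coordinator currently knows about;
        // /v1/node/failed lists nodes it has recently lost contact with.
        for (String path : new String[] {"/v1/node", "/v1/node/failed"}) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://coordinator-host:8080" + path)) // placeholder address
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(path + " -> " + response.body());
        }
    }
}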

RDS Cluster and DB Instance concept

I need to create an RDS Aurora 5.7 database. I think I am not clear on the RDS concepts. Is this the correct hierarchy: aws_rds_cluster -> aws_rds_cluster_instance -> aws_db_instance? Do I need to define all of the above? I'm kind of stuck on the configuration, so I'm trying to clarify the concept.
A "classic" RDS instance is defined in Terraform as an aws_db_instance. This is either single-AZ or multi-AZ, but it defines the entire cluster and the instances that comprise the cluster. Since you want Aurora, this is not what you want based on your question.
You want an aws_rds_cluster which defines the entire cluster, then at least one aws_rds_cluster_instance which defines instances. The aws_rds_cluster_instance then defines which cluster it is a part of with the cluster_identifier argument.
Clusters provide the storage backend where your live data and automated backups reside. The cluster parameter group (parameters that must be the same among all instances using that storage backend) is set at this level as well.
Instances are servers running a copy of MySQL with access to the storage backend. They have instance parameter groups, which define parameters that are allowed to differ between instances. Right now you can only have one writer instance per cluster plus multiple reader instances, although Amazon is working on multi-master, which would allow multiple writer instances.
You can add/remove instances at will, but once you delete the cluster itself, your storage (and all automatic snapshots!) goes away. Take manual snapshots to keep copies of your data that will not disappear if the cluster is deleted.

Persistent data on the filesystem in Service Fabric

I am wrapping Elasticsearch on Windows in a Service Fabric stateless service, so that it runs an Elasticsearch node on each node the service is running on.
Elasticsearch is distributed in the code package and can be updated with the application.
So in a 3-node Service Fabric cluster, each Elasticsearch node gets the name of the Service Fabric node it runs on.
What would be the best approach for locating the data for each Elasticsearch node?
My own idea would be to place it on the VM temp disks; as long as enough nodes stay up, Elasticsearch should replicate the data internally so that a single node can die.
I would also do daily backups, copying all data to external storage so I can restore it.
Are there any other options I should consider?
