How to add a security group to an existing RDS with CDK without a cyclic dependency

I have a multi-stack application where I want to deploy an RDS in one stack and then in a later stack deploy a Fargate cluster that connects to the RDS.
Here is how the RDS gets defined:
this.rdsSG = new ec2.SecurityGroup(this, `ecsSG`, {
  vpc: props.vpc,
  allowAllOutbound: true,
});
this.rdsSG.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(5432), 'Ingress 5432');

this.aurora = new rds.ServerlessCluster(this, `rds`, {
  engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
  parameterGroup: rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql10'),
  vpc: props.vpc,
  securityGroups: [this.rdsSG],
  // more properties below
});
With that ingress rule everything works: since the RDS and the Fargate service are in the same VPC, they can communicate fine. But it worries me to leave the database open to the world, even though it sits in its own VPC. Here is the Fargate side:
const ecsSG = new ec2.SecurityGroup(this, `ecsSG`, {
  vpc: props.vpc,
  allowAllOutbound: true,
});

const service = new ecs.FargateService(this, `service`, {
  cluster,
  desiredCount: 1,
  taskDefinition,
  securityGroups: [ecsSG],
  assignPublicIp: true,
});
How can I remove that ingress rule and instead allow inbound connections to the RDS from ecsSG, which gets deployed later? If I make the following call from the later stack, I get a cyclic dependency error:
props.rdsSG.connections.allowFrom(ecsSG, ec2.Port.allTcp(), 'Aurora RDS');
Thanks for your help!

This turned out to be easier than I thought: instead of modifying the RDS security group to accept traffic from the ECS security group, you can flip the direction and use allowTo on the ECS side to open a connection to the RDS instance.
ecsSG.connections.allowTo(props.rds, ec2.Port.tcp(5432), 'RDS Instance');
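For context, a minimal sketch of the later (Fargate) stack under this approach, assuming the RDS stack passes its ServerlessCluster to this stack as props.rds and the anyIpv4 ingress rule on rdsSG has been removed:
// Later stack: props.rds is the ServerlessCluster created in the RDS stack.
const ecsSG = new ec2.SecurityGroup(this, `ecsSG`, {
  vpc: props.vpc,
  allowAllOutbound: true,
});

// Opens only port 5432 from the Fargate tasks toward the cluster; the rule is
// authored from this stack, so the RDS stack does not need to change.
ecsSG.connections.allowTo(props.rds, ec2.Port.tcp(5432), 'Aurora RDS');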

Also, going the other way round, the RDS security group might be better described by the aws_rds module rather than the aws_ec2 module: https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_rds/CfnDBSecurityGroup.html (couldn't post a comment due to low rep)

Just as an additional possibility: what works for me is not defining any security group at all. Just create the service and the database, and connect the two:
const service = new ecsPatterns.ApplicationLoadBalancedEc2Service(this, 'app-service', {
  cluster,
  ...
});

const dbCluster = new ServerlessCluster(this, 'DbCluster', {
  engine: dbEngine,
  ...
});
dbCluster.connections.allowDefaultPortFrom(service.service);

Related

AWS Lambda Node.js not starting EC2 instance

I am writing EC2 scheduler logic to start and stop EC2 instances.
The Lambda works for stopping instances, but the start function is not initiating the EC2 start.
The logic filters instances based on their tags and current status, and starts or stops them accordingly.
Below is the code snippet that should start the EC2 instances, but it isn't starting them.
The filtering happens correctly and pushes the instance IDs into the "stopParams" object.
The same code works if I change the logic to ec2.stopInstances and filter for running instances. The role has permissions to start and stop instances.
Any ideas why it's not triggering the start?
if (instances.length > 0) {
  var stopParams = { InstanceIds: instances };
  ec2.startInstances(stopParams, function (err, data) {
    if (err) {
      console.log(err, err.stack);
    } else {
      console.log(data);
    }
    context.done(err, data);
  });
}
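For reference, the tag/state filtering described above could look roughly like this with the AWS SDK for JavaScript v2 (a sketch; the tag key and value are assumptions):
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2();

// Find stopped instances that carry the scheduler tag (tag name and value are placeholders).
var describeParams = {
  Filters: [
    { Name: 'tag:Schedule', Values: ['office-hours'] },
    { Name: 'instance-state-name', Values: ['stopped'] },
  ],
};

ec2.describeInstances(describeParams, function (err, data) {
  if (err) return console.log(err, err.stack);
  var instances = [];
  data.Reservations.forEach(function (reservation) {
    reservation.Instances.forEach(function (instance) {
      instances.push(instance.InstanceId);
    });
  });
  console.log('Instances to start:', instances);
});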
Finally got this working. There were no issues with the Node.js Lambda code. Stopping instances worked, but starting them never invoked the start method. It turned out that all of the volumes are encrypted.
To start an instance via an API call, the role used by the Lambda must have permission to use the KMS key that encrypts the volumes. After adding the Lambda role ARN to the principal section of the KMS key policy, the Lambda was able to start instances. The key permission is not necessary for stopping an instance. Hope this helps.
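As an illustration, the key policy statement could look roughly like this (a sketch; the Sid, account ID, and role name are placeholders, and the exact set of KMS actions you need may vary by setup):
{
  "Sid": "AllowSchedulerLambdaToUseKey",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/ec2-scheduler-lambda-role"
  },
  "Action": [
    "kms:CreateGrant",
    "kms:Decrypt",
    "kms:DescribeKey",
    "kms:GenerateDataKeyWithoutPlaintext",
    "kms:ReEncrypt*"
  ],
  "Resource": "*"
}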

Spawn containers on ACI using @azure/arm-containerinstance

I am working on a data-processing microservice. I have this microservice dockerized and now I want to deploy it. To achieve that, I am trying to manage containers in Azure Container Instances using an Azure Function written in Node.js.
The first thing I wanted to test is spawning containers within a group. My idea was:
const oldConfig = await client.containerGroups.get(
  'resourceGroup',
  'resourceName'
);

const response = await client.containerGroups.createOrUpdate(
  'resourceGroup',
  'resourceName',
  {
    osType: oldConfig.osType,
    containers: [
      ...oldConfig.containers,
      {
        name: 'test',
        image: 'hello-world',
        resources: {
          requests: {
            memoryInGB: 1,
            cpu: 1,
          },
        },
      },
    ],
  }
);
I've added osType because the docs and the interface say it's required, but when I do this I receive the error 'to update osType you need to remove and create group containers'. When I remove osType, the request succeeds, but the ACI group does not change. I cannot recreate the whole group for every new container, because I want the containers to process jobs and terminate by themselves.
Not all properties support updates. See the details from the documentation:
Not all container group properties can be updated. For example, to change the restart policy of a container, you must first delete the container group, then create it again.
Changes to these properties require container group deletion prior to redeployment:
- OS type
- CPU, memory, or GPU resources
- Restart policy
- Network profile
So the container group will not change after you update the osType. You need to delete the container group and create it again with your changes. Get more details about Update in the documentation.
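A rough sketch of the delete-then-recreate flow with @azure/arm-containerinstance could look like this (the delete call is named deleteMethod in older SDK versions and beginDeleteAndWait in newer track 2 versions, so adjust to your version; the resource names are the same placeholders as in the question):
// Fetch the current definition so the group can be recreated with the extra container.
const oldConfig = await client.containerGroups.get('resourceGroup', 'resourceName');

// Delete the existing group first (method name depends on the SDK version).
await client.containerGroups.deleteMethod('resourceGroup', 'resourceName');

// Recreate it with the original containers plus the new one.
await client.containerGroups.createOrUpdate('resourceGroup', 'resourceName', {
  location: oldConfig.location,
  osType: oldConfig.osType,
  containers: [
    ...oldConfig.containers,
    {
      name: 'test',
      image: 'hello-world',
      resources: { requests: { memoryInGB: 1, cpu: 1 } },
    },
  ],
});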

Cron cluster with one Redis instance and multiple server roles

I am trying to create a clustered cron job using cron-cluster and Redis on Node.js. We are running multiple servers with different roles (production, staging, test) and the same codebase. They are all connected to the same Redis instance.
How do I make cron-cluster run only once on each role?
Currently, it picks only one server across the whole fleet (production, staging, test) and runs everything there.
I had the same issue with the cron-cluster library.
To solve it, you should pass a unique key while initializing the instance. For example:
const ClusterCronJob = require('cron-cluster')(redisClient, { key: leaderKey }).CronJob;
Where { key: leaderKey } can be set as follows:
const leaderKey = process.env.NODE_ENV;
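Putting it together, a minimal sketch could look like this (the Redis client setup and the cron expression are assumptions; adjust to your redis package version):
const redis = require('redis');
const redisClient = redis.createClient(process.env.REDIS_URL);

// One leader per role: servers sharing the same NODE_ENV elect a single runner.
const leaderKey = process.env.NODE_ENV;
const CronJob = require('cron-cluster')(redisClient, { key: leaderKey }).CronJob;

const job = new CronJob('*/5 * * * *', function () {
  console.log('Runs on exactly one server per role, every 5 minutes');
});
job.start();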

Round Robin for gRPC (nodejs) on kubernetes with headless service

I have 3 Node.js gRPC server pods and a headless Kubernetes service for the gRPC service (it returns all 3 pod IPs via DNS, tested with getent hosts from within a pod). However, all gRPC client requests always end up at a single server.
According to https://stackoverflow.com/a/39756233/2952128 (last paragraph), round robin per call should have been possible as of Q1 2017. I am using grpc 1.1.2.
I tried passing {"loadBalancingPolicy": "round-robin"} as options to new Client(address, credentials, options) and using dns:///service:port as the address. If I understand the documentation/code correctly, this should be handed down to the C core and use the newly implemented round-robin channel creation. (https://github.com/grpc/grpc/blob/master/doc/service_config.md)
Is this how round-robin load balancer is supposed to work now? Is it already released with grpc 1.1.2?
After diving deep into the gRPC C-core code and the Node.js adapter, I found that it works by using the option key "grpc.lb_policy_name". Therefore, constructing the gRPC client with
new Client(address, credentials, {"grpc.lb_policy_name": "round_robin"})
works.
Note that in my original question I used round-robin instead of the correct round_robin.
I am still not completely sure how to set the serviceConfig from the service side with Node.js instead of using the client (channel) option override.
I'm not sure if this helps, but this discussion shows how to implement load balancing strategies via grpc.service_config.
const options = {
  'grpc.ssl_target_name_override': ...,
  'grpc.lb_policy_name': 'round_robin', // <--- has no effect in grpc-js
  'grpc.service_config': JSON.stringify({ loadBalancingConfig: [{ round_robin: {} }] }), // <--- but this still works
};
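For completeness, a minimal sketch of constructing a @grpc/grpc-js client against the headless service with these options (the proto path, package, service, and DNS name are assumptions):
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// Load the service definition (path and package/service names are placeholders).
const packageDefinition = protoLoader.loadSync('my_service.proto');
const proto = grpc.loadPackageDefinition(packageDefinition);

const options = {
  'grpc.service_config': JSON.stringify({ loadBalancingConfig: [{ round_robin: {} }] }),
};

// dns:/// plus the headless service name lets the resolver see all pod IPs.
const client = new proto.mypackage.MyService(
  'dns:///my-grpc-service.my-namespace.svc.cluster.local:50051',
  grpc.credentials.createInsecure(),
  options
);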

Link server with PG RDS from AWS

I am trying to get the React-fullstack seed running on my local machine. The first thing I want to do is connect the server to a database. In the config.js file there is this line:
export const databaseUrl = process.env.DATABASE_URL || 'postgresql://demo:Lqk62xgfsdm5UhfR@demo.ctbl5itzitm4.us-east-1.rds.amazonaws.com:5432/membership01';
I do not believe I have access to the account created in the seed so I am trying to create my own AWS PG RDS. I have the following information and can access more:
endpoint: my110.cqw0hciryhbq.us-west-2.rds.amazonaws.com:5432
group-ID: sg-1422f322
VPC-ID: vpc-ec22d922
masterusername: my-username
password: password444
According to the PG documentation I should be looking for something like this:
var conString = "postgres://username:password@localhost/database";
I currently have:
postgres://my-username:password444@my110.cqw0hciryhbq.us-west-2.rds.amazonaws.com:5432
What do I put in for 'database'?
Can someone share a method to ping the DB from the seed on my local machine to see if they are connected and working properly?
I can't really speak to anything specific to the React package, but generally, when connecting to a Postgres server (whether RDS or your own install), you put the name of the database at the end of the connection string, hence:
postgres://username:password@hostname:port/databaseName
So, when you created the RDS database (I assume you already spun up RDS??), you had to tell RDS what you wanted to call the database. If you spun up RDS already, log in to the AWS console, go to RDS, go to your RDS instances, select the correct instance, click "Instance Actions" and then "See Details". That page will show you a bunch of details for your RDS instance, one of which is "DB Name". That's the name you put in the connection string.
If you have not already spun up your own RDS instance, then go ahead and do so and you will see where it asks for a database name that you specify.
Hope that helps, let me know if it doesn't.
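To address the second part of the question, a quick way to check connectivity from your local machine is a tiny script with the pg package (a sketch; replace databaseName with the DB Name from the console):
const { Client } = require('pg');

const client = new Client({
  connectionString: 'postgres://my-username:password444@my110.cqw0hciryhbq.us-west-2.rds.amazonaws.com:5432/databaseName',
});

client
  .connect()
  .then(() => client.query('SELECT 1'))
  .then(() => console.log('Connected to RDS successfully'))
  .catch((err) => console.error('Connection failed:', err))
  .finally(() => client.end());
Note that the RDS instance's security group must allow inbound traffic on port 5432 from your local IP for this check to succeed.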
