I'm new to Go.
I've been reading the AWS Lambda Go documentation and I still get a timeout when invoking the function.
I've already configured:
ElastiCache cluster (1 primary node),
VPC (the same VPC for Redis and Lambda),
Security groups,
Subnets,
Inbound and outbound rules,
IAM role
I have the primary Redis endpoint, xxxxxx.
I just need an example.
So, my questions are:
Can we connect to Redis on Linux without an EC2 instance? Possibly try it with RDM.
How do we put the AWS Redis endpoint in the main function? (Is the endpoint alone enough, or do we need something else?)
Is it possible to connect to ElastiCache Redis with only the endpoint (without AUTH)?
Thanks a lot!
Can we connect to Redis on Linux without an EC2 instance?
Yes, of course. Why would an EC2 instance be an additional requirement? You just need to include a Redis client library in your Lambda function's deployment artifact, and configure the ElastiCache cluster to allow inbound traffic from the security group assigned to the Lambda function.
How do we put the AWS Redis endpoint in the main function? (Is the endpoint alone enough, or do we need something else?)
I would configure the endpoint as one of the Lambda function's environment variables.
Is it possible to connect to ElastiCache Redis with only the endpoint (without AUTH)?
If you don't enable AUTH on Elasticache, then you can connect without AUTH. AUTH is an optional configuration setting.
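As a concrete example, a minimal Go handler might look like the sketch below. It assumes the go-redis client and an environment variable named REDIS_ENDPOINT holding host:port; both are illustrative choices, not requirements:

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/redis/go-redis/v9"
)

func handler(ctx context.Context) (string, error) {
    // REDIS_ENDPOINT is assumed to hold "host:port", e.g. the primary
    // endpoint of the ElastiCache cluster plus ":6379".
    client := redis.NewClient(&redis.Options{
        Addr: os.Getenv("REDIS_ENDPOINT"),
    })
    defer client.Close()

    // A PING confirms connectivity; a timeout here usually means the
    // security group or subnet configuration is wrong, not the code.
    if err := client.Ping(ctx).Err(); err != nil {
        return "", fmt.Errorf("redis ping failed: %w", err)
    }
    return "connected", nil
}

func main() {
    lambda.Start(handler)
}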
Related
I'm stuck on a problem: I'm getting a connection timeout error when connecting to an ElastiCache endpoint from AWS Lambda using Node.js.
My Lambda function is not attached to any VPC, but the ElastiCache cluster of course has a VPC, and I already made it public by setting up the inbound and outbound traffic rules.
I also tried it from my local machine over OVPN and could not resolve the ElastiCache endpoint.
How do I connect to ElastiCache Redis from Node.js?
I would really appreciate a helping hand with this problem.
Thanks
I am using AWS ElastiCache Redis to fetch some data with Node.js. Is there any way to connect to AWS Redis using the ARN in the config instead of the host?
No, it's not possible. The real question is: why would you want to?
Using the hostname lets your application:
resolve the hostname to the proper IP address, with a read/write endpoint and a read-only one if needed
change the number and type of nodes without any impact on your application
Maybe your problem is that you are using the node hostname, for example:
myredisnode.jltg1a.0001.euw1.cache.amazonaws.com
instead of the balancer hostname:
myredisnode.jltg1a.ng.0001.euw1.cache.amazonaws.com
myredisnode-ro.jltg1a.ng.0001.euw1.cache.amazonaws.com
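That said, if an ARN is all you have, its last segment is the replication group ID, and you can look the hostname up through the ElastiCache API at startup. A rough sketch with the v1 AWS SDK for Go; myredisnode is the hypothetical group ID from the example above:

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/elasticache"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := elasticache.New(sess)

    // The replication group ID is the last segment of the cluster ARN,
    // so an ARN can be trimmed down to it before this call.
    out, err := svc.DescribeReplicationGroups(&elasticache.DescribeReplicationGroupsInput{
        ReplicationGroupId: aws.String("myredisnode"),
    })
    if err != nil {
        log.Fatal(err)
    }

    // PrimaryEndpoint is the read/write hostname to hand to the Redis
    // client; ReaderEndpoint (where available) is the read-only counterpart.
    ep := out.ReplicationGroups[0].NodeGroups[0].PrimaryEndpoint
    fmt.Printf("%s:%d\n", aws.StringValue(ep.Address), aws.Int64Value(ep.Port))
}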
Is it possible to connect to Aurora (MySQL) using the JDBC driver endpoint with Workbench or any other tool from my local machine?
Yes, of course. It is the same as a regular Aurora database, just serverless. You can connect to it with Workbench or any JDBC driver. However, serverless Aurora cannot be assigned a public IP, which means the DB is not accessible from outside the VPC. Since it is private, you cannot access it directly.
To access a private DB, you need a proxy: an EC2 instance inside the same VPC with a public IP, or AWS Direct Connect.
The Aurora Serverless documentation has some explanation of AWS Direct Connect that may resolve your case:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/aurora-serverless.html
This post explains how to connect to a private RDS instance from your local machine by going through a public EC2 instance:
https://medium.com/@carlos.ribeiro/connecting-on-rds-server-that-is-not-publicly-accessible-1aee9e43b870
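Once an SSH tunnel through such an EC2 instance is up, for example ssh -N -L 3307:&lt;cluster-endpoint&gt;:3306 ec2-user@&lt;bastion-host&gt;, connecting from local code is ordinary. A minimal sketch in Go with the go-sql-driver/mysql driver; every host, port, and credential below is a placeholder:

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // Assumes the SSH tunnel above is forwarding local port 3307 to the
    // Aurora cluster endpoint through a public EC2 host in the same VPC.
    // User, password, and database name are placeholders.
    db, err := sql.Open("mysql", "admin:password@tcp(127.0.0.1:3307)/mydb")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Ping forces a real connection, so a misconfigured tunnel fails here.
    if err := db.Ping(); err != nil {
        log.Fatal(err)
    }
    fmt.Println("connected through the tunnel")
}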
For those who don't want to use EC2 as a proxy and need a solution without Direct Connect:
Have a look at AWS Client VPN. Using this service (in the VPC console), you can configure a connection to the VPC where the database is located and connect to it through VPN.
Here is a guide on how to configure Client VPN: https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/cvpn-getting-started.html#cvpn-getting-started-certs
Some of my data is in Mongo replicas hosted in Docker containers running in a Kubernetes cluster. I need to access this data from an AWS Lambda that runs in the same VPC and subnet as the Kubernetes minions hosting the Mongo containers. The Lambda and the minions run under the same security group. I am trying to connect using the URL "mongodb://mongo-rs-1-svc,mongo-rs-2-svc,mongo-rs-3-svc/res?replicaSet=mongo_rs", where mongo-rs-x-svc are three Kubernetes services that expose the appropriate replicas. When I try to connect using this URL, it fails to resolve the Mongo replica hostnames (e.g. mongo-rs-2-svc). The same URL works fine for my web service, which runs in its own Docker container in the same Kubernetes cluster.
Here is the error I get from the Mongo client I use:
{"name":"MongoError","message":"failed to connect to server [mongo-rs-1-svc:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo-rs-1-svc mongo-rs-1-svc:27017]"}
I tried replacing mongo-rs-x-svc with their internal IP addresses in the URL. The name resolution error disappeared, but I got another error:
{"name":"MongoError","message":"failed to connect to server [10.0.170.237:27017] on first connect [MongoError: connection 5 to 10.0.170.237:27017 timed out]"}
What should I be doing to enable this access successfully?
I understand that I can use the web service as an intermediary to access this data, but since my Lambda is in a VPC, I would have to deploy NAT gateways, which would increase the cost. Is there a way to access the web service through an internal endpoint instead of the public URL? Maybe that is another way to get the data.
If any of you have a solution for this scenario, please share. I went through many threads that showed up as similar questions or in search results, but none of them had a solution for this case.
This is a common confusion with Kubernetes. The Service object in Kubernetes is only accessible from inside Kubernetes by default (i.e. when type: ClusterIP is set). If you want to be able to access it from outside the cluster you need to edit the service so that it is type: NodePort or type: LoadBalancer.
I'm not entirely sure, but it sounds like your network setup would allow you to use type: NodePort for your Service in Kubernetes. That will open a high-numbered port (e.g. 32XXX) on each of the Nodes in your cluster that forwards to your Mongo Pod(s). DNS resolution for the service names (e.g. mongo-rs-1-svc) will only work inside the Kubernetes cluster, but by using NodePort I think you should be able to address them as mongodb://ec2-instance-1-ip:32XXX,ec2-instance-2-ip:32XXX,....
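For illustration, a NodePort variant of one of those Services might look like the sketch below; the selector label and the nodePort value are guesses, not taken from your cluster:

apiVersion: v1
kind: Service
metadata:
  name: mongo-rs-1-svc
spec:
  type: NodePort            # was ClusterIP; NodePort also opens a port on every node
  selector:
    app: mongo-rs-1         # hypothetical pod label
  ports:
    - port: 27017           # service port inside the cluster
      targetPort: 27017     # container port of the Mongo replica
      nodePort: 32017       # optional fixed choice; must fall in 30000-32767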
Coreyphobrien's answer is correct. In a follow-up you asked how to keep the exposure private. For that I want to add some information:
You need to make the Lambdas part of the VPC that your cluster is in. For this you use the --vpc-config parameter when creating or updating the Lambdas. This creates a virtual network interface in the VPC that gives the Lambda access. For details see this.
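As a sketch, the same update via the v1 AWS SDK for Go might look like this; the function name, subnet ID, and security group ID are placeholders:

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := lambda.New(sess)

    // Use the subnets of the VPC your cluster runs in and a dedicated
    // security group for the Lambda's network interface.
    _, err := svc.UpdateFunctionConfiguration(&lambda.UpdateFunctionConfigurationInput{
        FunctionName: aws.String("my-function"),
        VpcConfig: &lambda.VpcConfig{
            SubnetIds:        aws.StringSlice([]string{"subnet-0123456789abcdef0"}),
            SecurityGroupIds: aws.StringSlice([]string{"sg-0123456789abcdef0"}),
        },
    })
    if err != nil {
        log.Fatal(err)
    }
}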
After that you should be able to set the AWS security group for your instances so that the NodePort is only accessible from another security group, the one attached to your Lambdas' network interfaces.
This blog discusses an example in more detail.
I was not able to find any security groups for AWS Lambda.
Is there a way to allow access from AWS Lambda to RDS without allowing all IPs (0.0.0.0/0) and without allowing the entire Amazon IP range?
As @user5919440 suggests, now that this new feature is out:
https://aws.amazon.com/blogs/aws/new-access-resources-in-a-vpc-from-your-lambda-functions/
...you simply need to tell AWS Lambda which VPC subnets to bind to your function. The function can then communicate with any AWS service that also has access to that subnet.
This means you should be able to add a rule to your RDS security group that allows traffic from the same internal subnet (10.x.x.x) that your Lambda function is bound to.
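As an illustration, such a rule could be created with the v1 AWS SDK for Go. Both group IDs below are placeholders (one for the RDS instance, one attached to the Lambda's VPC network interface); referencing the Lambda's group is tighter than opening a whole subnet CIDR:

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := ec2.New(sess)

    // Allow the Lambda's security group to reach the RDS group on the
    // MySQL port; adjust the port for your database engine.
    _, err := svc.AuthorizeSecurityGroupIngress(&ec2.AuthorizeSecurityGroupIngressInput{
        GroupId: aws.String("sg-0rds00000000000000"), // RDS security group (placeholder)
        IpPermissions: []*ec2.IpPermission{{
            IpProtocol: aws.String("tcp"),
            FromPort:   aws.Int64(3306),
            ToPort:     aws.Int64(3306),
            UserIdGroupPairs: []*ec2.UserIdGroupPair{{
                GroupId: aws.String("sg-0lambda000000000000"), // Lambda security group (placeholder)
            }},
        }},
    })
    if err != nil {
        log.Fatal(err)
    }
}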
This feature is out as of yesterday
https://aws.amazon.com/blogs/aws/new-access-resources-in-a-vpc-from-your-lambda-functions/
There currently isn't, and a moment's reflection suggests that if there were, it would be a false sense of security -- the traffic wouldn't be assured to be from your Lambda functions... but from anybody's -- the IP addresses are pooled.
There have been hints of a future mechanism to allow a cleaner trust relationship between Lambda and VPC, perhaps implemented with the VPC Endpoint feature (currently available only with S3), or perhaps differently... no details have been forthcoming, so far.