I am using AWS ElastiCache Redis to store some data from Node.js. Is there any way to connect to AWS Redis using the ARN instead of the host name?
No, it's not possible. The point is: why would you want this?
Using the hostname lets your application:
resolve the hostname to the proper IP address, using a read/write endpoint and a read-only one, if needed
change the number and type of nodes without any impact on your application
Maybe your problem is that you are using the node hostname, for example:
myredisnode.jltg1a.0001.euw1.cache.amazonaws.com
instead of the balancer hostname:
myredisnode.jltg1a.ng.0001.euw1.cache.amazonaws.com
myredisnode-ro.jltg1a.ng.0001.euw1.cache.amazonaws.com
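A minimal sketch in Node.js of wiring this up, assuming a `REDIS_HOST` environment variable holding the balancer hostname (the variable name and default port are my choices, not anything mandated by AWS); pass the resulting options to whatever Redis client you use (e.g. ioredis):

```javascript
// Build Redis connection options from the environment rather than hard-coding
// an individual node hostname. REDIS_HOST / REDIS_PORT are assumed names.
function buildRedisOptions(env) {
  return {
    // e.g. myredisnode.jltg1a.ng.0001.euw1.cache.amazonaws.com (balancer hostname)
    host: env.REDIS_HOST,
    port: Number(env.REDIS_PORT || 6379), // ElastiCache's default Redis port
  };
}

// With ioredis, for example:
// const Redis = require('ioredis');
// const client = new Redis(buildRedisOptions(process.env));
```

Because only the hostname is configured, AWS can change the nodes behind it without any change to the application.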
Related
I'm new to Golang.
I've been looking at the AWS Lambda Golang documentation and still get a timeout when invoking the function.
I've been configuring:
Elasticache cluster (1 primary node),
VPC (one same VPC for redis and lambda),
Security groups,
Subnets,
Inbound and outbound,
Role
I have this primary Redis endpoint xxxxxx
I just need an example.
So, my questions are:
Can we connect to Redis on Linux without an EC2 instance? Possibly by trying it with RDM.
How do we put the AWS Redis endpoint in the main function? (Do we only need the endpoint, or something else?)
Is it possible to connect to Redis ElastiCache with only the endpoint (without AUTH)?
Thanks a lot!
Can we connect Redis in Linux without an EC2 instance?
Yes, of course, why would an EC2 instance be an additional requirement? You just need to include a Redis client library in your Lambda function's deployment artifact, and configure the Elasticache cluster to allow inbound traffic from the security group assigned to the Lambda function.
How do we put the AWS Redis endpoint in the main function? (Do we only need the endpoint, or something else?)
I would configure the endpoint as one of the Lambda function's environment variables.
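Though the question is about Go, the pattern is language-agnostic; here is a sketch in Node.js, assuming an environment variable I've named `REDIS_ENDPOINT` that holds a `host:port` string:

```javascript
// Parse an ElastiCache endpoint such as
// "my-cluster.xxxxxx.cache.amazonaws.com:6379" taken from an environment
// variable, so the handler never hard-codes it. REDIS_ENDPOINT is an
// assumed variable name, not an AWS convention.
function parseEndpoint(endpoint) {
  const [host, port] = endpoint.split(':');
  return { host, port: Number(port || 6379) }; // fall back to the default port
}

// In the handler:
// const { host, port } = parseEndpoint(process.env.REDIS_ENDPOINT);
```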
Is it possible to connect to Redis ElastiCache with only the endpoint (without AUTH)?
If you don't enable AUTH on Elasticache, then you can connect without AUTH. AUTH is an optional configuration setting.
I've created an Aurora MySQL serverless DB cluster in AWS and I want to connect to it from my computer using MySQL Workbench. I've entered the endpoint as well as the master user and password, but when I try to connect, it hangs for about a minute and then says it cannot connect (no further info is given).
Also, when I try to ping the endpoint, the name resolves but I don't get any answer.
I've read all the AWS documentation but I really cannot find how to connect. In the VPC security group I've enabled all inbound and outbound traffic on all ports and protocols. The AWS docs say to enable public access in the DB settings, but I cannot find such an option.
You can't give an Amazon Aurora Serverless v1 DB cluster a public IP address. You can access an Aurora Serverless v1 DB cluster only from within a virtual private cloud (VPC), based on the Amazon VPC service. For Aurora Serverless v2 you can make a cluster public: make sure you have the proper ingress rules set up and enable public access in the database configuration. For more information, see Using Amazon Aurora Serverless.
https://aws.amazon.com/premiumsupport/knowledge-center/aurora-private-public-endpoints/ .
Is it possible to connect to Aurora (MySQL) through its JDBC endpoint using Workbench or any other tool from my local machine?
Yes, of course. It is the same as a regular Aurora database, but serverless, and you can connect to it using Workbench or any JDBC driver. However, a serverless Aurora (v1) cluster cannot be assigned a public IP, which means the DB is not accessible from outside the VPC. Since it is private, you cannot access it directly.
In order to access a private DB, you need a proxy: an EC2 instance inside the same VPC with a public IP, or you can use AWS Direct Connect.
There is some explanation of AWS Direct Connect here that may help resolve your case:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/aurora-serverless.html
This explains how to connect to a private RDS instance from your local machine by passing through a public EC2 instance:
https://medium.com/@carlos.ribeiro/connecting-on-rds-server-that-is-not-publicly-accessible-1aee9e43b870
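The approach in that post boils down to an SSH tunnel through a public EC2 instance in the same VPC; a command sketch, where the key path, cluster endpoint, and bastion hostname are all placeholders:

```shell
# Forward local port 3306 to the private Aurora endpoint via a public
# EC2 instance ("bastion") in the same VPC. All names are placeholders.
ssh -i ~/.ssh/bastion-key.pem -N -L \
  3306:my-cluster.cluster-xxxxxx.eu-west-1.rds.amazonaws.com:3306 \
  ec2-user@bastion-public-hostname
# Then point MySQL Workbench at 127.0.0.1:3306 with the master credentials.
```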
For those who don't want to use EC2 as a proxy and need a solution without using Direct Connect:
Have a look at Amazon Client VPN. Using this tool (in the VPC service), you can configure a connection to the VPC where the database is located and connect to it through VPN.
Here is a guide on how to configure Client VPN: https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/cvpn-getting-started.html#cvpn-getting-started-certs
I want to create a website that can only be accessed when using a specific VPN service. I'm using Node.js to create the server. I don't have any specific VPN service in mind right now, but I'm open to suggestions. Is there any way to achieve that?
You can use AWS EC2, VPC and Direct Connect to solve this problem.
Run your Node.js server on EC2 and use Direct Connect to connect to your server.
If you want to maintain all this yourself, you can use OpenVPN between your server and your local network.
The easiest way I can think of to achieve this is to get the VPN's IP address range and then create firewall rules that whitelist those IP ranges.
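If you'd rather enforce this in the Node.js server itself than in a firewall, a sketch of an allowlist check is below; the CIDR range is a placeholder for whatever egress range your VPN provider publishes:

```javascript
// Allow requests only from specific IPv4 ranges (e.g. your VPN's egress range).
// The range below is a placeholder, not a real VPN provider's range.
const ALLOWED_CIDRS = ['10.8.0.0/16'];

// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

// True if `ip` falls inside the CIDR block, e.g. inCidr('10.8.1.2', '10.8.0.0/16').
function inCidr(ip, cidr) {
  const [base, bits] = cidr.split('/');
  const mask = bits === '0' ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

function isAllowed(ip) {
  return ALLOWED_CIDRS.some((cidr) => inCidr(ip, cidr));
}

// As Express middleware, for example:
// app.use((req, res, next) => (isAllowed(req.ip) ? next() : res.status(403).end()));
```

Note that behind a load balancer or proxy you'd need to trust the forwarded client IP (e.g. Express's `trust proxy` setting) for this check to see the real address.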
Some of my data is in Mongo replicas hosted in Docker containers running in a Kubernetes cluster. I need to access this data from an AWS Lambda running in the same VPC and subnet as the Kubernetes minions hosting the Mongo containers. The Lambda and the minions run under the same security group. I am trying to connect using the URL "mongodb://mongo-rs-1-svc,mongo-rs-2-svc,mongo-rs-3-svc/res?replicaSet=mongo_rs", where mongo-rs-x-svc are three Kubernetes services that provide access to the appropriate replicas. When I try to connect using this URL, it fails to resolve the Mongo replica hostnames (e.g. mongo-rs-2-svc). The same URL works fine for my web service, which runs in its own Docker container in the same Kubernetes cluster.
Here is the error I get from the Mongo client that I use:
{"name":"MongoError","message":"failed to connect to server [mongo-rs-1-svc:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo-rs-1-svc mongo-rs-1-svc:27017]"}
I tried replacing mongo-rs-x-svc with their internal IP addresses in the URL. In this case the name resolution error disappeared, but I got another error:
{"name":"MongoError","message":"failed to connect to server [10.0.170.237:27017] on first connect [MongoError: connection 5 to 10.0.170.237:27017 timed out]"}
What should I be doing to enable this access successfully?
I understand that I can use the web service as an intermediary to access this data, but since my Lambda is in a VPC, I would have to deploy NAT gateways, and that would increase the cost. Is there a way to access the web service using an internal endpoint instead of the public URL? Maybe that is another way to get the data.
If any of you have a solution for this scenario, please share. I went through many threads that showed up as similar questions or in search results, but none had a solution for this case.
This is a common confusion with Kubernetes. The Service object in Kubernetes is only accessible from inside Kubernetes by default (i.e. when type: ClusterIP is set). If you want to be able to access it from outside the cluster you need to edit the service so that it is type: NodePort or type: LoadBalancer.
I'm not entirely sure, but it sounds like your network setup would allow you to use type: NodePort for your Service in Kubernetes. That will open a high-numbered port (e.g. 32XXX) on each of the Nodes in your cluster that forwards to your Mongo Pod(s). DNS resolution for the service names (e.g. mongo-rs-1-svc) will only work inside the Kubernetes cluster, but by using NodePort I think you should be able to address them as mongodb://ec2-instance-1-ip:32XXX,ec2-instance-2-ip:32XXX,....
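That connection string can be built mechanically from the node IPs and the assigned NodePort; a Node.js sketch, where the IPs, port, and database name are placeholders matching the question's URL shape:

```javascript
// Build a MongoDB connection URI addressing replicas through Kubernetes
// NodePorts (node IP + high-numbered port) instead of in-cluster DNS names.
// IPs and the NodePort below are placeholders.
function nodePortUri(nodes, { db, replicaSet }) {
  const hosts = nodes.map(({ ip, port }) => `${ip}:${port}`).join(',');
  return `mongodb://${hosts}/${db}?replicaSet=${replicaSet}`;
}

// Example:
// nodePortUri(
//   [{ ip: '10.0.0.1', port: 32017 }, { ip: '10.0.0.2', port: 32017 }],
//   { db: 'res', replicaSet: 'mongo_rs' }
// );
```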
Coreyphobrien's answer is correct. Subsequently, you asked how to keep the exposure private. For that I want to add some information:
You need to make the Lambdas part of the VPC that your cluster is in. For this, use the --vpc-config parameter when creating or updating the Lambdas. This will create a virtual network interface in the VPC that gives the Lambda access. For details see this.
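With the AWS CLI, attaching an existing function to the VPC looks roughly like this; the function name, subnet IDs, and security-group ID are placeholders:

```shell
# Attach a Lambda function to the cluster's VPC so it can reach the NodePorts.
# All identifiers below are placeholders.
aws lambda update-function-configuration \
  --function-name my-mongo-reader \
  --vpc-config SubnetIds=subnet-0abc1234,subnet-0def5678,SecurityGroupIds=sg-0123abcd
```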
After that you should be able to set the AWS security group for your instances so that the NodePort will only be accessible from another security group that is used for your Lambdas network interface.
This blog discusses an example in more detail.