GCP Serverless VPC Access connection causes 408 timeout - Node.js

My Cloud Functions backend needs to access an external service, and my IP address has to be whitelisted with that service before I can access it. Since Cloud Functions is serverless, it does not have a static IP address. I searched for a way to give Cloud Functions a static IP address, and the answer was Serverless VPC Access under GCP VPC network.
The image below shows my VPC setup.
The Cloud Functions VPC connection setting is shown below.
Cloud Functions worked fine before setting up the VPC connection. After the VPC connection was set up, the error below occurs.
Do I need any other settings for the VPC connection, or do I need to change something in my server code to get rid of the 408 timeout error?
If you know anything about this problem, please share your knowledge. Thank you
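A minimal sketch of the kind of outbound call that hits this 408 (the service URL is hypothetical, and the sketch assumes a Node.js 18+ runtime so the global fetch and AbortSignal.timeout are available):

    // Hypothetical external call from an HTTP Cloud Function (Node.js 18+).
    // Once all egress is routed through the VPC connector without a working
    // path to the internet, this request never completes.
    exports.callExternalService = async (req, res) => {
      try {
        const response = await fetch('https://partner-api.example.com/data', {
          signal: AbortSignal.timeout(5000), // fail fast instead of hanging
        });
        res.status(200).send(await response.text());
      } catch (err) {
        // With a broken egress path, the timeout surfaces here.
        res.status(408).send(`Upstream call failed: ${err.message}`);
      }
    };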

Related

How to reach S3 from a Node.js Lambda that is inside a VPC?

My Lambda is getting ETIMEDOUT for hostname s3.amazonaws.com. I figured out that this is probably caused by the fact that my Lambda is inside a VPC.
I suspect that I need to use an AWS endpoint to reach S3. I know that we have an endpoint set up, and in the AWS Console I can see an endpoint associated with this VPC called dev-s3-gateway with service name com.amazonaws.us-east-1.s3.
How do I tell my Lambda to use this endpoint?
Inside a VPC, your Lambda can be in either a private subnet or a public subnet.
A Lambda in a public subnet CANNOT access the internet, because Lambda functions are never assigned public IP addresses, so an Internet Gateway alone does not help them.
Having Lambda access the internet
Being in a private subnet, your Lambda cannot have "direct" access to the internet. You need to provision a public subnet in your VPC, attach an Internet Gateway to the VPC, and place a NAT Gateway in that public subnet.
In your Lambda's (private) subnet, route traffic to 0.0.0.0/0 via the NAT Gateway.
In the routing table of the public subnet, route traffic to 0.0.0.0/0 via the Internet Gateway.
Your Lambda should now have access to the internet (and to S3).
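If you prefer to set the two routes up programmatically rather than in the console, a sketch with the AWS SDK for JavaScript v3 could look like this (the route table, NAT Gateway, and Internet Gateway IDs are placeholders):

    // Sketch only: assumes @aws-sdk/client-ec2 is installed; all IDs are placeholders.
    const { EC2Client, CreateRouteCommand } = require('@aws-sdk/client-ec2');

    const ec2 = new EC2Client({ region: 'us-east-1' });

    async function addInternetRoutes() {
      // Private subnet (where the Lambda runs): default route via the NAT Gateway.
      await ec2.send(new CreateRouteCommand({
        RouteTableId: 'rtb-0private0000000000',
        DestinationCidrBlock: '0.0.0.0/0',
        NatGatewayId: 'nat-00000000000000000',
      }));

      // Public subnet (where the NAT Gateway lives): default route via the Internet Gateway.
      await ec2.send(new CreateRouteCommand({
        RouteTableId: 'rtb-0public00000000000',
        DestinationCidrBlock: '0.0.0.0/0',
        GatewayId: 'igw-00000000000000000',
      }));
    }

    addInternetRoutes().catch(console.error);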
Feel free to also check out this more detailed guide.
Accessing AWS services through VPC endpoints
Several AWS services offer VPC endpoints. These allow you to connect and interact with the respective services without your traffic ever leaving the AWS network.
For more information on them, please check out their documentation.
EDIT: Expanding a bit on your specific S3 use case.
S3 offers Gateway and Interface VPC Endpoints. Based on the endpoint name you provided in the question, I'm going to guess this is a Gateway VPC Endpoint. Once you set up the endpoint in your VPC, the Security Group(s) associated with your Lambda must allow outbound traffic to the endpoint.
You have two options.
First and simplest (but perhaps less secure, depending on your use case), you allow outbound traffic to 0.0.0.0/0. This effectively allows you to call anything. However, if the Lambda is in a public subnet it still won't be able to reach the internet, as explained above, but only the Gateway VPC Endpoint ranges.
The second option is to allow outbound traffic only to the known Prefix List of the regional Amazon S3 endpoint. You can retrieve the Prefix List ID (it looks like pl-xxxxxx) by invoking DescribePrefixLists and looking for the Prefix List for the service com.amazonaws.us-east-1.s3. Once you have the Prefix List ID, you can add it as the destination of an outbound rule in your Security Group(s).
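As a sketch of that second option with the AWS SDK for JavaScript v3 (the security group ID is a placeholder; the calls used are the DescribePrefixLists and AuthorizeSecurityGroupEgress APIs mentioned above):

    // Sketch: look up the S3 prefix list for the region and allow HTTPS egress to it.
    // Assumes @aws-sdk/client-ec2 is installed; the security group ID is a placeholder.
    const {
      EC2Client,
      DescribePrefixListsCommand,
      AuthorizeSecurityGroupEgressCommand,
    } = require('@aws-sdk/client-ec2');

    const ec2 = new EC2Client({ region: 'us-east-1' });

    async function allowS3Egress(securityGroupId) {
      const { PrefixLists } = await ec2.send(new DescribePrefixListsCommand({
        Filters: [{ Name: 'prefix-list-name', Values: ['com.amazonaws.us-east-1.s3'] }],
      }));
      const prefixListId = PrefixLists[0].PrefixListId; // looks like pl-xxxxxx

      await ec2.send(new AuthorizeSecurityGroupEgressCommand({
        GroupId: securityGroupId,
        IpPermissions: [{
          IpProtocol: 'tcp',
          FromPort: 443,
          ToPort: 443,
          PrefixListIds: [{ PrefixListId: prefixListId }],
        }],
      }));
    }

    allowS3Egress('sg-00000000000000000').catch(console.error);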

How to access ElastiCache publicly without the Lambda being in the same VPC

So I'm stuck on a problem: I'm getting a connection timeout error when connecting to an ElastiCache endpoint from AWS Lambda with Node.js.
My AWS Lambda function is not attached to any VPC, but the ElastiCache cluster of course has a VPC, and I already made it public by setting up the inbound and outbound traffic rules.
I also tried it from my local server over OVPN and got a message that the ElastiCache endpoint could not be found.
How do I connect to Redis on ElastiCache from Node.js?
I would really appreciate it if anyone could give me a hand solving this problem.
Thanks
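For reference, a minimal Node.js connection attempt of the kind that times out here might look like this (the endpoint is a placeholder, and the sketch assumes the node-redis v4 client):

    // Sketch: plain node-redis (v4) connection to a hypothetical ElastiCache endpoint.
    const { createClient } = require('redis');

    const client = createClient({
      url: 'redis://my-cache.abc123.0001.use1.cache.amazonaws.com:6379', // placeholder
      socket: { connectTimeout: 5000 }, // fail fast instead of hanging until the Lambda times out
    });

    client.on('error', (err) => console.error('Redis connection error:', err));

    async function main() {
      await client.connect(); // this is the step that times out without a network path to the cache
      await client.set('hello', 'world');
      console.log(await client.get('hello'));
      await client.quit();
    }

    main().catch(console.error);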

My API on API Gateway returns error 502 every time I make a request to an external API

I have a Node.js REST API that is deployed on AWS through Serverless, which automatically creates a Lambda function and an API on API Gateway for me.
Every time I try to make an HTTPS request to any external API, I get an error from API Gateway (502 - Internal Server Error), even though everything works fine when I'm testing on my local PC. The error only happens when I call the route that makes the external request, so I'm sure that's the problem.
I've already activated API Gateway logs with CloudWatch (following this post), but the only important log I get is Endpoint response body before transformations: {"errorMessage":"2020-10-21T18:34:14.038Z 4cf0e078-fec9-4b9c-a199-26216a3951aa Task timed out after 6.01 seconds"} (complete logs in that image). The Lambda logs are less detailed, but here they are.
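Worth noting: 6 seconds is the Serverless Framework's default function timeout, so the log above just means the external call never completed before Lambda killed the invocation. A sketch of giving the outbound request its own, shorter timeout so the failure is caught and logged instead (the URL is a placeholder, using Node's built-in https module):

    // Sketch: cap the outbound request below the Lambda timeout so a blocked
    // network path shows up as a logged error rather than "Task timed out".
    const https = require('https');

    function fetchExternal(url, timeoutMs = 3000) {
      return new Promise((resolve, reject) => {
        const req = https.get(url, (res) => {
          let body = '';
          res.on('data', (chunk) => { body += chunk; });
          res.on('end', () => resolve(body));
        });
        req.setTimeout(timeoutMs, () => req.destroy(new Error(`Timed out after ${timeoutMs} ms`)));
        req.on('error', reject);
      });
    }

    module.exports.handler = async () => {
      try {
        const data = await fetchExternal('https://api.example.com/resource'); // placeholder URL
        return { statusCode: 200, body: data };
      } catch (err) {
        console.error('External request failed:', err);
        return { statusCode: 502, body: JSON.stringify({ error: err.message }) };
      }
    };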
I have also set up a VPC and a Security Group for my Lambda function. My Security Group already allows all traffic in both its inbound and outbound rules. My VPC may be the problem, since I don't understand subnets or the configuration I have there very well. These are my Lambda VPC configurations.
Can someone tell me what the problem is? I can add any other information that you may want/need.
--------- Edit 1:
I tried to follow the steps in this post, but it didn't work. Let me explain everything I did:
First of all, I created a NAT Gateway in my VPC and a new route table with the 0.0.0.0/0 destination routed to this NAT Gateway. Then I created a public subnet, assigned the new route table to it and turned on the "Enable auto-assign public IPv4 address" option. Finally, I assigned this new public subnet to my Lambda function, but the error was still there. I also tried to remove the public subnet from the Lambda function, because someone on that post said it would work, but it still didn't work.
The only thing I couldn't do was set my new public subnet as a default subnet. I don't know whether that was essential and whether it failed only because of that.
Am I forgetting something?
I just solved it.
I kept searching the internet for possible solutions and I found this link, which has a video in the right corner (right there) with the perfect tutorial.
The problem was that I only had subnets connected to an Internet Gateway and no subnets connected to a NAT Gateway, like @MarkB said. I first tried to solve it by changing my only 3 private subnets, which were assigned to both Lambda and RDS, to connect only through the NAT Gateway, and ended up removing the Internet Gateway path from my RDS instances.
So I decided to create 3 new private subnets ONLY for my Lambda functions, connected to the NAT Gateway, and 1 public subnet connected to the Internet Gateway. The subnets I already had remained intact in the end, and it all fits together just fine.
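A quick way to double-check which kind of subnet a function is attached to is to look at the subnet's route table: if the 0.0.0.0/0 route targets a nat-... ID the subnet is private (what Lambda needs), and if it targets an igw-... ID it is public. A sketch with the AWS SDK for JavaScript v3 (the subnet ID is a placeholder):

    // Sketch: inspect a subnet's default route to see whether it goes through a
    // NAT Gateway (private subnet) or an Internet Gateway (public subnet).
    // Assumes @aws-sdk/client-ec2; the subnet ID is a placeholder. Subnets that use
    // the VPC's main route table (no explicit association) won't match this filter.
    const { EC2Client, DescribeRouteTablesCommand } = require('@aws-sdk/client-ec2');

    const ec2 = new EC2Client({ region: 'us-east-1' });

    async function describeDefaultRoute(subnetId) {
      const { RouteTables } = await ec2.send(new DescribeRouteTablesCommand({
        Filters: [{ Name: 'association.subnet-id', Values: [subnetId] }],
      }));
      for (const table of RouteTables) {
        for (const route of table.Routes ?? []) {
          if (route.DestinationCidrBlock === '0.0.0.0/0') {
            console.log(`${subnetId} default route ->`, route.NatGatewayId ?? route.GatewayId ?? 'unknown');
          }
        }
      }
    }

    describeDefaultRoute('subnet-00000000000000000').catch(console.error);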

How to configure the default VPC in AWS?

I want to send mail using Nodemailer in Node.js when my Lambda function is deployed in the default VPC, because I also have to access RDS from the Lambda function.
I am unable to send the success mail for data successfully inserted into RDS if my Lambda function is deployed in the default VPC. What changes do I need to make so I can send it?
If I choose no VPC, then I am unable to write data to the database.
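For reference, the sending code itself does not change whether or not the function is attached to a VPC; a minimal Nodemailer sketch (the SMTP host, addresses, and credentials are placeholders) looks like this, and inside a VPC it only works if the outbound SMTP connection has a route out:

    // Minimal Nodemailer sketch; SMTP host, addresses, and credentials are placeholders.
    // Inside a VPC without a route to the internet, sendMail() hangs on the
    // outbound SMTP connection and eventually times out.
    const nodemailer = require('nodemailer');

    const transporter = nodemailer.createTransport({
      host: 'smtp.example.com',
      port: 587,
      secure: false, // STARTTLS on port 587
      auth: { user: 'mailer@example.com', pass: process.env.SMTP_PASSWORD },
    });

    module.exports.handler = async () => {
      // ... insert the record into RDS here ...
      await transporter.sendMail({
        from: 'mailer@example.com',
        to: 'team@example.com',
        subject: 'Data inserted successfully',
        text: 'The record was written to RDS.',
      });
      return { statusCode: 200, body: 'Mail sent' };
    };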
From https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html,
When you connect a function to a VPC in your account, it does not have access to the internet unless your VPC provides access.
I take this to mean that if you wish to access both RDS and the internet from Lambda within your VPC, you need a NAT Gateway (or to spin up your own NAT instance). In other words, Lambda does not support internet access with a public IP through an Internet Gateway, which is the usual mechanism of internet access within your VPC.
If you don't mind the cost, about 4.5 cents an hour plus data transfer last I checked, the simplest solution is probably:
Add another subnet to your VPC.
Add a NAT Gateway to your VPC.
Add a route table to the new subnet that routes 0.0.0.0/0 through the NAT Gateway.
Put your Lambda in that subnet.
This essentially creates a connection to the internet in that VPC without your Lambda holding a public IP address.

How to allow an Azure Function to access a database hosted on an Azure VM?

We have a time-triggered Azure Function deployed through the portal to perform an iterative task at a specific time. Our Azure Function uses the database deployed on an Azure VM via a connection string provided in App Settings. The function throws the following error when it runs:
MySql.Data: Authentication to host 'xxx' for user 'xxx using method 'mysql_native_password' failed with message:
Client with IP address 'x.x.x.x' is not allowed to connect to this MySQL server. MySql.Data: Client with IP address 'x.x.x.x' is not allowed to connect to this MySQL server
When we whitelist the IP mentioned in the error message, the function runs successfully. But since the Azure Function has no fixed workstation or PC with a stable IP handling the execution, whenever the function runs from a new IP it throws the error again. Therefore, we need a mechanism to whitelist all IPs that our function app may run from, OR some better mechanism to authenticate the Azure Function and allow it to access our database hosted on the Azure VM.
What we tried:
We whitelisted the virtual IP address of the function app, but it doesn't work every time.
We tried to whitelist the IP ranges published for the Microsoft datacenter region in which our function app is deployed, but this method also didn't work.
Azure application can't access database on Azure VM?
So, is there any way the Azure Function can securely access our database deployed on the virtual machine?
I have opened an issue on GitHub but have had no reply there yet.
Finally, after thorough research, I found the solution.
You need to whitelist all outbound IPs of the Function App on the virtual machine where the DB is deployed. The outbound IP addresses can be found at resources.azure.com. After searching for your resource (in my case the Function App's name), you get a long JSON output from which you have to pick the possibleOutboundIpAddresses parameter, as shown in the image. Whitelist all of those IPs and your Azure Function App will be able to access the database deployed on the virtual machine.
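If you want to pull that list programmatically instead of reading it off resources.azure.com, a sketch with the Azure SDK for JavaScript could look like this (the subscription ID, resource group, and app name are placeholders; assumes the @azure/identity and @azure/arm-appservice packages):

    // Sketch: read possibleOutboundIpAddresses for a Function App.
    // Subscription ID, resource group, and app name are placeholders.
    const { DefaultAzureCredential } = require('@azure/identity');
    const { WebSiteManagementClient } = require('@azure/arm-appservice');

    async function listOutboundIps() {
      const credential = new DefaultAzureCredential();
      const client = new WebSiteManagementClient(credential, '<subscription-id>');

      const site = await client.webApps.get('<resource-group>', '<function-app-name>');
      // Comma-separated list of every outbound IP the app may ever use.
      console.log(site.possibleOutboundIpAddresses.split(','));
    }

    listOutboundIps().catch(console.error);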
Also, I looked into whether these IPs tend to change on a regular basis. I didn't find any official word on it, but from various internet sources I learned that if the IPs are ever planned to change, everyone will get plenty of notice beforehand to prevent any problems.
You need to set up a Virtual Network (VNet) where both your App Service Plan hosting the Azure Function and the VM participate.
Then, from the Azure Function, view All Properties > Networking and you should see the virtual network to connect to.
This method doesn't require you to whitelist IP addresses for your VM, and it secures your VM by allowing only internal network traffic.
Note that your Azure Function must be set up on an App Service Plan, rather than on a Consumption plan.
