AWS VPC Lambda Function keeps losing internet access - node.js

Hope someone can help enlighten me on this issue. I am currently working on a Lambda function that uses the CloudWatch scheduler to check various devices, and it uses ElastiCache to maintain a simple database of the readings.
My problem is that after I shut down my testing at night and fire up the Lambda function in the morning, the function has lost access to the internet, which shows up as the function timing out. Usually, after a few hours of messing around with my routes and my VPC settings, it will start working again, just to break the following day. Sometimes it works with a NAT gateway, other times with just a NAT instance. The changes I typically make to the VPC setup are minor. The pattern I use is one public subnet, one private subnet, and one NAT gateway.
Update: After not being able to access the internet from my VPC all day yesterday, today it is functioning fine. What did I do differently? Nothing. When it stops functioning again, probably later today, I will be calling AWS to see if we can get to the bottom of this.

I've just fixed the same issue with my Lambdas - the issue was that I had set the Lambda to run in all of my subnets (I have 2 private and 1 public). This knowledge-base article specifies that you should run them in private subnets only, which makes sense:
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
Go to your Lambda page in the AWS console, deselect the public subnet, save, and the problem should be solved.
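The public/private distinction above comes down to routing: a subnet whose default route goes to an Internet Gateway is public, and Lambda should only be attached to subnets that route out via NAT. A minimal sketch of that classification, assuming a simplified route shape (the interface and field names here are illustrative, not the EC2 API's):

```typescript
// Hypothetical sketch: given per-subnet default-route targets (which you might
// gather from the EC2 console or API), keep only the private subnets, i.e.
// those whose 0.0.0.0/0 route does NOT point at an Internet Gateway ("igw-...").
interface SubnetRoute {
  subnetId: string;
  defaultRouteTarget: string; // e.g. "igw-0abc..." or "nat-0def..."
}

function privateSubnetIds(routes: SubnetRoute[]): string[] {
  return routes
    .filter(r => !r.defaultRouteTarget.startsWith("igw-"))
    .map(r => r.subnetId);
}
```

Only the IDs this returns would go into the Lambda's VPC configuration.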

It sounds like it is due to the ephemeral port range that AWS Lambda uses. I recommend you check all network ACLs (NACLs) to ensure that they allow communication on the ephemeral port range used by Lambda:
AWS Lambda functions use ports 1024-65535
So this means that when your Lambda runs, it may use any port in this range to send traffic to the internet. Even though the destination is port 80 or 443, the sending port will be in this ephemeral range, so when the internet server responds, it will send the response back to the originating ephemeral port. Ensure your NACLs allow communication over this ephemeral range (inbound, outbound, or both, depending on your use case), or you might be blocked depending on which ephemeral port is used. This article has a useful explanation: https://www.whizlabs.com/blog/ephemeral-ports/
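The check described above can be sketched as a small helper, assuming a simplified rule shape (not the AWS API's): return traffic only gets through if an allow rule covers the entire 1024-65535 ephemeral range.

```typescript
// Illustrative NACL-rule shape; real NACLs also have rule numbers, protocols,
// and CIDR ranges, which are omitted here for clarity.
interface NaclRule {
  action: "allow" | "deny";
  portFrom: number;
  portTo: number;
}

const LAMBDA_EPHEMERAL = { from: 1024, to: 65535 };

// True only if some allow rule spans the whole ephemeral range Lambda may
// pick its source port from.
function coversLambdaEphemeralRange(rules: NaclRule[]): boolean {
  return rules.some(
    r =>
      r.action === "allow" &&
      r.portFrom <= LAMBDA_EPHEMERAL.from &&
      r.portTo >= LAMBDA_EPHEMERAL.to
  );
}
```

A rule allowing only, say, 1024-2000 would pass some responses and silently drop others, which matches the intermittent symptoms in the question.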

A Lambda function with VPC access will require a NAT gateway to access the internet. You state that it sometimes works with only an Internet Gateway, but that isn't possible according to the AWS documentation. If you are removing the NAT gateway, or the VPC's route to the NAT gateway, then that would remove internet access from any Lambda functions that have VPC access enabled.

Related

Change Tiny Proxy IP per request on EC2

I'm fairly new to proxy servers and how exactly they work.
I recently spun up an AWS EC2 instance to act as a proxy server using Tinyproxy. Everything seems to work just fine; however, I am curious about something. Is it possible to configure Tinyproxy to use a different public IP each time it makes a request? I looked into AWS Elastic IPs but don't quite understand how those might fit in this scenario.
Not possible. Public IPs are allocated to the instance at launch. You can allocate multiple IPs using Elastic IPs, as you mentioned, but you can't get a new IP per request as you asked. What's your use case?

Invoke Function App (running locally) from Logic App (Azure)

How do we invoke a Function App running locally (in Visual Studio) from a Logic App (in Azure)?
Also, what's the best practice for debugging during development (like setting breakpoints) for cloud solutions?
There are a couple of ways to go about it.
For pull-type triggers like Service Bus queues/subscriptions, nothing is required apart from ensuring connectivity to Service Bus from your local environment.
For HTTP or Event Grid triggers, you would have to expose them directly or indirectly somehow.
A direct exposure would involve setting up port forwarding on your router and using your public IP address in your HTTP action. This requires you to have control over your local network, which might not be the case in many corporate networks.
An indirect exposure would be to use a service like ngrok or localtunnel.
As for debugging, running locally with one of the above solutions is the simplest and most efficient approach for Azure Functions during development.

How to map unique dns names to service fabric port

I have a local Service Fabric cluster which has 6-7 custom HTTP endpoints exposed. I use Fiddler to redirect these to my service like so:
127.0.0.1:44300 identity.mycompany.com
127.0.0.1:44310 docs.mycompany.com
127.0.0.1:44320 comms.mycompany.com
etc..
I've never deployed a cluster in Azure before, so there are some intricacies that I'm not familiar with and I can't find any documentation on. I've tried multiple times to deploy and tinker with the load balancers/public IPs with no luck.
I know DNS CNAMEs can't specify ports, so I guess I have to have a separate public IP for each hostname I want to use and then somehow map that internally to the port. So I end up with something like this:
identity.mycompany.com => azure public ip => internal redirect / map => myservicefabrichostname.azure.whatever:44300
my questions are:
1) Is this the right way to go about it, or is there some fundamental method I'm missing?
2) Do I have to specify all these endpoints (44300, 44310, 44320...) when creating the cluster (it appears to set up a load of load balancer rules/probes), or will this be unnecessary if I have multiple public IPs? I'm unsure whether this is for internal or external access.
thanks
EDIT:
Looks like the Azure portal is broken :( I've been on the phone with Microsoft support and it looks like the portal is not displaying the backend pools in the load balancer correctly, so you can't set up any new NAT rules.
I might be able to write a PowerShell script to get around this though.
EDIT 2:
Looks like Microsoft has fixed the bug in the portal, happy times.
Instead of using multiple IP addresses you can use a reverse proxy, like HAProxy, IIS (with URL rewriting), the built-in Service Fabric reverse proxy, or something you build yourself or reuse. The upside is that it allows for flexibility in adding and removing underlying services.
All traffic would come in on one endpoint and then be routed in the right direction (your services running on various ports inside the cluster). Do make sure your reverse proxy is highly available.
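The host-based routing a reverse proxy would do here can be sketched as a lookup from the incoming Host header to the internal Service Fabric port. The hostnames and ports mirror the examples in the question; the helper itself is illustrative:

```typescript
// Map each public hostname to the internal port its service listens on
// inside the cluster (same mapping the Fiddler rules in the question use).
const hostToPort: Record<string, number> = {
  "identity.mycompany.com": 44300,
  "docs.mycompany.com": 44310,
  "comms.mycompany.com": 44320,
};

// Resolve the backend port for an incoming request. Host headers may carry a
// ":port" suffix, which is stripped before the lookup.
function routeForHost(hostHeader: string): number | undefined {
  const host = hostHeader.split(":")[0].toLowerCase();
  return hostToPort[host];
}
```

With this approach a single public IP and load balancer rule (e.g. port 443) suffices; the proxy reads the Host header and forwards to the matching internal port, and adding a new service just means adding a map entry.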

UrlFetchApp.fetch in Google Spreadsheets cannot connect to AWS backend webservice

We have an EC2 instance on AWS to which we deployed our backend services. We started by using Google Spreadsheets (scripted with Google Apps Script) to present our backend through a webservice deployed on our server. We have a specific port on which HTTPS (with a self-signed certificate) is used to serve the webservice encrypted in flight. We had set up Security Groups (basically firewall entry groups) which include the following CIDR ranges for that specific ingress port of our webservice:
64.18.0.0/20
64.233.160.0/19
66.102.0.0/20
66.249.80.0/20
72.14.192.0/18
74.125.0.0/16
173.194.0.0/16
207.126.144.0/20
209.85.128.0/17
216.58.192.0/19
216.239.32.0/19
as described in https://developers.google.com/apps-script/guides/jdbc#setup_for_google_cloud_sql
This setup was working fine until 5 days ago. Then something weird happened. When we run the script behind the spreadsheet from the 'Script Editor', the code works fine and requests to our webservice return successfully. But when the exact same code was invoked through a menu item, it did not do anything. After a long, frustrating investigation we found out that the request was not even reaching our server (there were numerous other quirky symptoms, like only the last log command being visible in the 'Execution Transcript' even though there should have been many others). Then we tried replacing the security group with a rule that accepts any source IP for that specific port, and everything was working fine again.
Here is a link to seemingly relevant issue in google-apps-script-issues page:
https://code.google.com/p/google-apps-script-issues/issues/detail?id=4679#c8
We ran tcpdump tcp port <port> -i eth0 -vv and observed that when we run the code from the 'Script Editor', the request was made from 66.102.7.156 (and from similar IPs in 66.102.0.0/20); when the code is invoked from the menu item in the spreadsheet, the request was made from 72.14.199.55 (and from similar IPs in 72.14.192.0/18). The latter seems to be the problematic IP range.
My question is: why is it that, when the request sources are correctly included in the firewall rules, one block of IPs doesn't work, and then starts to work when the IP restriction on the port is lifted (source 0.0.0.0/0)? Is it a bug in AWS Security Groups? Or are we doing something wrong? Also, if our approach is inadequate in any way, alternative solutions or suggestions would be much appreciated.
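As a sanity check on the investigation above, both observed source IPs can be verified against the whitelisted CIDR blocks with a small (illustrative) helper, so a typo in the security group rules can be ruled out:

```typescript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, o) => (acc << 8) | parseInt(o, 10), 0) >>> 0;
}

// True if `ip` falls inside the `cidr` block (e.g. "72.14.192.0/18"):
// both addresses must match on the top `bits` bits.
function ipInCidr(ip: string, cidr: string): boolean {
  const [base, bitsStr] = cidr.split("/");
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
}
```

Both 66.102.7.156 (in 66.102.0.0/20) and 72.14.199.55 (in 72.14.192.0/18) check out against the listed ranges, which points away from a rule mistake and toward the Apps Script bug.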
As per the issue you linked to, there was a bug in Apps Script that led to this behavior. That bug has now been fixed.

Azure InputEndpoints block my tcp ports

My Azure app hosts multiple ZeroMQ Sockets which bind to several tcp ports.
It worked fine when I developed it locally, but the sockets weren't accessible once the app was uploaded to Azure.
Unfortunately, after adding the ports to the Azure ServiceDefinition (to allow access once uploaded to Azure), every time I start the app locally it complains about the ports being already in use. I guess it has to do with the (debug/local) load balancer mirroring the Azure behavior.
Did I do something wrong, or is this expected behavior? If the latter, how does one handle this kind of situation? I guess I could use different ports for the sockets and specify them as private ports in the endpoints, but that feels more like a workaround.
Thanks & Regards
The endpoints you add (in your case tcp) are exposed externally with the port number you specify. You can forcibly map these endpoints to specific ports, or you can let them be assigned dynamically, which requires you to then ask the RoleEnvironment for the assigned internal-use port.
If, for example, you created an Input endpoint called "ZeroMQ," you'd discover the port to use with something like this, whether the ports were forcibly mapped or you simply let them get dynamically mapped:
var zeromqPort = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["ZeroMQ"].IPEndpoint.Port;
Try to use the ports the environment reports you should use. I think they are different from the outside ports when using the emulator. The ports can be retrieved from the RoleEnvironment.
Are you running more than one instance of the role? In the compute emulator, the internal endpoints for different role instances will end up being the same port on different IP addresses. If you try to just open the port without listening on a specific IP address, you'll probably end up with a conflict between multiple instances. (E.g., they're all trying to just open port 5555, instead of one opening 127.0.0.2:5555 and one opening 127.0.0.3:5555.)
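The per-instance binding described above looks like this in node.js terms (a sketch; the app in the question uses ZeroMQ, but the principle of binding to a specific address rather than the wildcard is the same):

```typescript
import * as net from "net";

// Bind a listener to one specific address (e.g. 127.0.0.2) instead of the
// wildcard 0.0.0.0. Two role instances can then share the same port number
// on different loopback addresses without an "address already in use" error.
function listenOn(ip: string, port: number): net.Server {
  const server = net.createServer();
  server.listen(port, ip);
  return server;
}
```

With ZeroMQ the equivalent would be binding each socket to `tcp://<instance-ip>:<port>` using the endpoint the emulator assigns, rather than `tcp://*:<port>`.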