How do we invoke a function app running locally (in VS) from a Logic App (in Azure)?
And also, what's the best practice for debugging during development (like setting breakpoints) for cloud solutions?
There are a couple of ways to go about it.
For pull-type triggers like Service Bus queues/subscriptions, nothing is required apart from ensuring connectivity to Service Bus from your local environment.
For HTTP or Event Grid triggers, you would have to expose the locally running function directly or indirectly.
A direct exposure would involve setting up port forwarding on your router and using your public IP address in your HTTP action. This requires you to have control over your local network, which might not be the case on many corporate networks.
An indirect exposure would be to use a service like ngrok or localtunnel.
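As an illustration, here is a minimal sketch of the indirect approach, assuming the Functions host is running locally on the default port 7071 and that you use the localtunnel npm package (the subdomain below is made up); the public URL it prints is what you would put in the Logic App's HTTP action:

import localtunnel from "localtunnel";

// Minimal sketch: expose the locally running Functions host (default
// port 7071 for `func start`) through a public URL that a Logic App
// in Azure can call. The subdomain is a made-up example.
async function main() {
  const tunnel = await localtunnel({ port: 7071, subdomain: "my-local-func" });

  // Use this URL as the base address in the Logic App's HTTP action.
  console.log(`Public URL: ${tunnel.url}`);

  tunnel.on("close", () => console.log("Tunnel closed"));
}

main().catch(console.error);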
As for debugging, running locally with one of the above setups is the simplest and most efficient approach for Azure Functions during development.
I am working on building a platform which should be able to run code provided by a user using AWS Lambda. The main requirement is to make sure that the user's code is wrapped in my own code/runtime in a way that gives me full control over which outbound requests are allowed (e.g. only allow requests to google.com, github.com, etc.) and also lets me monitor them (say, for billing purposes). I came across some solutions which either use a third-party offering or configure an egress proxy in the VPC.
I was wondering whether there is a simpler (and more cost-effective) alternative, like using Lambda extensions or bundling my function as a Docker image in a way that routes all requests to a central proxy (like nginx) where I can implement my custom whitelisting and monitoring logic. One way might have been to set environment variables like http_proxy, but it seems not all libraries honour that (e.g. the http module in Node.js).
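To illustrate the last point, here is a rough sketch of what explicitly routing a request through a central forward proxy looks like with Node's core http module, which does not honour http_proxy on its own. The proxy host/port are made up, and this handles plain HTTP only (HTTPS would need CONNECT tunnelling):

import http from "http";
import { URL } from "url";

// Hypothetical central egress proxy where the whitelisting/monitoring
// logic would live.
const PROXY_HOST = "egress-proxy.internal";
const PROXY_PORT = 3128;

function requestViaProxy(targetUrl: string): void {
  const target = new URL(targetUrl);
  const req = http.request(
    {
      host: PROXY_HOST,
      port: PROXY_PORT,
      method: "GET",
      path: targetUrl, // absolute-form URI tells the proxy where to forward
      headers: { Host: target.host },
    },
    (res) => console.log(`${targetUrl} -> ${res.statusCode}`)
  );
  req.on("error", (err) => console.error(err));
  req.end();
}

requestViaProxy("http://example.com/");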
I have a C# app that runs as a "server" for a client app (Electron).
The C# app does the data crunching and serves the data over HTTP to the JS client.
The web endpoint is implemented using Microsoft Owin and Web API.
It works very well; however, I do not want the port to be bound on the network interface at all, only on the loopback.
The binding is done as described in the OWIN docs:
WebApp.Start<MyConfig>("http://localhost:10000");
I chose a high port number to avoid having to run as Admin.
This works well; however, the port is also open from outside. HTTP requests from outside are rejected with a Bad Request (which is good for me), but I don't want to bind on the external interface at all.
I can't seem to find any way to do this. Any ideas?
I have built a Node.js API server. Part of one feature is that when it starts, it should be able to determine its own IP, regardless of how the server it is running on is set up.
The classic scenario is not that hard (I think). There are several options, like using the os module to find the IP of the external interface; this is the way I have been doing it so far (a rough sketch is below). I am sure there are other ways and some might be better. Feel free to add alternatives, as informative as possible.
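The classic approach, roughly (assuming IPv4 and Node's os module):

import os from "os";

// Classic approach: return the first non-internal IPv4 address found on
// any interface. On a single-NIC host this is usually what you want; on
// a cloud instance it returns the NIC's (private) address instead.
function firstExternalIPv4(): string | undefined {
  for (const addrs of Object.values(os.networkInterfaces())) {
    for (const addr of addrs ?? []) {
      if (addr.family === "IPv4" && !addr.internal) {
        return addr.address;
      }
    }
  }
  return undefined;
}

console.log(firstExternalIPv4());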
Then there is this case that I stumbled on. The web server was running on a Google Cloud instance. This instance has two IPs, one internal and one external. What I want is the external IP. However, when I use the method above, the actual external IP is not part of the returned object; the only address reported as non-internal is in fact the internal one. Even when I run different commands from within the server's command line, the only IP returned is the one that is actually internal and cannot be used to access the Node server.
From what I understand, the instance itself is not aware of its external IP. There is probably some NAT or forwarding layer (I think) that directs requests made to the external IP to the correct instance.
While reading on the internet, I saw that problems getting the server's correct external IP can also arise when load balancing or proxies are involved.
The solution I thought about is to have the Node.js code make a request towards a service that I will build. This service will treat the Node.js servers as clients and will return their external IPs. From experiments I have done, the req object contains, among other things, the client's IP. So I should first check req.connection.remoteAddress and then the first element of req.headers['x-forwarded-for']. Ideally the server would make a request towards itself, but…
I know there are external APIs like https://api.ipify.org?format=json that do just that: return the actual IP. But I would very much like the Node.js servers to be independent of services I cannot control.
However, I really am hoping that there are better solutions out there than making a request from the server which returns the server IP.
However, I really am hoping that there are better solutions out there than making a request from the server which returns the server IP.
That is not really possible; you always have to rely on some kind of external observer / external request.
While reading on the internet, I saw that problems getting the server's correct external IP can also arise when load balancing or proxies are involved.
This is because your device is not always able to know its own external IP. It might be sitting behind another network device that owns the external address and forwards WAN traffic to it (for example, a router). So when you try to obtain the external IP from the device's own interfaces, you end up with an IP inside the scope of the router's LAN, not the one used for external requests.
So if you really want a method that works in all scenarios and does not rely on third-party services, the only solution is to build your own IP echo service (which you maintain and can reuse for future projects).
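A minimal sketch of such an echo service (port and header handling are assumptions; whether you trust X-Forwarded-For depends on whether a proxy/load balancer you control sits in front):

import http from "http";

// Minimal IP echo service: reports the caller's address as seen from
// outside. X-Forwarded-For is only meaningful behind a proxy/load
// balancer you trust; otherwise fall back to the socket address.
http
  .createServer((req, res) => {
    const forwarded = req.headers["x-forwarded-for"];
    const first = Array.isArray(forwarded) ? forwarded[0] : forwarded;
    const ip = first?.split(",")[0].trim() || req.socket.remoteAddress;

    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify({ ip }));
  })
  .listen(3000, () => console.log("IP echo service on port 3000"));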
I have a local Service Fabric cluster which has 6-7 custom HTTP endpoints exposed. I use Fiddler to redirect these to my services like so:
127.0.0.1:44300 identity.mycompany.com
127.0.0.1:44310 docs.mycompany.com
127.0.0.1:44320 comms.mycompany.com
etc..
I've never deployed a cluster in Azure before, so there are some intricacies that I'm not familiar with and I can't find any documentation on. I've tried multiple times to deploy and tinker with the load balancers/public IPs with no luck.
I know DNS CNAMEs can't specify ports, so I guess I have to have a separate public IP for each hostname I want to use and then somehow map that internally to the port. So I end up with something like this:
identity.mycompany.com => azure public ip => internal redirect / map => myservicefabrichostname.azure.whatever:44300
My questions are:
1) Is this the right way to go about it, or is there some fundamental method that I'm missing?
2) Do I have to specify all these endpoints (44300, 44310, 44320, ...) when creating the cluster (it appears to set up a load of load balancer rules/probes), or is that unnecessary if I have multiple public IPs? I'm unsure whether this is for internal or external access.
thanks
EDIT:
Looks like the Azure portal is broken :( I've been on the phone with Microsoft support and it looks like it's not displaying the backend pools in the load balancer correctly, so you can't set up any new NAT rules.
I might be able to write a PowerShell script to get around this though.
EDIT 2:
Looks like Microsoft has fixed the bug in the portal, happy times.
Instead of using multiple IP addresses you can use a reverse proxy, like HAProxy, IIS (with URL rewriting), the built-in Service Fabric reverse proxy, or something you build or reuse yourself. The upside is that it allows for flexibility in adding and removing underlying services.
All traffic comes in on one endpoint and is then routed in the right direction (your services running on various ports inside the cluster). Do make sure your reverse proxy is highly available.
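To make the idea concrete, here is a toy sketch of host-header-based routing in front of the service ports. The hostnames and ports mirror the question; it is not production-ready (no HTTPS, no health checks, and "localhost" stands in for the cluster-internal address of each service):

import http from "http";

// Toy host-header router: one public endpoint, traffic forwarded to the
// service ports inside the cluster.
const routes: Record<string, number> = {
  "identity.mycompany.com": 44300,
  "docs.mycompany.com": 44310,
  "comms.mycompany.com": 44320,
};

http
  .createServer((clientReq, clientRes) => {
    const port = routes[(clientReq.headers.host ?? "").split(":")[0]];
    if (!port) {
      clientRes.writeHead(404);
      clientRes.end("unknown host");
      return;
    }
    const upstream = http.request(
      {
        host: "localhost", // placeholder for the service's internal address
        port,
        path: clientReq.url,
        method: clientReq.method,
        headers: clientReq.headers,
      },
      (upstreamRes) => {
        clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
        upstreamRes.pipe(clientRes);
      }
    );
    upstream.on("error", () => {
      clientRes.writeHead(502);
      clientRes.end("upstream error");
    });
    clientReq.pipe(upstream);
  })
  .listen(8080);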
Hope someone can help enlighten me on this issue. I am currently working on a Lambda function that uses the CloudWatch scheduler to check various devices and uses ElastiCache to maintain a simple database of the readings.
My problem is that after I shut down my testing at night and fire up the Lambda function in the morning, the function has lost access to the internet, which shows up as the function timing out. Regularly, after a few hours of messing around with my routes and my VPC settings, it starts working again, just to break the following day. Sometimes it works with a NAT gateway, other times with just a NAT instance. The changes I typically make to the VPC setup are minor. The pattern I use is one public subnet, one private subnet, and one NAT gateway.
Update: After not being able to access the internet from my VPC all day yesterday, today it is functioning fine. What did I do differently? Nothing. When it stops functioning again, probably later today, I will be calling AWS to see if we can get to the bottom of this.
I've just fixed the same issue with my Lambdas - the issue was that I had set the Lambda to run in all of my subnets (I have 2 private and 1 public). This knowledge base article specifies you should run them in private subnets only, which makes sense:
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
Go to your Lambda's page in the AWS console, deselect the public subnet, and save; the problem should be solved.
It sounds like it is due to the ephemeral port range that AWS Lambda uses. I recommend you check all network ACLs (NACLs) to ensure that they allow communication on the ephemeral port range used by Lambda:
AWS Lambda functions use ports 1024-65535
So this means that when your Lambda runs, it may use any port in this range to send traffic to the internet. Even though the destination is port 80 or 443, the sending port will be in this ephemeral range, so when the internet server responds it will send the response back to the originating ephemeral port. Ensure your NACLs allow this ephemeral range (inbound, outbound, or both depending on your use case) or you might be blocked depending on which ephemeral port is used. This article has a useful explanation: https://www.whizlabs.com/blog/ephemeral-ports/
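If you manage the VPC as infrastructure-as-code, a hedged sketch of such a rule using the AWS CDK (assuming aws-cdk-lib v2; stack and construct names are illustrative) might look like this:

import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";

// Illustrative sketch: allow inbound return traffic on the ephemeral
// port range Lambda may use, on the ACL guarding the private subnets.
const app = new cdk.App();
const stack = new cdk.Stack(app, "LambdaVpcStack");
const vpc = new ec2.Vpc(stack, "Vpc");

const acl = new ec2.NetworkAcl(stack, "PrivateSubnetAcl", {
  vpc,
  subnetSelection: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
});

acl.addEntry("AllowEphemeralReturnTraffic", {
  ruleNumber: 100,
  cidr: ec2.AclCidr.anyIpv4(),
  traffic: ec2.AclTraffic.tcpPortRange(1024, 65535),
  direction: ec2.TrafficDirection.INGRESS,
  ruleAction: ec2.Action.ALLOW,
});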
A Lambda function with VPC access will require a NAT gateway to access the internet. You state that it sometimes works with only an Internet Gateway, but that isn't possible according to the AWS documentation. If you are removing the NAT gateway, or the VPC's route to the NAT gateway, then that would remove internet access from any Lambda functions that have VPC access enabled.