Azure - secure communication between internal roles in Azure

In this link (Azure: security between web roles) the OP asks: "In Azure, if you choose to use an internal endpoint (instead of an input endpoint), https is not an option. http & tcp are the only options. Does it mean an internal endpoint is 100% secure and you don't need encryption?"
The answer he gets is: no, a web/worker role cannot connect to an internal endpoint in another deployment.
My question: is it possible at all to deploy such a solution?
Thanks
Joe

There are two separate things you brought up in your question.
Internal endpoints are secure in that the only other VM instances that can access them are within the same deployment. If, say, a web app needs to talk to a WCF service on a worker role instance, it can direct-connect with a tcp or http connection, with no need for encryption. It's secure.
Communication between deployments requires a Virtual Network, as internal endpoints are not accessible outside the boundary of the deployment. You can connect two deployments via a Virtual Network, and at that point the virtual machine instances in each deployment can see each other. The notion of endpoints is moot at that point, as you can simply connect to a specific port on one of the server instances.
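For the direct-connect scenario, a minimal sketch (assuming a worker role named "WorkerRole" that declares an internal tcp endpoint "WcfInternal" in ServiceDefinition.csdef, and a hypothetical IMyService contract) might look like this:

using System.Linq;
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

// resolve one instance's internal endpoint for the worker role
var ep = RoleEnvironment.Roles["WorkerRole"].Instances
    .First().InstanceEndpoints["WcfInternal"].IPEndpoint;
// direct-connect over net.tcp; no transport security needed inside the deployment
var factory = new ChannelFactory<IMyService>(
    new NetTcpBinding(SecurityMode.None),
    new EndpointAddress(string.Format("net.tcp://{0}/MyService", ep)));
IMyService proxy = factory.CreateChannel();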

Related

How to add multiple web apps to a single network so that a VNet connected to one web app is accessible to the other web apps as well

Purpose:
I am picking apart an implementation done in Azure two years ago by my former coworker. I am not sure how I can make 3 different web apps use 3 unique DNS servers.
What I have:
I have a virtual network in Azure for which I have configured a custom DNS server (I have added its IP in the DNS settings). I also have 3 web apps under the same App Service plan. I have connected the VNet to web-app-1 via the Networking settings.
I also need to link web-app-2 and web-app-3 to the VNet.
What I need to achieve:
Is it possible to make web-app-2 and web-app-3 access the VNet without adding that VNet to these web apps directly? Is there any way to link web-app-2 and web-app-3 through web-app-1, which is already connected to the VNet? Or is it only possible by connecting the VNet to all web apps via the Networking settings?
You can connect app 2 & 3 (or all apps) via a Hybrid Connection to a VM on a VNet (or an on-premises network). That way the apps have access to resources on the same network as the VM, since the VM becomes a proxy.
https://learn.microsoft.com/en-us/azure/app-service/app-service-hybrid-connections
Within App Service, Hybrid Connections can be used to access application resources in other networks. It provides access from your app to an application endpoint. It does not enable an alternate capability to access your application. As used in App Service, each Hybrid Connection correlates to a single TCP host and port combination. This means that the Hybrid Connection endpoint can be on any operating system and any application, provided you are accessing a TCP listening port.
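From the app's point of view, nothing special is needed once the Hybrid Connection is configured; the code simply targets the registered host and port. A minimal sketch (the host name onprem-api and port 8080 are hypothetical):

using System.Net.Http;

// "onprem-api:8080" is registered as a Hybrid Connection endpoint;
// App Service relays the TCP traffic to the remote network.
// (inside an async method)
var client = new HttpClient();
string body = await client.GetStringAsync("http://onprem-api:8080/status");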

Disable Microservice initial exposed port after configuring it in a gateway

Hello, I've been searching everywhere and have not found a solution to my problem, which is: how can I access my API through the gateway-configured endpoint only? Currently I can access my API using localhost:9000, as well as localhost:8000, which is the Kong gateway port that I secured and configured. But what's the point of using this gateway if the initial port is still accessible?
Thus I am wondering: is there a way to disable the 9000 port and only access my API through Kong?
Firewalls / security groups (in the cloud), private (virtual) networks and multiple network adapters are usually used to differentiate public vs. private network access. Cloud vendors (AWS, Azure, etc.) and hosting infrastructures such as Kubernetes or Cloud Foundry usually have such mechanisms built in.
In a production environment Kong's external endpoint would run with public network access and all the service endpoints in a private network.
You are currently running everything locally on a single machine/network, so your best option is probably to use a firewall to restrict access by port.
Additionally, it is possible to configure separate roles for multiple Kong nodes - one (or more) can be "control plane" nodes that only you can access, and that are used to set and review Kong's configuration, access metrics, etc.
One (or more) other Kong nodes can be "data plane" nodes that accept and route API proxy traffic - but that do not accept any Kong Admin API commands. See https://konghq.com/blog/separating-data-control-planes/ for more details.
Thanks for the answers, they give different perspectives. But since I have a Scala/Play microservice, I enabled one of Play's built-in HTTP filters in my application.conf, allowing only the Kong gateway; now when I try to access my application via localhost:9000 I get denied, and that's absolutely what I was looking for.
I hope this answer will be helpful to others in the same situation.
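The OP doesn't say which filter was used, but a minimal application.conf sketch with Play's built-in AllowedHostsFilter (assuming Kong proxies the service and clients reach it as localhost:8000) might look like this:

# reject any request whose Host header is not the Kong gateway;
# direct hits on localhost:9000 then fail with 400 Bad Request
play.filters.enabled += "play.filters.hosts.AllowedHostsFilter"
play.filters.hosts {
  allowed = ["localhost:8000"]
}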

How do you set up Azure load balancing for micro-services?

We've got an API micro-services infrastructure hosted on Azure VMs. Each VM hosts several APIs, which are separate sites running on Kestrel. All external traffic comes in through a reverse proxy (running on IIS).
We have some APIs that are designed to accept external requests and some that are internal-only APIs.
The internal APIs are hosted on scale sets, with each scale set VM being a replica that hosts all of the internal APIs. There is an internal load balancer (ILB)/VIP in front of the scale set. The root issue is that we have internal APIs that call other internal APIs hosted on the same scale set. Ideally these calls would go to the VIP (using internal DNS) and the VIP would route to one of the machines in the scale set. But it looks like Azure doesn't allow this... per the documentation:
You cannot access the ILB VIP from the same Virtual Machines that are being load-balanced
So how do people set this up with micro-services? I can see three ways, none of which are ideal:

1. Separate the APIs onto different scale sets. Not ideal, as the services are very lightweight and I don't want to triple my Azure VM expenses.
2. Convert the internal LB to an external LB (add a public IP address), then put that LB in its own network security group/subnet to only allow calls from our Azure IP range. I would expect more latency here, and exposing the endpoints externally in any way creates more attack surface area as well as more configuration complexity.
3. Set up the VM to loop back if it needs a call to the ILB, meaning any requests originating from a VM will be handled by that same VM. This defeats the purpose of micro-services behind a VIP: an internal micro-service may be down on one machine for some reason and available on another; that's the reason we set up health probes on the ILB for each service separately. If calls just go back to the same machine, you lose resiliency.
Any pointers on how others have approached this would be appreciated.
Thanks!
I think your problem is related to service discovery.
Load balancers are obviously not designed for that. You should consider dedicated software such as Eureka (which can work outside of AWS).
Service discovery makes your microservices call each other directly after being discovered.
Also take a look at client-side load-balancing tools such as Ribbon.
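A minimal sketch of the client-side pattern these tools implement - discover a list of instances, then pick one and fail over on error (not Ribbon's actual API; the instance list here is hypothetical and would in practice come from a registry such as Eureka or Consul):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class ClientSideBalancer
{
    static readonly HttpClient Http = new HttpClient();
    static readonly Random Rng = new Random();

    // try instances in random order until one answers
    public static async Task<string> CallAsync(IReadOnlyList<string> instances, string path)
    {
        foreach (var baseUrl in Shuffled(instances))
        {
            try { return await Http.GetStringAsync(baseUrl + path); }
            catch (HttpRequestException) { /* unhealthy instance, try the next */ }
        }
        throw new InvalidOperationException("No healthy instance responded.");
    }

    static IEnumerable<string> Shuffled(IReadOnlyList<string> items)
    {
        var copy = new List<string>(items);
        for (int i = copy.Count - 1; i > 0; i--)
        {
            int j = Rng.Next(i + 1);
            (copy[i], copy[j]) = (copy[j], copy[i]);
        }
        return copy;
    }
}

// usage: await ClientSideBalancer.CallAsync(
//     new[] { "http://10.0.0.4:5000", "http://10.0.0.5:5000" }, "/health");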
@Cdelmas's answer on service discovery is awesome. Please allow me to add my thoughts:
For services such as yours, you can also look into Netflix's ZUUL proxy for server- and client-side load balancing. You could even use Hystrix on top of Eureka for latency and fault tolerance. Netflix is way ahead of the game on this.
You may also look into the Consul.io product for your cause if you want to use the Go language. It has a scriptable configuration for better managing your services, allows advanced security configurations and the use of non-REST endpoints. Eureka also does these, but requires you to add a configuration server (Netflix Archaius, Apache ZooKeeper, Spring Cloud Config), coded security, and access support using ZUUL/Sidecar.

Azure Web Role Internal Endpoint - Not Load Balanced

The Azure documentation says that internal endpoints on a web role will not be load balanced. What is the practical ramification of this?
Example:
I have a web role with 20 instances. If I define an internal endpoint for that web role, what is the internal implementation? For example, will all 20 instances still service this endpoint? Can I obtain a specific endpoint for each instance?
We have a unique callback requirement that could be nicely served by utilizing the normal load balancing behavior on the public endpoint, but having each instance expose an internal endpoint. Based on the published numbers for endpoint limits, this is not possible. So, when defining an internal endpoint, is it "1 per instance", or what? Do all of the role instances service the endpoint? What does Microsoft mean when they say that the internal endpoint is not load balanced? Does all the traffic just flow to one instance? That does not make sense.
First let's clarify the numbers and limitations. The limitation on endpoints is per role, not per instance. If you are not sure, or still confuse the terms roles and instances, you can check out my blog post on that. So, the limit is per role.
Now the differences between the endpoints - I have a blog post describing them here. In a quick round: an Internal Endpoint will only open communication internally within the deployment. That's why it is internal. No external traffic (from the Internet) will be able to reach an Internal Endpoint. In that sense it is not load balanced, because no traffic goes through a load balancer! The traffic of Internal Endpoints only travels between role instances (possibly via some internal routing hardware) but never leaves the deployment boundary. Having said that, it must already be clear that no Internet traffic can be sent to an Internal Endpoint.
A side note - an InputEndpoint, however, is discoverable from the Internet and from inside the deployment. But it is load balanced, since the traffic to an InputEndpoint comes through the load balancer from the Internet.
Back to the numbers. Let's say you have 1 web role with 1 Input Endpoint and 1 Internal Endpoint. That makes a total of 2 endpoints for your deployment. Even if you spin up 50 instances, you still have just 2 endpoints that count toward the total endpoint limit.
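In ServiceDefinition.csdef such a role might be declared like this (a sketch; the names are hypothetical):

<WebRole name="MyWebRole" vmsize="Small">
  <Endpoints>
    <!-- load balanced, reachable from the Internet -->
    <InputEndpoint name="HttpIn" protocol="http" port="80" />
    <!-- not load balanced, reachable only within the deployment -->
    <InternalEndpoint name="InternalTcp" protocol="tcp" />
  </Endpoints>
</WebRole>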
Can you obtain a specific endpoint for a specific instance? Certainly yes, via the RoleEnvironment class. It has a Roles enumeration; each Role has Instances, and each Instance has InstanceEndpoints.
Hope this helps!
The endpoints are defined at the role level and instantiated for each instance.
An input endpoint has a public IP address making it accessible from the internet. Traffic to that input endpoint is load-balanced (with a round-robin algorithm) among all the instances of the role hosting the endpoint.
An internal endpoint has no public IP address and is only accessible from inside the cloud service OR a virtual network including that cloud service. Windows Azure does not load balance traffic to internal endpoints - each role instance endpoint must be individually addressed. Ryan Dunn has a nice post showing a simple example of implementing load balanced interaction with an internal endpoint hosting a WCF service.
The Spring Wave release introduced a preview of an instance input endpoint, which is a public IP endpoint that is port-forwarded to a specific role instance. This, obviously, is not load balanced, but provides a way to directly connect to a specific instance.
Just trying to make things more concise and concrete:
using System;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

// get a list of all instances of role "MyRole"
var instances = RoleEnvironment.Roles["MyRole"].Instances;
// pick an instance at random (Instances exposes a Count property)
var instance = instances[new Random().Next(instances.Count)];
// for that instance, get the IP address and port of the endpoint "MyEndpoint"
IPEndPoint endpoint = instance.InstanceEndpoints["MyEndpoint"].IPEndpoint;
Think of internal endpoints as a discovery mechanism for finding your other VMs.

How can I convey this to CorpIT?

My Azure web role can, using remote desktop, connect with a browser (IE) to google.com and to a DMZ server on our corporate network.
My web role cannot connect via HTTP GET (IE) to a non-DMZ box behind the firewall. My web role cannot ping this box either. My service is hosted in North Central; allegedly all published IP ranges of North Central have been granted access to the target IP by our CorpIT people. They claim they see no traffic on their sniffer from my compute instance's IP when I attempt to ping or HTTP GET against the target local IP.
CorpIT wants help from the Microsoft side, but we have no Microsoft relationship. I'm convinced this is the outcome of months of slapdash, thirdhand firewall rules applied to the target environment in question. What can I do to further elucidate this for CorpIT?
thx in advance!
You can try to run a trace route or capture a network trace from the Azure instance and see what you get back from where. You could also create a support case with Microsoft:
https://support.microsoft.com/oas/default.aspx?&c1=501&gprid=14928&&st=1&wfxredirect=1&sd=gn
I wouldn't bet on using the IP ranges to make your applications work correctly. Windows Azure already provides some services that allow you to solve these types of issues:
Windows Azure Connect: allows you to create an IPsec-secured connection between your servers and your hosted services. This means you won't need to add rules to the firewall for incoming traffic.
Windows Azure Service Bus Relay: allows you to expose WCF services to the cloud without having to add firewall rules for incoming traffic. Choosing this option might add some extra work for you: you might need to create a WCF service if you don't already have one, and change the code in your Web Role to connect to this WCF service.
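As a rough illustration of the second option, here is a minimal sketch of hosting a WCF service on the Service Bus Relay (the "contoso" namespace, the key, and the IEchoService/EchoService types are hypothetical; the corporate side only needs outbound connectivity):

using System.ServiceModel;
using Microsoft.ServiceBus;

// host an existing WCF service on the Service Bus Relay; no inbound
// firewall rules are needed because the listener dials out to Azure
var host = new ServiceHost(typeof(EchoService));
var endpoint = host.AddServiceEndpoint(
    typeof(IEchoService),
    new NetTcpRelayBinding(),
    ServiceBusEnvironment.CreateServiceUri("sb", "contoso", "echo"));
endpoint.Behaviors.Add(new TransportClientEndpointBehavior
{
    TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
        "RootManageSharedAccessKey", "<key>")
});
host.Open(); // the web role can now call net.tcp://contoso.servicebus.windows.net/echo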
