Azure Service Fabric Windows Containers - Inter-Container Communication

I have created an Azure Service Fabric cluster using the "Windows 2016 DataCenter with Containers" OS and enabled the Reverse Proxy listening on port 80 on the cluster. We intend to deploy our legacy ASP.NET MVC & WCF applications on this cluster.
In our existing deployment model we have services co-located on the same host communicating with each other because of chatty communication and low-latency requirements. Is it possible for apps hosted in Windows containers to communicate with other Windows container apps on the same node? Basically I would like to have node affinity between two Windows container apps on the same Service Fabric node.
I tried it with a sample application, and it looks like the only option is to use the container's private IP, which is dynamically assigned. Is it possible to pass the --ip-address parameter while instantiating containers on a Service Fabric cluster?
[Update: 04/05/2017]
Service Fabric Cluster Version: 5.5.219.0
Total NodeTypes: 1
Total Nodes: 3
Both containers are deployed on all three nodes. An Azure internet load balancer is used to expose the Service Fabric Reverse Proxy. All services are accessed internally as well as externally via the reverse proxy.
Gaurav

There is what you can do now in the v5.5 release, and what is coming in the next release.
For v5.5 Release
This GitHub repo https://github.com/Azure-Samples/service-fabric-dotnet-containers shows how to communicate between Windows containers by mapping a host port to the container's private port using the PortBinding policy in the application manifest.
You can then communicate either by calling the Naming Service over REST (which is what this sample does) or, as you say and which is easier, through the ReverseProxy, which exposes a URL format. If both containers are deployed on the same machine, the communication goes via the local ReverseProxy and you get local calls between the containers. See https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reverseproxy. As Mikkel points out, you have to tell Service Fabric that you want the containers on the same VM, through placement constraints.
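For illustration, here is a minimal sketch of what those two pieces can look like in ApplicationManifest.xml; all names (MyContainerPkg, MyContainerEndpoint, MyContainerType, NodeType0) are placeholders rather than taken from the sample, and the EndpointRef must match an Endpoint declared in the service manifest:
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="MyContainerPkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <!-- Map the host port declared on MyContainerEndpoint to port 80 inside the container -->
      <PortBinding ContainerPort="80" EndpointRef="MyContainerEndpoint" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>
<DefaultServices>
  <Service Name="MyContainerService">
    <StatelessService ServiceTypeName="MyContainerType" InstanceCount="-1">
      <SingletonPartition />
      <!-- Constrain instances to one node type so the chatty services land on the same VMs -->
      <PlacementConstraints>(NodeType == NodeType0)</PlacementConstraints>
    </StatelessService>
  </Service>
</DefaultServices>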
Note: for the GitHub sample above to work on your local machine, you need to update the local dev manifest ClusterManifestTemplate.xml in the C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\NonSecure\FiveNode folder and change the IPAddressOrFQDN attribute from "localhost" to the actual IP of the machine it is running on. This is due to a Windows Server networking bug, documented in the GitHub repo above, i.e.
<Node NodeName="_Node_0" IPAddressOrFQDN="ComputerFullName" IsSeedNode="true" NodeTypeRef="NodeType0" FaultDomain="fd:/0" UpgradeDomain="0" />
becomes
<Node NodeName="_Node_0" IPAddressOrFQDN="192.1.3.50" IsSeedNode="true" NodeTypeRef="NodeType0" FaultDomain="fd:/0" UpgradeDomain="0" />
Coming in the v5.6 Release
In the next release we have added a DNS server layered on top of the Naming Service, so that you can use DNS names in place of the fabric:/ names used by the reverse proxy. This means that within the Service Fabric cluster you can call between the containers with http://[domainname]/path calls instead. The same rules apply for calls between local containers on the same machine as for the reverse proxy.
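As a hedged sketch of how that is expected to look once you are on the new release (the service, type, and DNS names below are placeholders): you assign the DNS name to the service in the application manifest, and any container in the cluster can then resolve it:
<Service Name="BackendService" ServiceDnsName="backend.myapp">
  <StatelessService ServiceTypeName="BackendType" InstanceCount="-1">
    <SingletonPartition />
  </StatelessService>
</Service>
The other container could then call, for example, http://backend.myapp:8081/api/values, where 8081 stands in for whatever host port the backend's PortBinding exposes.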

Using Service affinity would be an option: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-resource-manager-advanced-placement-rules-affinity
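If you try that route, a minimal sketch of declaring affinity in the DefaultServices section of the application manifest could look like this (service and type names are hypothetical); the correlated service is kept together with the service it names:
<Service Name="FrontendService">
  <StatelessService ServiceTypeName="FrontendType" InstanceCount="1">
    <SingletonPartition />
    <ServiceCorrelations>
      <!-- Ask the Cluster Resource Manager to place this service next to the backend -->
      <ServiceCorrelation ServiceName="fabric:/MyApp/BackendService" Scheme="Affinity" />
    </ServiceCorrelations>
  </StatelessService>
</Service>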

Related

How can I diagnose a connection failure to my Load-balanced Service Fabric Cluster in Azure?

I'm taking my first foray into Azure Service Fabric using a cluster hosted in Azure. I've successfully deployed my cluster via ARM template, which includes the cluster manager resource, VMs for hosting Service Fabric, a load balancer, an IP address and several storage accounts. I've successfully configured the certificate for the management interface, and I've successfully written and deployed an application to my cluster. However, when I try to connect to my API via Postman (or even via a browser, e.g. Chrome), the connection invariably times out and does not get a response. I've double-checked all of my settings for the load balancer, and traffic should be getting through, since my load-balancing rules use the same port on the front end and back end, which is the port my API uses in Service Fabric. Can anyone provide me with some tips for how to troubleshoot this situation and find out where exactly the connection problem lies?
To clarify, I've examined the documentation here, here and here
Have you tried logging in to one of your service fabric nodes via remote desktop and calling your API directly from the VM? I have found that if I can confirm it's working directly on a node, the issue likely lies within the LB or potentially an NSG.

Load balancer for Azure Service Fabric Cluster on-premises

As developers we wrote microservices on Azure Service Fabric, and we can run them in Azure in some sort of PaaS concept for many customers. But some of our customers do not want to run in the cloud, as their databases are on-premises and are not going to be available from the outside, not even through a DMZ. That's OK; we promised to support it, as Azure Service Fabric can be installed as a cluster on-premises.
We have an API gateway microservice running inside the cluster on every virtual machine, which uses the name resolver, and requests are routed and distributed accordingly. But the API that the API gateway microservice provides is the entry point for another piece of client software our customers use; that software runs outside of the cluster and has to send requests to the API.
I suggested using a load balancer like HAProxy or Nginx on a separate machine (or machines) that the client software sends its requests to; the reverse proxy would then forward them to an available machine inside the cluster.
It seems that is not what our customer wants; another machine as load balancer is not an option. They suggest making the client software smarter, so it figures out which host to go to. In other words: we should write our own failover/load balancer inside the client software.
What other options do we have?
Install the Network Load Balancing feature on each of the virtual machines to give the cluster a single IP address (is this even possible?). Something like https://www.poweradmin.com/blog/configuring-network-load-balancing-in-windows-server/
Suggest an API gateway outside the cluster, like KONG https://getkong.org/
Something else ?
PS: The client applications do not send many requests per second, maybe a few per minute.
We had a very similar problem: we have many services and a Service Fabric cluster that runs on-premises. When it was time to use a load balancer, we installed IIS on the same machines where the Service Fabric cluster runs. As IIS is a good load balancer, we use it as a reverse proxy only for the API gateway. Kestrel hosting is used for the other services that communicate over HTTP. The API gateway microservice is the single entry point for all clients and always has a static URI inside SF; we used that URI to configure IIS.
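For illustration, a minimal web.config sketch of such an IIS reverse-proxy rule might look like the following; it assumes the URL Rewrite and Application Request Routing (ARR) modules are installed with proxying enabled, and the gateway address is a placeholder:
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="ApiGatewayProxy" stopProcessing="true">
          <match url="(.*)" />
          <!-- Forward every request to the API gateway's static URI inside the cluster -->
          <action type="Rewrite" url="http://localhost:8080/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>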
If you do not have the possibility to use IIS, then look at "Using nginx as HTTP load balancer".
You don't need another machine just for HTTP forwarding. Just use/run it as a service on the cluster.
Did you consider using the built-in Reverse Proxy of Service Fabric? This runs on all nodes, and it will forward HTTP calls to services inside the cluster.
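For example, if the reverse proxy is configured on port 19081 (the port used in the documentation examples), a caller can reach a service with a URL of roughly this shape, where the application and service names are placeholders:
http://<node-ip-or-fqdn>:19081/MyApp/MyService/api/values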
You can also run nginx as a guest executable or inside a Container on the cluster.
We also faced the same situation when we started working with a Service Fabric cluster. We configured Application Gateway as a proxy, but it would not provide functions like HTTP-to-HTTPS redirection.
For that reason, we configured Nginx instead of Azure Application Gateway as the proxy to the Service Fabric application.

Scaling Azure Container Service with private ports on containers

In our organization, we are currently trying out the Azure Container Service with Docker Swarm. We have developed a Web API project based on .NET Core and created containers out of it. We have exposed the Web API on the container's private port (3000). We want to scale this to, say, 15 containers on three agent nodes, while still accessing the Web API through one single Azure load balancer URL on public port 8080.
I believe we would need an internal load balancer to do this, but there is no documentation around it. I have seen this article on DC/OS, but we are using Docker Swarm here. Any help?
Azure Container Service uses vanilla Docker Swarm, so any load-balancing solution for Swarm will work in ACS, e.g. https://botleg.com/stories/load-balancing-with-docker-swarm
The same is true for DC/OS, but in this case it is documented in "Load balance containers in an Azure Container Service cluster" - https://azure.microsoft.com/en-us/documentation/articles/container-service-load-balancing/

Link containers in Azure Container Service with Mesos & Marathon

I'm trying to deploy a simple WordPress example (WordPress & MySQL DB) on Microsoft's new Azure Container Service with Mesos & Marathon as the underlying orchestration platform. I already ran this on the services offered by Google (Kubernetes) and Amazon (ECS) and thought it would be an easy task on ACS as well.
I have my Mesos cluster deployed and everything is up and running. Deploying the MySQL container isn't a problem either, but when I deploy my WordPress container I can't get a connection to my MySQL container. I think this might be because MySQL runs on a different Mesos agent?
What I tried so far:
Using the Mesos DNS to get ahold of the MySQL container host (for now I don't really care which container I get ahold of). I set the WORDPRESS_DB_HOST environment variable to mysql.marathon.mesos and specified the host of the MySQL container as suggested here.
I created a new rule for the Agent Load Balancer and a probe for port 3306 in Azure itself; this worked, but it seems like a very complicated way to achieve something so simple. In Kubernetes and ECS, links can simply be defined by using the container name as the hostname.
Another question that came up: what difference is there in Marathon between setting the port in the Port Mappings section and in the Optional Settings section? (See screenshot attached.)
Update: If I SSH into the master node, then I can dig mysql.marathon.mesos; however, I can't get a connection to work from within another container (in my case the WordPress container).
So there are essentially two questions here: one around stateful services on Marathon, the other around port management. Let me first clarify that neither has anything to do with Azure or ACS in the first place; they are both Marathon-related.
Q1: Stateful services
Depending on your requirements (development/testing or prod), you can either use Marathon's persistent volumes feature (simple, but no automatic failover/HA for the data) or, since you are on Azure, a robust solution like I showed here (essentially mounting a file share).
Q2: Ports
The port mapping you see in the Marathon UI screenshot is only relevant if you launch a Docker image and want to explicitly map container ports to host ports in BRIDGE mode; see the docs for details.

How to properly configure the DNS configuration of VirtualBox to resolve Docker container hostnames within the local network?

Here is the context
I have three containers:
Container 1: REST API
Container 2: Web application, a.k.a. "The Dashboard"
Container 3: The database
My Goal
I want this stack to run on top of Mac OS X or Windows in order to build a coherent application accessible from the local network.
What I need - DNS Configuration
When the web application is served by Container 2 to any client on the local network, the browser needs to communicate with the REST API running on Container 1.
I would like to be able to set up, within the web application, the hostname of Container 1, e.g. server-api.local
