Scaling Azure Container Service with private ports on containers

In our organization, we are currently trying out the Azure Container Service with Docker Swarm. We have developed a Web API project based on .NET Core and created containers out of it. We have exposed the web API on the container's private port (3000). We want to scale this to, say, 15 containers on three agent nodes while still accessing the web API through a single Azure load balancer URL on public port 8080.
I believe we would need an internal load balancer to do this, but there is no documentation around it. I have seen this article on DC/OS, but we are using Docker Swarm here. Any help?

Azure Container Service uses vanilla Docker Swarm, so any load-balancing solution for Swarm will work in ACS, e.g. https://botleg.com/stories/load-balancing-with-docker-swarm
The same is true for DC/OS, but in that case it is documented in "Load balance containers in an Azure Container Service cluster" - https://azure.microsoft.com/en-us/documentation/articles/container-service-load-balancing/
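As a minimal sketch of the pattern in the question, assuming the cluster runs swarm mode (a classic standalone Swarm, as older ACS deployments use, would instead need an external proxy such as the HAProxy setup in the article above) and assuming a made-up image name myregistry/webapi that listens on port 3000 inside the container:

    # Hypothetical sketch: one swarm-mode service, 15 replicas, published on port 8080.
    # The ingress routing mesh answers on 8080 on every agent node, so the Azure load
    # balancer rule for public port 8080 can forward to any agent.
    docker service create \
      --name webapi \
      --replicas 15 \
      --publish 8080:3000 \
      myregistry/webapi:latest

    # Adjust the replica count later without redeploying:
    docker service scale webapi=15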

Related

How to build self-made Azure App Service cluster with docker swarm

Since Azure App Services are too expensive, we would like to build our own App Service Cluster so we can deploy Docker images to custom cloud VMs/workers from different service providers.
It should cover functionalities like:
Deployment Center: selecting Docker images (Docker Hub, GitLab) and deploying them to our cloud workers by tag
Scalability (like App Service Plan): add workers
Load balancing
SSL/Domain Management
FW Integration
Logging Streams
Does an open-source framework with a GUI already exist for this type of "private cloud", one that can be deployed via Docker Swarm, for instance?
Thanks!

Difference between Azure Container Service and Web App for Containers

What is the difference between Azure Container Service and Web App for Containers?
They both seem to offer a fully managed platform on which we can deploy containers. I feel that Web App for Containers must be offering something more, but I don't see it. I've read the Azure Container Service FAQ and the Web App for Containers intro page, but the difference is not obvious to me.
Web App for Containers lets you run your custom Docker container which hosts your web application. By default, the Web App service with Linux OS provides built-in Docker images like PHP 7.0 and Node.js 4.5. But by following the instructions from this webpage you can also host your custom Docker images, which allows you to define your own software stack. The limitation is that you can only deploy one Docker image to an App Service. You can scale the App Service to use multiple instances, but each instance will have the same Docker image deployed. So this allows you to use Docker as a service, but it isn't intended for deploying microservices.
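For illustration, deploying such a custom image could look roughly like this with the Azure CLI (the resource group, plan, app name and image are made up, and flag names may vary between CLI versions):

    # Hypothetical sketch: create a Linux Web App that runs a custom Docker image.
    az webapp create \
      --resource-group MyResourceGroup \
      --plan MyLinuxPlan \
      --name my-custom-container-app \
      --deployment-container-image-name myregistry/myimage:latest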
Azure Container Service (ACS), Azure Kubernetes Service (AKS) and Service Fabric allow you to deploy and manage multiple (different) Docker containers which might also need to communicate with each other. Let's say you implement a shopping website and want to build your web application on a microservices architecture. You end up with one service (= container) which is used for registration and login of users, and another service which is used for the visitors' shopping carts and for purchasing items. Additionally, you have many more small services for all the other tasks. Because the purchasing service is used more frequently than the sign-up/sign-in service, you will need, for example, 6 instances of the sign-up/sign-in service and 12 instances of the cart service. Basically, ACS, AKS and Service Fabric let you deploy and manage all those different microservices.
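On AKS, for instance, the uneven scaling described above could be expressed roughly like this, assuming Kubernetes deployments named signup and cart already exist (both names are made up):

    # Hypothetical sketch: scale two independent microservices to different replica counts.
    kubectl scale deployment signup --replicas=6
    kubectl scale deployment cart --replicas=12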
If you want to know the difference between ACS/AKS and Service Fabric you might want to have a look here.

How can I link docker containers in mesos cluster (dc/os) running on Azure?

I am setting up a multi-container application on a Mesos cluster on Azure using Azure Container Service and am currently stuck on linking the containers.
Brief info about my setup:
- The Mesos cluster is deployed on Azure using Azure Container Service
- It's a three-container application: A, B and C
- B depends on A, and C depends on A and B
- A is currently deployed
How can I link the above containers?
Thanks,
Suraj
If by linking you mean Docker's --link, then that's a deprecated practice; inter-container communication should be done using Docker networks and port mappings.
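As a quick illustration of the Docker-network approach (the image names and port are made up):

    # Hypothetical sketch: containers on the same user-defined network reach each other by name.
    docker network create app-net
    docker run -d --name service-a --network app-net myregistry/service-a:latest
    docker run -d --name service-b --network app-net myregistry/service-b:latest
    # Inside service-b, service-a is now reachable as http://service-a:3000 (no --link needed).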
For DC/OS you have several different ways to achieve this (also called service discovery). I have written a blog post explaining these different tools by example: http://blog.itaysk.com/2017/04/28/dcos-service-discovery-and-load-balancing-by-example
If you don't want to read through that long post and are just looking for a recommendation: try using VIPs.
When creating the application (either from Marathon or the DC/OS UI), look for the 'VIP' setting. Enter an IP there (it can be a made-up IP) and a port. Your service will be discoverable under this IP:Port.
More on VIPs: https://dcos.io/docs/1.9/networking/load-balancing-vips/virtual-ip-addresses/
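For example, assuming you entered the made-up VIP 1.1.1.1 and port 80 for service A, any other container in the cluster (such as B or C) could then reach it directly (the path here is made up):

    # Hypothetical sketch: call a dependency through its VIP from another container in the cluster.
    curl http://1.1.1.1:80/api/health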

Azure Service Fabric Windows Containers - Inter Container Communication

I have created an Azure Service Fabric Cluster using "Windows 2016 DataCenter with Containers" OS and enabled Reverse Proxy listening on Port 80 on the cluster. We intend to deploy our legacy ASP.NET MVC & WCF applications on this cluster.
In our existing deployment model we have services co-located on the same host communicating with each other because of chatty communication and low-latency requirements. Is it possible for apps hosted in Windows containers to communicate with other Windows container apps on the same node? Basically, I would like to have node affinity between two Windows container apps on the same Service Fabric node.
I tried it with a sample application and it looks like the only option is to use the container's private IP, which is dynamically assigned. Is it possible to pass the --ip-address parameter while instantiating containers on a Service Fabric cluster?
[Update: 04/05/2017]
Service Fabric Cluster Version: 5.5.219.0
Total NodeTypes: 1
Total Nodes: 3
Both containers are deployed on all three nodes. An Azure internet-facing load balancer is used to expose the Service Fabric reverse proxy. All services are accessed internally as well as externally via the reverse proxy.
Gaurav
There is what you can do now in the v5.5 release and what is coming in the next release.
For v5.5 Release
This GitHub repo https://github.com/Azure-Samples/service-fabric-dotnet-containers shows how to communicate between Windows containers; you can map a host port to the private port on the container using a PortBinding policy in the application manifest.
You can then communicate either by calling the Naming Service over REST (which is what this sample does) or, as you say and as is easier, via the ReverseProxy, which uses a URL format. If both containers are deployed on the same machine then the communication goes via the local ReverseProxy and you get local calls between the containers. See https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reverseproxy. As Mikkel points out, you have to tell Service Fabric that you want the containers on the same VM, through placement constraints.
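To make the URL format concrete: assuming an application named MyApp with a service named PurchaseService, and the reverse proxy listening on port 80 as described in the question, one container could call the other roughly like this (the application, service and path names are made up):

    # Hypothetical sketch: call a co-located service through the local Service Fabric reverse proxy.
    curl http://localhost:80/MyApp/PurchaseService/api/orders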
Note: for the GitHub sample above to work on your local machine you need to update the local dev manifest ClusterManifestTemplate.xml in the C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\NonSecure\FiveNode folder and change the IPAddressOrFQDN attribute from "localhost" to the actual IP of the machine it is running on. This is due to a Windows Server networking bug, documented in the GitHub repo above, i.e.
<Node NodeName="_Node_0" IPAddressOrFQDN="ComputerFullName" IsSeedNode="true" NodeTypeRef="NodeType0" FaultDomain="fd:/0" UpgradeDomain="0" />
becomes
<Node NodeName="_Node_0" IPAddressOrFQDN="192.1.3.50" IsSeedNode="true" NodeTypeRef="NodeType0" FaultDomain="fd:/0" UpgradeDomain="0" />
Coming for v5.6 Release
In the next release we have added a DNS server layered on top of the Naming Service, so that you can use DNS names in place of the fabric:/ names used by the reverse proxy. This means that within the Service Fabric cluster you can call between the containers with http://[domainname]/path calls instead. The same rules apply for calls between local containers on the same machine as for the reverse proxy.
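Assuming the purchase service is given a DNS name such as purchaseservice.myapp and its container listens on port 80 (both are assumptions), the earlier reverse-proxy call could then be replaced with a plain DNS-based call:

    # Hypothetical sketch: call the same service by its Service Fabric DNS name instead.
    curl http://purchaseservice.myapp/api/orders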
Using Service affinity would be an option: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-resource-manager-advanced-placement-rules-affinity

Azure Docker Container Service Multiple Instances

I'm using Azure for the first time to deploy a web app. More specifically, I'm using the Docker Container Service to do this. I already have one instance of it deployed, but I also want to deploy two more instances of the same web app. I want each instance to have a different URL. What is the best way of doing this? Do I have to add a new container service for each new instance and repeat the steps I did for deploying the first instance?
In the case of Azure Container Service (ACS):
You choose a container orchestrator, i.e. Swarm, DC/OS or Kubernetes (because creating replicas works differently in each of them).
In the case of Swarm, either create a separate application container for the same application with a different endpoint and use the automatic discovery feature for the different URL(s), or choose a reverse-proxy load balancer such as Nginx, which will work on the same URL but on a different port (my blog might help you); see the sketch below.
Creating replicas across a cluster is used for load balancing, the routing mesh, scaling, rolling updates, etc.
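As a rough sketch of the separate-endpoint approach, assuming a swarm-mode cluster and a made-up image name, you could run the same image as three services published on different ports and then let Nginx (or the Azure load balancer) map each URL or host name to the matching port:

    # Hypothetical sketch: three instances of the same web app, each on its own published port.
    docker service create --name webapp1 --publish 8081:80 myregistry/webapp:latest
    docker service create --name webapp2 --publish 8082:80 myregistry/webapp:latest
    docker service create --name webapp3 --publish 8083:80 myregistry/webapp:latest
    # A reverse proxy in front then maps e.g. app1.example.com -> :8081, app2.example.com -> :8082, ...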
