I'm trying to deploy a simple WordPress example (WordPress & MySQL DB) on Microsoft's new Azure Container Service with Mesos & Marathon as the underlying orchestration platform. I already ran this on the services offered by Google (Kubernetes) and Amazon (ECS) and thought it would be an easy task on ACS as well.
I have my Mesos cluster deployed and everything is up and running. Deploying the MySQL container isn't a problem either, but when I deploy my WordPress container I can't get a connection to my MySQL container. I think this might be because MySQL runs on a different Mesos agent?
What I tried so far:
Using Mesos-DNS to get hold of the MySQL container's host (for now I don't really care which container I reach). I set the WORDPRESS_DB_HOST environment variable to mysql.marathon.mesos and specified the host of the MySQL container as suggested here.
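For reference, my WordPress app definition looks roughly like this (image tag and credentials are placeholders):

{
  "id": "/wordpress",
  "cpus": 0.5,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "wordpress:latest",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  },
  "env": {
    "WORDPRESS_DB_HOST": "mysql.marathon.mesos:3306",
    "WORDPRESS_DB_PASSWORD": "secret"
  }
}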
I created a new rule for the Agent Load Balancer and a probe for port 3306 in Azure itself. This worked, but it seems like a very complicated way to achieve something so simple. In Kubernetes and ECS, links can simply be defined by using the container name as the hostname.
Another question that came up: what is the difference in Marathon between setting the port in the Port Mappings section and in the Optional Settings section? (See the attached screenshot.)
Update: If I SSH into the master node, then I can dig mysql.marathon.mesos; however, I can't get a connection to work from within another container (in my case the WordPress container).
So there are essentially two questions here: one around stateful services on Marathon, the other around port management. Let me first clarify that neither has anything to do with Azure or ACS in the first place; they are both Marathon-related.
Q1: Stateful services
Depending on your requirements (development/testing or prod) you can either use Marathon's persistent volumes feature (simple, but no automatic failover/HA for the data) or, since you are on Azure, a robust solution like the one I showed here (essentially mounting a file share).
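For the persistent volumes route, here is a minimal sketch of a stateful MySQL app definition, along the lines of the example in Marathon's stateful-services docs (sizes and the password are placeholders):

{
  "id": "/mysql",
  "cpus": 1,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mysql:5.7",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 3306, "hostPort": 0, "protocol": "tcp" }
      ]
    },
    "volumes": [
      { "containerPath": "/var/lib/mysql", "hostPath": "mysqldata", "mode": "RW" },
      { "containerPath": "mysqldata", "mode": "RW", "persistent": { "size": 512 } }
    ]
  },
  "env": { "MYSQL_ROOT_PASSWORD": "secret" },
  "residency": { "taskLostBehavior": "WAIT_FOREVER" }
}

Note the pairing: the persistent volume uses a relative containerPath, which is then mounted into the container through the hostPath entry, and residency pins the task to the agent holding the data, which is exactly why there is no automatic failover.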
Q2: Ports
The port mapping you see in the Marathon UI screenshot is only relevant if you launch a Docker image and want to explicitly map container ports to host ports in BRIDGE mode; see the docs for details.
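For illustration, a BRIDGE-mode container section could look like this (ports are made up). hostPort 0 tells Marathon to assign a random host port, and servicePort is the stable port other tooling can refer to:

"container": {
  "type": "DOCKER",
  "docker": {
    "image": "mysql:5.7",
    "network": "BRIDGE",
    "portMappings": [
      { "containerPort": 3306, "hostPort": 0, "servicePort": 10006, "protocol": "tcp" }
    ]
  }
}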
Related
We are pretty new to AWS and looking to deploy multiple services into one EC2 instance.
Each micro-service is developed in its own repository.
Each service will have its own endpoint URL
Services may talk to each other
Services can be updated/deployed separately
Do we need a beanstalk for each? I hope not.
Thank you in advance
So the way we tackled a similar issue at our workplace was to leverage the multi-container docker platform supported by Elastic Beanstalk in most AWS regions.
The way this works, in brief: we had a dedicated repository for each of our services in ECR (Elastic Container Registry), to which the different "versioned" images were pushed using a deploy script.
Once that is configured and set up, all you need to do is deploy a Dockerrun.aws.json file, which basically lists all the apps you want to deploy as part of the Docker cluster onto one EC2 instance (make sure it is big enough to handle multiple applications). This is also the file where you define the links between applications (so they can talk to one another), port configurations, logging drivers and groups (yes, we used AWS CloudWatch for logging) and many other fields. This JSON is very similar to the docker-compose.yml used to bring up your stack for local development and testing.
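As a rough sketch (names, account ID, region and ports are made up), a two-container Dockerrun.aws.json v2 with a link could look like this:

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:1.0",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 80, "containerPort": 3000 }
      ]
    },
    {
      "name": "worker",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/worker:1.0",
      "essential": true,
      "memory": 256,
      "links": ["api"]
    }
  ]
}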
I would suggest checking out the sample configuration that Amazon provides for more information. Also, I found the Docker documentation to be pretty helpful in this regard.
Hope this helps!!
It is not clear if you have a particular tool in mind. If you are using a tool to deploy a single micro-service, deploying multiple should work the same way.
How does one deploy multiple micro-services in Node on a single AWS EC2 instance?
Each micro-service is developed in its own repository.
Services can be updated/deployed separately
This should be the same as deploying a single micro-service. As long as each service runs on its own path and port, it should be fine.
Each service will have its own endpoint URL
You can use nginx as a reverse proxy that redirects requests from port 80 to the required port of your micro-service.
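A minimal sketch of such an nginx config (service names, paths and ports are hypothetical):

# /etc/nginx/conf.d/services.conf
server {
    listen 80;

    # users service running on port 3001
    location /users/ {
        proxy_pass http://127.0.0.1:3001/;
    }

    # orders service running on port 3002
    location /orders/ {
        proxy_pass http://127.0.0.1:3002/;
    }
}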
Services may talk to each other
This again should not be an issue. You can either call them directly with the port number, or via the fully qualified name and come back through nginx.
I am setting up a multi-container application on a Mesos cluster on Azure using Azure Container Service and am currently stuck on linking containers.
My setup brief info:
- Mesos cluster is deployed on Azure using Azure Container Service
- It's a 3-container application: A, B and C
- B depends on A, and C depends on A & B
- A is deployed currently
How can I link the above containers?
Thanks,
Suraj
If by linking you mean Docker's --link, then that's a deprecated practice; inter-container communication should be done using Docker networks and port mappings.
For DC/OS - you have some different ways to achieve this (also called Service Discovery). I have written a blog post explaining these different tools by examples: http://blog.itaysk.com/2017/04/28/dcos-service-discovery-and-load-balancing-by-example
If you don't want to read through that long post and looking for a recommendation: Try using VIPs.
When creating the application (either from the Marathon or the DC/OS UI), look for the 'VIP' setting. Enter an IP there (it can be a made-up IP) and a port. Your service will be discoverable under this IP:Port.
More on VIPs: https://dcos.io/docs/1.9/networking/load-balancing-vips/virtual-ip-addresses/
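In the raw app definition JSON, the VIP ends up as a label on the port mapping; a sketch (the IP and port below are made up):

"portMappings": [
  {
    "containerPort": 3306,
    "protocol": "tcp",
    "labels": { "VIP_0": "10.1.2.3:3306" }
  }
]

Other containers can then reach the service at 10.1.2.3:3306, no matter which agent the task lands on.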
I have created an Azure Service Fabric Cluster using "Windows 2016 DataCenter with Containers" OS and enabled Reverse Proxy listening on Port 80 on the cluster. We intend to deploy our legacy ASP.NET MVC & WCF applications on this cluster.
In our existing deployment model we have services co-located on the same host communicating with each other, because of chatty communication and low latency requirements. Is it possible for apps hosted in a Windows container to communicate with other Windows container apps on the same node? Basically I would like to have node affinity between two Windows container apps on the same Service Fabric node.
I tried it with a sample application, and it looks like the only option is to use the private IP of the container, which is assigned dynamically. Is it possible to pass the --ip-address parameter while instantiating containers on a Service Fabric cluster?
[Update: 04/05/2017]
Service Fabric Cluster Version: 5.5.219.0
Total NodeTypes: 1
Total Nodes: 3
Both containers are deployed on all three nodes. An Azure internet-facing load balancer is used to expose the Service Fabric reverse proxy. All services are accessed internally as well as externally via the reverse proxy.
Gaurav
There is what you can do now in the v5.5 release and what is coming in the next release.
For v5.5 Release
This GitHub repo https://github.com/Azure-Samples/service-fabric-dotnet-containers shows how to communicate between Windows containers, mapping a host port to the private port on the container using the PortBinding policy in the application manifest.
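A sketch of what that PortBinding policy looks like in ApplicationManifest.xml (service and endpoint names are illustrative):

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="WebServicePkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <!-- map the endpoint's host port to port 80 inside the container -->
      <PortBinding ContainerPort="80" EndpointRef="WebServiceEndpoint" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>

WebServiceEndpoint refers to an Endpoint declared in the corresponding service manifest; the port declared there becomes the host port.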
You can then communicate either by calling the Naming Service over REST (this is what this sample does) or, as you say and which is easier, via the reverse proxy, which has a URL format. If both containers are deployed on the same machine, the communication goes through the local reverse proxy and you get local calls between the containers. See https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reverseproxy. As Mikkel points out, you have to tell Service Fabric that you want the containers on the same VM, through placement constraints.
Note: for the GitHub sample above to work on your local machine, you need to update the local dev manifest ClusterManifestTemplate.xml in the C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\NonSecure\FiveNode folder and change the IPAddressOrFQDN attribute from "localhost" to the actual IP of the machine it is running on. This is due to a Windows Server networking bug, documented in the Git repo above. I.e.
<Node NodeName="_Node_0" IPAddressOrFQDN="ComputerFullName" IsSeedNode="true" NodeTypeRef="NodeType0" FaultDomain="fd:/0" UpgradeDomain="0" />
becomes
<Node NodeName="_Node_0" IPAddressOrFQDN="192.1.3.50" IsSeedNode="true" NodeTypeRef="NodeType0" FaultDomain="fd:/0" UpgradeDomain="0" />
Coming for v5.6 Release
In the next release we have added a DNS server layered on top of the Naming Service, so that you can use DNS names in place of the fabric:/ names used by the reverse proxy. This means that within the Service Fabric cluster you can make http://[domainname]/path calls between the containers instead. The same rules apply for calls between local containers on the same machine as for the reverse proxy.
Using Service affinity would be an option: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-resource-manager-advanced-placement-rules-affinity
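As a sketch, affinity is declared on the dependent service via a ServiceCorrelation pointing at the service it should co-locate with (application and service names are hypothetical):

<DefaultServices>
  <Service Name="ContainerB">
    <StatelessService ServiceTypeName="ContainerBType" InstanceCount="-1">
      <SingletonPartition />
      <ServiceCorrelations>
        <!-- keep ContainerB's instances on the same nodes as ContainerA's -->
        <ServiceCorrelation ServiceName="fabric:/MyApp/ContainerA" Scheme="Affinity" />
      </ServiceCorrelations>
    </StatelessService>
  </Service>
</DefaultServices>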
I'm using Azure for the first time to deploy a web app. More specifically, I'm using the Docker Container Service to do this. I already have one instance of it deployed. But, I want to also deploy 2 more instances of the same web app. I want each instance to have a different URL. What is the best way of doing this? Do I have to add a new container service for each new instance and repeat the steps I did for deploying the first instance?
In the case of Azure Container Service (ACS),
you first choose a container orchestrator, i.e. Swarm, DC/OS or Kubernetes (creating replicas works differently in each of them).
In the case of Swarm, either create a separate application container for the same application with a different endpoint and use the automatic discovery feature for the different URL(s), or choose a reverse-proxy load balancer such as Nginx, which will serve the same URL on a different port (my blog here might help you).
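For example, with Swarm you could publish a second instance of the same image on a different port and let the proxy map each URL to its port (image name and ports are made up):

# second instance of the same web app, published on host port 8081
docker service create --name webapp-2 --publish 8081:80 myregistry/webapp:latest

# nginx can then route app2.example.com to port 8081 while the first
# instance keeps serving on 8080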
Creating replicas across a cluster is used for load balancing, routing mesh, scaling, rolling updates, etc.
I installed the Azure plugin for Elasticsearch according to this tutorial.
Azure Elasticsearch, which uses the template from here:
github.com/Azure/azure-quickstart-templates/tree/master/elasticsearch
After it is deployed, I am able to connect to Kibana from the tutorial link above. If I would like to add security to the Azure Elasticsearch deployment, how would that be possible?
Furthermore, how do I access elasticsearch.yml to further customise the config?
I tried to access the VMs, but there are only two IPs I can reach from the Azure portal: the jumpbox and the Kibana public IP.
I tried searching the /etc/ folder but didn't see an elastic folder after I remoted into the server.
Please see this photo for the IP in Azure Portal.
I am also very new to ARM (Azure Resource Manager), which here deploys multiple server nodes connected together. It would be great if someone could explain how Elasticsearch is installed across them. As far as I know, the master node will assign tasks to the data nodes after a request is handled at the client node.
The Elastic version is v2.3.1
Please help.
Once you use the quickstart to install your cluster (of a single node, it sounds like), you are in complete control.
In the case of the template, the jumpbox exists as an access point to pivot into the rest of the cluster. This way you can avoid ever giving your Elasticsearch instances a public IP address, thereby reducing the chance of a drive-by attack on your cluster -- because it's never exposed! For what it's worth, this is a pretty common strategy in operational isolation.
So, to get started, you should be able to SSH into the jumpbox, and from there you can use the private address of the Elasticsearch VM to SSH to it, from the jumpbox.
SSH into jump box
SSH into the rest of the private VMs
Once you have done that, then you should be able to access the elasticsearch.yml file.
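Roughly, the hops look like this (the azureuser name and the config path are assumptions based on a typical package install; substitute your own values):

# from your workstation: SSH to the jumpbox using its public IP
ssh azureuser@<jumpbox-public-ip>

# from the jumpbox: hop to an Elasticsearch VM by its private IP
ssh azureuser@10.0.0.10

# on the Elasticsearch VM the config usually lives here
sudo vi /etc/elasticsearch/elasticsearch.yml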
How do you add security? The only official way to add security to Elasticsearch is to use the Shield plugin. This allows you to encrypt communication to/from Elasticsearch, as well as provide authentication.
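If you go the Shield route on ES 2.3, installation is a plugin install on each node, along these lines (the username and role are examples):

# run on every Elasticsearch node (ES 2.x plugin syntax)
bin/plugin install license
bin/plugin install shield

# then restart the node and add a user, e.g.
bin/shield/esusers useradd my_admin -r admin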
Elastic, the company behind Elasticsearch and Kibana, has its own Azure Quick Start for Elasticsearch that does most of what the template you used does, but it also adds security to it. It may prove to be easier to delete the old cluster and start one from there.