Docker swarm unresponsive after scaling VMSS in Azure

I've got a web app running on a Docker swarm in Azure that was set up through the Docker EE For Azure (17.06) bootstrapper. As the website isn't currently processing a large amount of traffic, I scaled the VM size down to the cheapest option (A0). After scaling down, the website was unresponsive.
I SSHed onto a manager node and typing commands was slow. Figuring that I'd scaled down too much, I scaled back up to the previous tier I had been on (D1_v2).
My website remained unresponsive, so I SSHed onto a manager again. The terminal was responsive, but Docker commands such as docker service ls and docker node ls do nothing. The VM in general seems to be working fine, however: I can cat files, run docker version, etc.
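For what it's worth, a minimal set of checks that separates a dead daemon from lost manager quorum (a sketch, assuming a systemd-based manager node; these are standard Docker/systemd commands):

    # Check whether the daemon itself is healthy before blaming swarm state
    sudo systemctl status docker
    sudo journalctl -u docker --since "1 hour ago" | tail -n 50

    # Swarm managers keep cluster state in a Raft log; losing quorum during
    # the resize would explain swarm commands hanging while plain docker
    # commands (docker version, docker info) still work
    docker info --format '{{json .Swarm}}'

If the Swarm section reports an error or an unreachable manager, the Raft state was likely corrupted or quorum was lost when the scale set VMs were deallocated.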
Does anyone have any ideas why this may have happened? Is there any way I can fix it, or is my best option to just provision a new environment?
Thanks

I contacted Azure Support to see if they could provide any further assistance with this. They advised scaling up the number of VMs in my Scale Set, which I couldn't do because we had reached our allocated limit in that zone.
Azure were unable to provide any information as to how this may have happened and recommended provisioning a new environment as the best solution.

Related

Correct container technology on Azure for long-running service

I want to run a Docker container which hosts a server that is going to be long-running (e.g. 24x7).
Initially I looked at Azure Container Instances (ACI), and whilst these seem to fit the bill perfectly, I've been advised they're not designed for long-running containers; they can also prove to be quite expensive to run all the time compared to a basic VM.
So I've been looking at what else I should run this as:
AKS - seems overkill for just one Docker container
App Service for Containers - my container doesn't have an HTTP endpoint, so I believe I will have issues with things like health checks
VM - this all seems a bit manual, as I'd really rather not deal with VM maintenance, and I'm also unsure I can use CI/CD techniques to build / spin up-down / do releases on a VM image (we're using Terraform to deploy infra).
Are there any best practice guides on this? I've tried searching but I'm not finding anything relevant, so I'm assuming I'm missing some key term to get going with this!
TIA
ACI is not designed for long-running (uninterrupted) processes; have a look here.
The recommendation is to use AKS, where you can fully manage the lifecycle of your machines, or just use VMs.
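A minimal sketch of that AKS route (the resource group, cluster, and image names below are hypothetical):

    # Create a small single-node cluster and fetch kubectl credentials
    az aks create --resource-group my-rg --name my-aks --node-count 1 --generate-ssh-keys
    az aks get-credentials --resource-group my-rg --name my-aks

    # Run the container as a one-replica Deployment; Kubernetes restarts it
    # if it dies, which is what makes it suitable for a 24x7 server
    kubectl create deployment my-server --image=myregistry.azurecr.io/my-server:latest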

Cleaning up old Docker images on Service Fabric cluster

I have a Service Fabric cluster with 5 Windows VMs on it. I deploy an application that is a collection of about 10 different containers. Each time I deploy, I increment the tag of the containers with a build number. For example:
foo.azurecr.io/api:50
foo.azurecr.io/web:50
Our build system continuously builds each service, tags it, pushes it to Azure, increments all the images in the ApplicationManifest.xml file, and then deploys the app to Azure. We probably build a new version a few times a day.
This works great, but over the course of a few weeks, the disk space on each VM fills up. This is because each VM still has all those old Docker images taking up disk space. Looking at it right now, there's about 50 gigs of old images sitting around. Eventually, this caused the deployment to fail.
My Question: Is there a standard way to clean up Docker images? Right now, the only idea I have is to create some sort of Windows Scheduler task that runs docker image prune --all every day or something. However, at some point we want to be able to create new VMs on the fly as needed, so I'd rather each VM be a "stock" image. The other idea would be to use the same tag each time, such as api:latest and web:latest. However, then I'd have to figure out a way to get each VM to issue a docker pull command to get the latest version of the image.
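For concreteness, that scheduled-task idea would only be a few lines (the task name and schedule below are placeholders):

    REM Hypothetical daily cleanup task registered on each VM; /RU SYSTEM
    REM runs it without a logged-in user
    schtasks /Create /TN "DockerImagePrune" /SC DAILY /ST 03:00 ^
        /TR "docker image prune --all --force" /RU SYSTEM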
Has anyone solved this problem before?
You can configure PruneContainerImages to True. This will enable the Service Fabric runtime to remove unused container images. See this guide.
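For reference, that setting lives in the Hosting section of the cluster's fabricSettings; a sketch of the fragment (the surrounding ARM template is omitted):

    "fabricSettings": [
        {
            "name": "Hosting",
            "parameters": [
                { "name": "PruneContainerImages", "value": "true" }
            ]
        }
    ]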

Can containers in Swarm mode be automatically scaled up when the load is high?

So we are getting started with Docker containers and Swarm mode on Windows. Currently, we have installed Docker for Windows, enabled Swarm mode in single-node mode, scaled services, etc.
Now we are looking at whether there is a way to automatically create new containers on the node(s) when the load on the existing containers is high.
There is a way to monitor the load on the nodes, for example if the memory on a node is high. I'm aware that there can be some automated node creation that will host new containers if the load on the nodes is high, referring to this example of doing this. But is there a way to monitor the containers on a container host/swarm?
We are aiming to host a web app running on a microsoft/iis image, which currently works fine. But we wanted to know if there is a way to handle the possible incoming load without it leading the system to fail, or having to manually create new containers.
The current environment is a local test VM on our servers, and the goal is to do all this in a Microsoft Azure VM, also running Windows Server 2016.
Also, what would you suggest as a tool/solution for generating traffic load on a website? Somehow we will have to test the whole concept.
I also want to add that I am a newbie with Docker and Swarm mode, so there is a possibility that I'm phrasing this the wrong way.
Any suggestions will be appreciated! :)
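For reference: Swarm mode has no built-in autoscaler, so solutions along these lines poll container metrics and call docker service scale. A minimal sketch of that loop (the service name and CPU threshold are assumptions; written as POSIX shell for brevity, though the same Docker CLI calls work from PowerShell):

    #!/bin/sh
    SERVICE=web
    THRESHOLD=70

    # Average CPU% across the service's task containers (named SERVICE.N.xxx)
    CPU=$(docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}' \
        | awk -v s="$SERVICE" '$1 ~ "^"s"\\." { gsub("%","",$2); sum+=$2; n++ }
                               END { print (n ? int(sum/n) : 0) }')

    # Add one replica when the average load is above the threshold
    if [ "$CPU" -gt "$THRESHOLD" ]; then
        CURRENT=$(docker service ls --filter "name=$SERVICE" \
            --format '{{.Replicas}}' | cut -d/ -f1)
        docker service scale "$SERVICE=$((CURRENT + 1))"
    fi

Run it from cron or a scheduled task every minute or so. For generating test traffic, common load-testing tools such as Apache JMeter or Apache Bench (ab) work against any HTTP endpoint.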

Ship docker image as an OVF

I have developed an application and am using Docker to build it. I would like to ship it as a VMware OVF package. What are my options? How do I ship it so a customer can deploy it in their VMware environment?
Also I am using a base Ubuntu image and installed NodeJS, MongoDB and other dependencies on it. But I would like to configure my NodeJS based application and MongoDB database as a service within the package I intend to ship. I know how to configure these as a service using init.d on a normal VM. How do I go about this in Docker? Should I have my init.d files in my application folder and copy them over to Docker container during build? Or are there better ways?
Appreciate any advice.
Update:
The reason I ask this question is: my target users need not necessarily know Docker. The application should be easy to deploy for someone who does not have Docker experience. Having all services in a single VM makes it easy to troubleshoot issues: all log files for the different services will be saved under /var/log, and we can see the status of all the different services at once, rather than the user having to look into each Docker service and probably troubleshoot issues with Docker itself.
But at the same time, I find it convenient to build the application the Docker way.
VMware vApps are usually made up of multiple VMs running to provide a service. They may have start-up dependencies, etc.
Using Docker, you can have those VMs as containers running on a single Docker host VM, so a single VM removes the need for a vApp.
On the other hand, the containerization philosophy pushes us towards microservices; in short, in your case, that means putting each service in a separate container. Then write a Docker Compose file to bring the containers up and add it to start-up. After that you can make an OVF of your Docker host VM and ship it.
A better way, in my opinion, is to create Docker images, put them in your repository, and let the customers pull them. Then provide a Docker Compose file for them.
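For illustration, a minimal sketch of the kind of Compose file described above (image names, ports, and volume paths are hypothetical):

    # docker-compose.yml
    version: "3"
    services:
      db:
        image: mongo
        volumes:
          - ./data:/data/db
        restart: always
      app:
        image: my-node-app        # your NodeJS application image
        ports:
          - "80:3000"
        depends_on:
          - db
        restart: always

The restart: always policy is what replaces the init.d-style service management mentioned in the question: the Docker daemon brings the containers back up on boot and on failure, so there is no need to copy init scripts into the images.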

OpenStack Horizon not starting up

I have installed single-node OpenStack in my VM, but when I restart the VM I can't log in to the Horizon web page.
Any ideas?
Thanks.
It looks like you are using DevStack, right? When you restart your VM, some of the DevStack services fail to come back up, so Horizon cannot connect to those services.
You should go to the screen sessions and go through all the windows to restart all of the services.
Not so long ago you could run ./rejoin_stack.sh, but it was removed because (it looks like) it did not do this job right.
So I believe it would be better for you to run ./unstack.sh and then ./stack.sh again.
As said in the same question:
DevStack is not meant and should not be used for running a cloud.
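Concretely, the recovery suggested above, assuming DevStack was cloned to the home directory:

    # Tear down the half-broken services, then redeploy from scratch
    cd ~/devstack
    ./unstack.sh
    ./stack.sh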
