Need to run a Docker image in Azure virtual machine scale sets

I have a Docker image that I want to run in an Azure VMSS, scaled automatically with custom metric-based rules. I need some help on how to achieve this.
I can create a single VM, install Docker, and run the image there, but I'm not sure how to do the same in a VMSS. Do I need to log into each instance of the VMSS and install Docker and run the image there? If so, how will that work when the set scales out to new instances?

It seems that there is an autoscale mechanism in place: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview
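To get Docker onto every instance automatically, including instances created by a scale-out, you can put the install-and-run steps in the scale set's custom data, which cloud-init executes on first boot of each instance. A rough sketch, assuming an Ubuntu image; the registry and image name are placeholders:

```yaml
#cloud-config
# VMSS custom-data sketch: installs Docker and starts the container
# on every instance, including ones added later by autoscale.
package_update: true
packages:
  - docker.io
runcmd:
  - systemctl enable --now docker
  # "myregistry.azurecr.io/myapp:latest" is a placeholder image reference
  - docker run -d --restart unless-stopped --name myapp myregistry.azurecr.io/myapp:latest
```

Because the script runs per instance at boot, nothing has to be done by hand when the set scales out; alternatively, the Docker install can be baked into a gallery image and only the `docker run` left in custom data.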

I wish I had a reliable answer.
I created the base VM and modified /etc/docker/daemon.json to set data-root to /mnt/docker, which has plenty of space.
I tested Docker there (version 20.10.12): docker pull, docker run, docker ps -a, docker rm, docker image ls, and docker system prune all worked.
I then generalized the VM, created an image, hosted it in a compute gallery, and built a VMSS from that image version.
I logged into a running instance.
I successfully did a docker pull, but on docker run (the same command I used on the base VM): poof, a "kernel panic - not syncing" error.
Later, after googling around (some suggested running out of disk space, which was not the problem) and finding this site, I tried to reproduce the failure and, poof, docker run worked. I had changed nothing.
Which kinda sucks. I built the VMSS for Azure DevOps build/release pipelines, and having it work only sometimes (or fail only on the first try, I don't know yet) is not a good solution.

Related

How do I create a custom Docker image?

I have to deploy my application software, a Linux-based package (.bin) file, on a VM instance. Per the system requirements, it needs a minimum of 8 vCPUs and 32 GB RAM.
Now, I was wondering whether it is possible to deploy this software across multiple containers in a Kubernetes cluster that share the CPU and RAM, rather than installing it on a single VM instance.
Is that possible?
Yes, it's possible to achieve that.
You can start by using Docker Compose to build your custom Docker images and bring your applications up quickly.
First, I'll point you to my GitHub docker-compose repo. You can inspect the folders; they are separated by application or server, so each docker-compose.yml builds one app, and you only need to run docker-compose up -d.
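As a minimal illustration of the pattern, a docker-compose.yml for a single service might look like this (the service name, build context, and ports are placeholders):

```yaml
# docker-compose.yml - minimal sketch; "web" and the ports are placeholders
version: "3"
services:
  web:
    build: .              # builds the Dockerfile in the current folder
    ports:
      - "8080:80"         # host port 8080 -> container port 80
    restart: unless-stopped
```

Running `docker-compose up -d` from the same folder builds the image (if needed) and starts the service in the background.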
If you need to create a custom image with Docker, use this command: docker build -t <user_docker>/<image_name> <path_of_files>
<user_docker> = your Docker Hub username
<image_name> = the image name you choose
<path_of_files> = a local path; to build from the current folder, use . (dot)
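For reference, <path_of_files> is expected to contain a Dockerfile describing the image. A minimal sketch; the base image, install step, and startup command are placeholders for your own application:

```dockerfile
# Minimal Dockerfile sketch; base image and commands are placeholders
FROM ubuntu:22.04
# Copy the build context into the image
COPY . /app
WORKDIR /app
# Placeholder install step for a .bin package
RUN chmod +x ./install.bin && ./install.bin --silent
# Placeholder startup command
CMD ["./start.sh"]
```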
After that, you can upload the image to Docker Hub using the following commands.
First, log in with your credentials:
docker login
You can check your local images with:
docker images
Then push the image to the Docker Hub registry:
docker push <user_docker>/<image_name>
Once the image is uploaded, you can use it in different projects; make sure to keep the image lightweight and useful.
Second, I'll show a similar repo, but this one has a Kubernetes configuration in the folder called k8s. This configuration was made for Google Cloud, but I think you can study it to learn how to get started on your new project.
The Nginx service was replaced by an ingress service (ingress-service.yml), and an HTTPS certificate was added (the certificate.yml and issuer.yml files).
If you need to dockerize databases, make sure the database is lightweight, and create a persistent volume using a PersistentVolumeClaim (the database-persistent-volume-claim.yml file); if you store larger data, use a dedicated database server or a managed database service in the cloud.
I hope this information is useful to you.
There are two ways to achieve what you want. The first is to write a Dockerfile and build the image from it; more information about how to write a Dockerfile can be found here. Apart from that, you can create a container from a base image, install all the software and packages, and export it as an image with docker commit. Then you can upload it to an image registry such as Docker Hub or Amazon ECR.

Cleaning up old Docker images on Service Fabric cluster

I have a Service Fabric cluster with 5 Windows VMs in it. I deploy an application that is a collection of about 10 different containers. Each time I deploy, I increment the tag of the containers with a build number. For example:
foo.azurecr.io/api:50
foo.azurecr.io/web:50
Our build system continuously builds each service, tags it, pushes it to Azure, increments all the images in the ApplicationManifest.xml file, and then deploys the app to Azure. We probably build a new version a few times a day.
This works great, but over the course of a few weeks, the disk space on each VM fills up. This is because each VM still has all those old Docker images taking up disk space. Looking at it right now, there's about 50 gigs of old images sitting around. Eventually, this caused the deployment to fail.
My question: is there a standard way to clean up Docker images? Right now, the only idea I have is to create some sort of Windows Scheduler task that runs docker image prune --all every day or so. However, at some point we want to be able to create new VMs on the fly as needed, so I'd rather keep each VM a "stock" image. The other idea would be to use the same tag each time, such as api:latest and web:latest; however, then I'd have to figure out a way to get each VM to issue a docker pull to fetch the latest version of the image.
Has anyone solved this problem before?
You can set PruneContainerImages to True. This enables the Service Fabric runtime to remove unused container images. See this guide.
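In an ARM template, this is a parameter on the Hosting section under the cluster resource's fabricSettings; a sketch of the relevant fragment (the surrounding cluster resource is omitted):

```json
{
  "fabricSettings": [
    {
      "name": "Hosting",
      "parameters": [
        {
          "name": "PruneContainerImages",
          "value": "True"
        }
      ]
    }
  ]
}
```

With this in place, each node periodically removes container images that no active application version references, so old build tags stop accumulating on disk.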

Disable logs for existing containers in docker

I am new to Docker and am running an Ubuntu container on Arch Linux. I use it for debugging and for building software with an older version of gcc. I was running low on disk space and stumbled upon the logs, which I was able to truncate. I don't need the logs, but I don't want to lose the existing container that I created some time back. The solutions I have come across (disabling the logging driver or setting the rotation size to 0) are, as I understand them, applied when creating new containers, but I want to apply them to an existing one.
You can create an image of that container with docker commit, remove the container with docker rm, and then pass --log-driver none to docker run.
If you're new to Docker, consider that it's best to treat containers as ephemeral instances of a given image. You can also maintain a Dockerfile so you can recreate that image with docker build.
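Concretely, the commit-and-recreate sequence could look like this (the container and image names are placeholders; run against your own container):

```shell
# Snapshot the existing container's filesystem as a new image
docker commit my-old-container my-gcc-env:snapshot

# Remove the old container (its writable layer and log config go with it)
docker rm my-old-container

# Recreate the container from the snapshot with logging disabled
docker run -it --name my-gcc-env --log-driver none my-gcc-env:snapshot
```

Note that docker commit captures the filesystem, not data in volumes, so anything stored in a mounted volume needs to be re-mounted on the new container.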

Persisting content across docker restart within an Azure Web App

I'm trying to run a Ghost Docker image on Azure within a Linux Docker container. This is incredibly easy to get up and running using a custom Docker image for Azure Web App on Linux and pointing it at the official Docker Hub image for Ghost.
Unfortunately, the official image stores all data under the /var/lib/ghost path, which isn't persisted across restarts, so whenever the container is restarted all my content gets deleted and I end up back at a default Ghost install.
Azure won't let me execute arbitrary commands; you basically point it at a Docker image and it fires off from there, so I can't use the -v command-line parameter to map a volume. The Docker image does have an entry point configured, if that would help.
Any suggestions would be great. Thanks!
Set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings, and the /home directory will be mapped from your outer Kudu instance:
https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq
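If you use the Azure CLI, the setting can be applied like this (the resource group and app names are placeholders):

```shell
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-ghost-app \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
```

The app restarts after the setting changes, so anything the container wrote outside /home before the change will not survive.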
You have a few options:
You could mount a file share inside the Docker container by creating a custom image, then storing data there. See these docs for more details.
You could switch to Azure Container Instances, as they provide volume support.
You could switch to the Azure Container Service. This requires an orchestrator, like Kubernetes, and might be more work than you're looking for, but it also offers more flexibility, provides better reliability and scaling, and other benefits.
You have to use a shared volume that maps the container's /var/lib/ghost directory to a host directory. This way, your data will persist in the host directory.
To do that, use the following command:
$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
I have never worked with Azure, so I'm not 100 percent sure the following applies, but if you drive Docker via the CLI there is a good chance it does.
Persistence in Docker is handled with volumes. They are basically mounts that map a directory inside the container's file-system tree to a directory on the outside. From your text I understand that you want to store the content of the container's /var/lib/ghost path in /home/site/wwwroot on the outside. To do this (note that -v takes host-path:container-path) you would call Docker like this:
$ docker run [...] -v /home/site/wwwroot:/var/lib/ghost ghost
Unfortunately, setting persistent storage (or bring-your-own storage) to a specific path is currently not supported in Azure Web Apps on Linux.
That said, you can play with SSH and try to configure Ghost to point to /home/ instead of /var/lib/.
I have prepared a Docker image here: https://hub.docker.com/r/elnably/ghost-on-azure that adds SSH capability; the Dockerfile and code can be found here: https://github.com/ahmedelnably/ghost-on-azure/tree/master/1/alpine.
Try it out by configuring your web app to use elnably/ghost-on-azure:latest, browse to the site (to start the container), and go to the SSH page at <yourapp>.scm.azurewebsites.net. To learn more about SSH, check this link: https://aka.ms/linux-ssh.

Ship docker image as an OVF

I have developed an application and am using Docker to build it. I would like to ship it as a VMware OVF package. What are my options? How do I ship it so a customer can deploy it in their VMware environment?
Also, I am using a base Ubuntu image and have installed NodeJS, MongoDB, and other dependencies on it. I would like to configure my NodeJS-based application and the MongoDB database as services within the package I intend to ship. I know how to configure these as services using init.d on a normal VM. How do I go about this in Docker? Should I keep my init.d files in my application folder and copy them over to the Docker container during the build? Or are there better ways?
I'd appreciate any advice.
Update:
The reason I ask is that my target users won't necessarily know Docker. The application should be easy to deploy for someone who does not have Docker experience. Having all services in a single VM makes it easy to troubleshoot issues: all the log files for the different services end up under /var/log, and we can see the status of all the services at once, rather than the user having to look into each Docker service (and probably troubleshoot issues with Docker itself).
But at the same time, I find it convenient to build the application the Docker way.
VMware vApps are usually made of multiple VMs running together to provide a service; they may have startup dependencies, etc.
Using Docker, you can run those VMs as containers on a single Docker-host VM, so a single VM removes the need for a vApp.
On the other hand, the containerization philosophy asks us to use microservices. The short explanation, in your case: put each service in a separate container, then write a docker-compose file to bring the containers up and add it to startup. After that you can make an OVF of your Docker-host VM and ship it.
A better way, in my opinion, is to create Docker images, put them in your repository, and let the customers pull them, then provide a docker-compose file for them.
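For the NodeJS + MongoDB setup described in the question, the docker-compose file might look roughly like this (image names, ports, and the volume name are placeholders):

```yaml
# docker-compose.yml sketch for a Node app plus MongoDB; names are placeholders
version: "3"
services:
  app:
    image: myrepo/myapp:latest   # placeholder application image
    ports:
      - "3000:3000"
    depends_on:
      - mongo                    # start MongoDB before the app
    restart: unless-stopped      # restarts take the place of an init.d service
  mongo:
    image: mongo:6
    volumes:
      - mongo-data:/data/db      # persist database files across restarts
    restart: unless-stopped
volumes:
  mongo-data:
```

The restart policies make Docker itself supervise the services across reboots of the host VM, which covers most of what the init.d scripts would have done.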
