Simple way to copy files onto a Windows-based Azure container instance

Azure Files volume mounting is not supported in Windows containers.
I'm aware I can use AzCopy with Azure Files, but I was wondering if there is a simpler way that doesn't involve creating an Azure storage resource, since that would add the work of maintaining the creation and teardown of those storage accounts.
Ideally, I would like the host agent (the one running the create-container command) to simply copy the files directly into the container instance, so that the files are tied to the execution of the hosting agent.

As far as I know, there is no way to copy files into a Windows-based Azure Container Instance other than from the command line, and AzCopy is fine for that. What you describe is not possible on the host agent; you have no access to the ACI host at all. Additionally, ACI is better suited to quickly testing and running images.
If you want to copy files and have more control over your containers, I recommend AKS. You can run Windows-based containers on AKS with Windows nodes, and Azure Files volumes are also available for Windows containers there. See the information here.
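For reference, a rough sketch of the AzCopy route mentioned above; the resource group, storage account, share and container group names, and the SAS token, are placeholders, and it assumes azcopy.exe is available inside the image:

    # Upload the files from the agent to an Azure Files share.
    azcopy copy "C:\agent\payload" "https://mystorage.file.core.windows.net/myshare?<SAS>" --recursive

    # Open a shell in the running Windows container...
    az container exec --resource-group myrg --name mycontainergroup --exec-command "cmd"

    # ...and inside the container, pull the files down, since the share cannot be mounted.
    azcopy copy "https://mystorage.file.core.windows.net/myshare/payload?<SAS>" "C:\app\data" --recursive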

Related

Create an Azure VM copy locally and deploy it on a different instance

I want to create a baseline Azure VM, install SQL and some third-party software I need on it, and create a copy/backup of it locally.
This copy/backup can then be redeployed to a new VM for a new client.
Currently I'm spending 1 to 2 days setting up this VM, installing and configuring the server.
How do I do it?
Use any configuration management tool: Ansible, Puppet, Chef...
Docker is the simplest thing I can think of. Upload your own image to a registry and then spawn as many VMs using it as you want.
See this for some pointers: Deploy image from Azure Container Registry to an Azure Linux Virtual Machine
Ansible and other CM tools will give you more functionality for more effort.
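If you do still want the VM-image route the question describes (rather than Docker or a CM tool), a minimal sketch with the Azure CLI could look like this; all names are placeholders and the Windows VM must be sysprepped before capture:

    # Capture a reusable image of the configured baseline VM.
    az vm deallocate --resource-group myrg --name baseline-vm
    az vm generalize --resource-group myrg --name baseline-vm
    az image create --resource-group myrg --name baseline-image --source-vm baseline-vm

    # Deploy a new client VM from the captured image (use the image's full
    # resource ID if it lives in a different resource group).
    az vm create --resource-group myrg --name client-vm --image baseline-image --admin-username azureuser --admin-password '<password>'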

Access Azure Virtual Machine File Structure from outside the VM

We have Windows Server 2016 Azure Virtual Machines using managed disks.
I am trying to create an Azure Data Factory pipeline that will let me copy certain files from a folder on the hard drives of those VMs to our Azure SQL Server. I was quite surprised to see no ADF connectors available for Azure VMs; then I checked Logic Apps and hit the same issue, no connectors for Azure VMs there either.
Then I did some Googling to find out how, in general, you can access an Azure VM file structure from outside (without using Remote Desktop) and was even more surprised to see that there isn't any info out there about this (not even that it can't be done).
Is it possible for me to access the file system of my Windows Server 2016 Azure VM without using Remote Desktop? The VMs are using managed disks, if that makes any difference.
You can ssh to your_vm_ip (assuming an SSH server is running on the VM) and then use the rsync command to download or upload files:
rsync -au --progress your_user_name@ip.ip.ip.ip:/remote_dir/remote_dir/ /local_dir/local_dir/
Otherwise, you can install Dropbox on the VM and on your local computer; transferring small files through the shared Dropbox folder is very fast.
Here are some instruction slides on the Azure storage system and the Storage Explorer app.
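If you go the Azure Storage route those slides point at, one hedged sketch is to push the files from inside the VM to a blob container that Data Factory can then read with its Blob Storage connector (account name, container, local path and SAS token are placeholders):

    # Run inside the VM, e.g. on a schedule, to stage the files for ADF.
    azcopy copy "D:\data\exports" "https://mystorage.blob.core.windows.net/vm-exports?<SAS>" --recursive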

Azure Web Apps For Containers persistent storage

I am trying to build two different services that will run on Azure Web Apps for Containers. I am creating Docker images and storing them in Azure Container Registry. I want to share a single piece of persistent storage between these two services. I understood from blog posts that you can mount the /home directory, but that it cannot be shared between two services.
There is a Docker plugin, Cloudstor, with which I can create a volume, but I'm not sure how to use that volume in Web Apps for Containers, since the App Service runs the docker command itself. Does anybody know how we can use a volume created with the plugin?
In my opinion, Web Apps for Containers should not be used here. I think it is better to get a Docker host machine as a VM and then work with the normal Docker features. This is also the approach Microsoft describes in their docs for multi-container scenarios: https://learn.microsoft.com/de-de/azure/virtual-machines/linux/docker-compose-quickstart
Things Microsoft should do:
give Kudu a proper Docker CLI
map storage accounts to Docker volumes via the Azure portal / Azure CLI
Create a storage account and mount an Azure Files share into the Docker image somewhere under /home.
This will be easiest if the two service instances are in the same resource group as the storage account.
What is your reason for sharing a single storage instance?
Without experimenting I can't guarantee that the same storage container can be shared between two App Services; it depends on your needs. I expect two containers in the same storage account can be mounted into your two Docker images.
Without knowing a little more this is the most I can contribute. All the best.
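A minimal sketch of that suggestion using the Azure CLI's bring-your-own-storage mounting for App Service; all names are placeholders, and you would run the same command against the second app to share the same file share:

    az webapp config storage-account add \
        --resource-group myrg \
        --name my-webapp-1 \
        --custom-id sharedfiles \
        --storage-type AzureFiles \
        --account-name mystorage \
        --share-name shared \
        --access-key "<storage-account-key>" \
        --mount-path /home/shared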

Run a Docker container on Azure

I have a simple Docker container which runs just fine on my local machine. I was hoping to find an easy checklist for publishing and running my Docker container on Azure, but couldn't find one. I only found https://docs.docker.com/docker-for-azure/, but this document kind of leaves me alone when it comes to actually copying my local Docker container to Azure. Isn't that supposed to be very easy? Can anybody point me in the right direction on how to do this?
But it is really easy, once you know where to find the docs :-). I would take the Azure docs as a starting point, as there are multiple options when it comes to hosting containers in Azure:
If you're looking for this...
Simplify the deployment, management, and operations of Kubernetes -> Azure Container Service (AKS)
Easily run containers on Azure with a single command -> Container Instances
Store and manage container images across all types of Azure deployments -> Container Registry
Develop microservices and orchestrate containers on Windows or Linux -> Service Fabric
Deploy web applications on Linux using containers -> App Service
Based on your info I would suggest storing the image in Azure Container Registry and hosting the container using Azure Container Instances. That way there is no VM to manage.
There is an excellent tutorial you could follow (I skipped the first step since it involves creating a Docker image, and you already have one).
Another complete guide to pushing your image to Azure and creating a running container can be found here.
The good thing about Azure Container Instances is that you only pay for what you actually use. Azure Container Registry is a private image repository hosted in Azure; of course you could also use Docker Hub, but using ACR makes it all really simple.
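A quick sketch of the ACR side of that suggestion (registry and image names are placeholders):

    az acr create --resource-group myrg --name myregistry --sku Basic
    az acr login --name myregistry
    docker tag myapp:latest myregistry.azurecr.io/myapp:latest
    docker push myregistry.azurecr.io/myapp:latest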
In order to run an image, you simply need to configure a new VM with the Docker daemon. I personally found Azure's documentation to be pretty complex. Assuming you are not trying to scale your service across instances, I would recommend using docker-machine rather than the Azure guide.
docker-machine is a CLI tool published by the Docker team which automatically installs the Docker daemon (and all its dependencies) on a host. All you need to do is provide your Azure subscription and it will automatically create a VM configured appropriately.
In terms of publishing the image, Azure is probably not the right solution. I would recommend one of two things:
Use Docker Hub, which serves as a free hosted Docker image repository. You can simply push images to Docker Hub (or even have them built directly from your Git repository).
Configure a CD tool, such as TravisCI or CircleCI, and use these to build your image and push directly to your deployment.
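A rough sketch of the docker-machine approach described above; the subscription ID, host name and Docker Hub image are placeholders:

    docker-machine create --driver azure \
        --azure-subscription-id <subscription-id> \
        --azure-open-port 80 \
        mydockerhost
    eval $(docker-machine env mydockerhost)   # point the local docker CLI at the new VM
    docker run -d -p 80:80 <dockerhub-user>/myapp:latest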
To run your Docker image inside ACI, you can use Azure Container Registry.
Step 0: Create an Azure Container Registry.
Step 1: Include a Dockerfile in your application code.
Step 2: Build the code along with the Dockerfile, with a tag, to create a Docker image (docker build -t imagename:tag .).
Step 3: Push the Docker image to the Azure Container Registry with an image name and tag.
Step 4: Now create an ACI. While creating it, choose the image type as private and provide the image name, tag, image registry login server, image registry username and image registry password (these details can be found under the Access keys tab of the Azure Container Registry).
Step 5: Choose Linux as the OS; in the networking step you can give your ACI a DNS name, then click Review + create.
Step 6: Once the ACI is created, go to its overview to see the FQDN; using the FQDN you can access your application running inside the Azure Container Instance.
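Roughly the same steps can be scripted with the Azure CLI; resource names, credentials and the DNS label below are placeholders:

    az container create --resource-group myrg --name myaci \
        --image myregistry.azurecr.io/imagename:tag \
        --registry-login-server myregistry.azurecr.io \
        --registry-username <acr-username> \
        --registry-password <acr-password> \
        --os-type Linux \
        --dns-name-label myapp-demo \
        --ports 80

    # Print the FQDN from step 6.
    az container show --resource-group myrg --name myaci --query ipAddress.fqdn --output tsv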

Mount a volume while using a docker container in Azure App Service

I've deployed a Web App on Azure and use a Docker container from the public registry (my own image) to host my website. But users can upload pictures, and that data is stored in JSON files on the server. Of course I want to write these files to a mounted volume outside of the container, so that I can redeploy an updated version of my website without losing data.
Is that possible with Web Apps? Or do I need to move on to an Ubuntu VM with Docker on Azure? What I like about Web Apps is that I don't have to worry about managing the VM and only have to care about my container.
This blog post is a great start for understanding Azure's strategy regarding volume mounting (ASL = App Service on Linux; ASW = App Service on Windows):
... However, in this case, we would like to leverage the regular App Service Filesystem, so we can interact with the application using FTP. When a container is deployed, ASL mounts the equivalent of D:\home path on ASW to /home (using volume mount in Docker). Now when that happens, it is up to your container to map the corresponding paths into the application. In order to understand how this works more closely, take a look at the official Dockerfile used in PHP7 container on ASL.
https://hajekj.net/2016/12/25/building-custom-docker-images-for-use-in-app-service-on-linux/
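One way to apply that advice inside your own image, sketched as a container startup script; the paths and start command are hypothetical, and it assumes the persistent /home mount is enabled for the app (WEBSITES_ENABLE_APP_SERVICE_STORAGE=true):

    #!/bin/sh
    # Keep uploaded pictures and JSON files under /home, which App Service persists
    # across container redeployments, and expose them to the app via a symlink.
    mkdir -p /home/data/uploads
    rm -rf /var/www/app/uploads
    ln -s /home/data/uploads /var/www/app/uploads
    exec node /var/www/app/server.js   # placeholder for your real start command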
