Mount BLOB Container as a directory in App Service

We have an Angular application running on App Service (Windows), and there is a large volume of SCORM packages in Blob storage. To launch a SCORM package, we need to iframe the launcher URL (a Blob URL) inside an Angular component. The SCORM launcher HTML file has a few JS methods that we need to access from the Angular component. When trying to access those JS methods from Angular, we get a cross-origin error.
One option is to place all the SCORM packages under the App Service root folder so that both the Angular application and the SCORM packages are under the same domain. But since these SCORM packages run into hundreds of GB, it is better to keep them in storage rather than in the root folder.
So is there any way to mount a Blob container as a drive in App Service, so that accessing the JS methods from Angular does not result in a cross-origin error?
Note: We don't use container deployment; we deploy directly to App Service.

As I already mentioned in the comment, it is not possible to mount Azure Blob storage in an App Service of kind: Windows; you can only use Azure Files, as noted in the limitations section of the Microsoft documentation. If it were of kind: Linux, then you could mount both Azure Blobs and Azure Files.
I have already tested exactly this, creating a Windows App Service and mounting an Azure Blob container using Terraform, in this SO thread, and as you can see from the output there, it errors out.
So, as a solution, you can create an Azure File Share in the storage account and mount it in the App Service.
You can refer to the documents below:
Create Fileshare
Mount Fileshare in Windows App Service
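For reference, here is a minimal Azure CLI sketch of that approach; the resource group, app, storage account, share name and mount path below are placeholders for illustration, so adjust them to your environment (and check the linked document for the mount-path format allowed on Windows apps):
# Create the file share in the storage account (names are assumptions)
az storage share-rm create --resource-group my-rg --storage-account mystorageacct --name scorm-share
# Mount the share into the App Service; files in the share then appear under the mount path in the app's file system
az webapp config storage-account add --resource-group my-rg --name my-app --custom-id scorm --storage-type AzureFiles --account-name mystorageacct --share-name scorm-share --access-key "<storage-account-key>" --mount-path "/mounts/scorm"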

Related

Access VM Shared Directory from Linux App Service

We have a new ASP.NET Core web application running on Azure as an App Service.
For backward compatibility, we have a bunch of files (from the old version of the application) stored on a Windows VM that also runs on Azure. Those files must stay there!
We need to access them from the Linux App Service as files and directories, just as they are.
We wanted to use a File Share, but because of the App Service sandbox, it is not possible.
Any help?
As of now, you have the option of mounting Azure Storage in a Linux App Service.
https://learn.microsoft.com/en-us/azure/app-service/configure-connect-to-azure-storage?tabs=portal&pivots=container-linux
You can consider moving your filesystem from the Azure VM to Azure Storage and then mounting that storage in the Linux App Service.
The article above contains a video with every step of the process.
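A rough sketch of that workflow with AzCopy and the Azure CLI follows; the storage account, share, paths and app name are placeholders, and the SAS token is assumed to grant write access to the share:
# On the VM: copy the existing files into an Azure Files share
azcopy copy "D:\legacy-files" "https://mystorageacct.file.core.windows.net/legacy-share?<SAS-token>" --recursive
# Mount that share into the Linux App Service at an example path
az webapp config storage-account add --resource-group my-rg --name my-linux-app --custom-id legacyfiles --storage-type AzureFiles --account-name mystorageacct --share-name legacy-share --access-key "<storage-account-key>" --mount-path /mnt/legacy-files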

How to install Function App Bindings on your own Docker Image

I'm trying to set up an Azure Function App with my own Docker image (as per https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image?tabs=nodejs).
But I can't figure out how to install an extension (e.g. CosmosDBTrigger, as per https://learn.microsoft.com/fr-fr/azure/azure-functions/install-update-binding-extensions-manual).
Is it possible? Thanks for your help.
If you want to add it to your project in Visual Studio, use the Package Manager Console:
Install-Package Microsoft.Azure.WebJobs.Extensions.CosmosDB -Version 3.0.4
Ensure your .csproj file has this package reference in it:
<PackageReference Include="Microsoft.Azure.WebJobs.Extensions.CosmosDB" Version="3.0.4" />
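If you are not working inside Visual Studio, a command-line equivalent looks roughly like this (the project file name is a placeholder; for non-.NET function projects, Azure Functions Core Tools provides an analogous command):
# Add the Cosmos DB binding extension to a .NET function project
dotnet add MyFunctionApp.csproj package Microsoft.Azure.WebJobs.Extensions.CosmosDB --version 3.0.4
# For non-.NET projects, install the extension via Azure Functions Core Tools
func extensions install --package Microsoft.Azure.WebJobs.Extensions.CosmosDB --version 3.0.4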
If you need to add it manually, you can do so through Kudu, which you access via https://[your-func-hostname].scm.azurewebsites.net. Full instructions here.
Adding it manually to a deployed Docker container can only be achieved when using persistent storage.
You can use an app setting called WEBSITES_ENABLE_APP_SERVICE_STORAGE to control whether or not the /home directory of your app is mapped to Azure Storage. If you need files to be persisted in scale operations or across restarts, you should add this app setting and set it to "true". If you don't require file persistence, you can set this app setting to "false".
The absence of this app setting will result in the setting being "true". In other words, if this app setting does not exist in your app, you will see the /home directory mapped to Azure Storage. The app setting will be missing if you created your app while Web App for Containers was in public preview or if someone has deleted the app setting.
Keep in mind that if you enable App Service Storage, when an Azure Storage changeover occurs (which does happen periodically), your site will restart when the storage volume changes.
Note: If App Service Storage is not enabled, any files written into the /home folder will not be persisted across instances (in the case of a scale out) or across restarts.
Even if storage persistence is disabled, the /home directory will be mapped to Azure Storage in the Kudu (Advanced Tools) container. That way, the /home/LogFiles directory will persist between restarts and scale out operations in the Kudu container. Therefore, if you need to get Docker logs or other logs, always use the Kudu Bash console instead of using SSH to access your app's container. (See this for more information on how to get the latest Docker logs from Kudu.)
Note: If you set this app setting on an Azure App Service on Linux app using a built-in image, it will have no impact.
Source: https://blogs.msdn.microsoft.com/waws/2017/09/08/things-you-should-know-web-apps-and-linux/#NoStorage
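For completeness, a small sketch of setting that value with the Azure CLI (the resource group and app name are placeholders):
# Map /home to Azure Storage so files written there persist across restarts and scale-out
az webapp config appsettings set --resource-group my-rg --name my-function-app --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true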

How to mount a volume (Azure File Share) to a bitnami-based docker image on Azure (Web App for Container)?

I have the Matomo Docker image from https://github.com/bitnami/bitnami-docker-matomo that I run in a Web App for Containers on Azure with my own Azure Container Registry (ACR).
Also, I have an Azure Storage Account with a File Share available.
What I would like to achieve is to mount persistent storage (a File Share from the Azure Storage Account) to it so I don't lose the Matomo config and installed plugins.
I tried using the Mount Storage (Preview), but I couldn't get it to work.
Name: matomo_data
Storage Type: Azure Files
Mount path: /bitnami
As described in: https://github.com/bitnami/bitnami-docker-matomo#persisting-your-application
This didn't work.
I also tried the setting WEBSITES_ENABLE_APP_SERVICE_STORAGE = true on the Web App for Containers, but it does not seem to do anything either.
I would appreciate any hints here, as otherwise I would have to build a custom Docker image with a custom docker-compose file and push it to the registry, which I would like to avoid.
Thanks a lot in advance for any hints on this!
To mount the Azure File Share to the Web App for Containers: as I understand it, this is not simple persistent storage, it is a share action. See the caution below:
Linking an existing directory in a web app to a storage account will delete the directory contents. If you are migrating files for an existing app, make a backup of your app and its content before you begin.
So, if you want to mount the file share to the web app to persist the storage, you need to upload all the required files to the file share first. The steps to mount the Azure File Share to the web app are here; they are shown for Windows, but it works the same way for Linux.
However, I suggest you use persistent storage by following the steps here instead. That way the persistent storage is created from the beginning and the directory contents are not deleted.
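As a hedged Azure CLI sketch of that mount (the resource group, app, storage account and share names are placeholders; /bitnami is the path the Bitnami image documents for persistence):
# Mount the Azure Files share into the Web App for Containers at /bitnami
az webapp config storage-account add --resource-group my-rg --name my-matomo-app --custom-id matomo_data --storage-type AzureFiles --account-name mystorageacct --share-name matomo-share --access-key "<storage-account-key>" --mount-path /bitnami
# Verify the configured mount
az webapp config storage-account list --resource-group my-rg --name my-matomo-app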

Mapping a virtual directory to mounted file location in Azure file storage

I have mounted a drive to an Azure File storage location.
This mounted drive holds images that need to be referenced by all web apps and App Services. How do I create a virtual directory using the mounted drive, so that I can use the virtual directory to reference the images?
Example: \\{filestoragename}.blob.core.windows.net\images\ mounted as drive Z:
How do I create a virtual directory called "images" pointing to Z: in my web application, so that an image can be referenced as www.domainname.com/images/demo.jpg?
The Web App has a "Virtual applications and directories" section, but it throws an error if I try to use Z: as the physical path.
I see; it seems you are using a code-publish web app (a common Windows Web App) instead of a Linux Web App or a Windows Container Web App (Docker/container publish).
If you use Linux Web Apps or Windows Container Web Apps, you can mount Azure Storage resources there directly:
However, if you are using a common Windows Web App, the configuration menu looks like the one below:
Common Windows Web Apps cannot mount Azure Storage resources directly.
The only way to access Azure Storage resources there is to use the Azure Storage REST API.
One approach we used is to log in to the virtual machine where we host the web application, open IIS, and create a virtual directory that refers to the Blob storage location.
Another approach is the following:
We decided to configure an Azure custom domain for accessing blob data in the Azure storage account, like www.media.domainname.com.
This approach lets us reference static images placed in a single location.
There is no need to create a virtual directory for each web site.
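A minimal sketch of the custom-domain approach with the Azure CLI; the resource group and storage account are placeholders, and a CNAME record pointing the domain at the blob endpoint must be in place first:
# CNAME: www.media.domainname.com -> mystorageacct.blob.core.windows.net
# Register the custom domain on the storage account
az storage account update --resource-group my-rg --name mystorageacct --custom-domain www.media.domainname.com
# Images in the "images" container are then reachable as http://www.media.domainname.com/images/demo.jpg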

Azure File Storage - IIS - ASP.net application

I have a legacy ASP.NET application (e.g. www.mycompany.com). There are around 10 folders inside that application, and one of them holds a lot of images (reads/writes), around 3 TB (e.g. www.mycompany.com/images/1.jpg).
We are migrating this application to an Azure VM. What we are trying to do is keep the other 9 directories of the application on the VM disk and move only the images folder to Azure Storage.
So we created an Azure file share and a local account with the same credentials as the Azure Storage account, put the local account in the IIS_USR group, and ran the web application under this user.
We created a virtual directory called "images" inside the web application and linked it to "\\XXXX.file.core.windows.net\images".
The problem I am facing now is that we are able to read the files and show them in the web browser, but we are unable to upload a new image. When trying to upload an image from the web browser (through the web application), it actually creates a folder called "images", because the code behind uses Server.MapPath.
Is there any alternative implementation that does not require a code change?
We ended up creating a symbolic link for the images folder that points to Azure Storage. We created a local VM user with the same credentials as the Azure Storage account and ran IIS with that local user.
Everything worked fine.
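A rough sketch of that setup, run in an elevated command prompt on the VM; the storage account, key and site path are placeholders:
rem Store credentials for the Azure Files endpoint so the share stays reachable after reboots
cmdkey /add:mystorageacct.file.core.windows.net /user:AZURE\mystorageacct /pass:<storage-account-key>
rem Remove or rename the existing local images folder first, then create the symbolic link to the file share
mklink /D "C:\inetpub\wwwroot\mycompany\images" "\\mystorageacct.file.core.windows.net\images"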
