When creating an Azure Function with a Dedicated (Standard) App Service plan, the file share I'm expecting to get "linked" to the storage account isn't getting linked. The storage account itself does get created correctly, but when I go to the Azure Storage Accounts blade and look at the file storage, Azure doesn't show a file service linked to the storage account. I also don't see any linked file shares in the Windows desktop app, Microsoft Azure Storage Explorer (0.9.6).
When I go to the Azure Function's Advanced Tools (Kudu), I can see the storage account folders "Data", "LogFiles", and "Site", with the wwwroot I'm expecting to find. However, due to certain network restrictions, I can't upload the code through the website, so that option is out.
When creating a Consumption-based plan, everything links up nicely and I can manage the shares in the Azure Storage Explorer app. How can I get my already-created file share linked to my already-created Azure Function so I can manage it in the desktop app and see everything correctly linked in Storage Explorer?
Here's a solution you can refer to; it works on my side.
In the portal, on the Application settings tab, add the two application settings below:
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING: the storage account connection string
WEBSITE_CONTENTSHARE: the file share name
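If the portal isn't convenient, the same two settings can be added programmatically. Below is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-web); the subscription ID, resource group, app name, and setting values are placeholder assumptions, not values from the question.

```python
# Minimal sketch: add WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and
# WEBSITE_CONTENTSHARE to an existing function app.
# All names and values below are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import StringDictionary

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-resource-group"
APP_NAME = "my-function-app"

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Read the current settings first so existing entries are preserved;
# update_application_settings replaces the whole collection.
current = client.web_apps.list_application_settings(RESOURCE_GROUP, APP_NAME)
settings = dict(current.properties or {})
settings["WEBSITE_CONTENTAZUREFILECONNECTIONSTRING"] = "<storage-connection-string>"
settings["WEBSITE_CONTENTSHARE"] = "<file-share-name>"

client.web_apps.update_application_settings(
    RESOURCE_GROUP, APP_NAME, StringDictionary(properties=settings)
)
```

Removing those two keys and calling update_application_settings again deletes the "link" described in the update below.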
And some explanation of why this works:
When creating a function app, the storage account we specify is mainly used to store logs and files like host locks.
Only for function apps created in a Consumption plan does Azure automatically add the two application settings above and use a file share to store the whole function app by default.
As the Azure documentation says, the file share related settings are for the Consumption plan only. It seems this isn't an expected operation for a function app created in an App Service plan, but it works in practice anyway.
Update
For a function app created in an App Service plan, assume its files are stored in place A (somewhere on the server). It works well, and Kudu displays the files stored in A. So far it has nothing to do with a file share.
Then we add the two settings, and assume the file share is B.
The system retains any old files in B and creates an empty function app there. From then on, the system targets and uses the files in B, as long as the "link" (the two settings) exists. In the portal, Kudu, or App Service Editor, we see the files in B, and changes are saved there as well.
And if we delete the "link", everything returns to A. You need to wait a little while for the system to "redirect".
All of this explanation is based on my testing (dozens of times), since it's an unexpected operation with no documentation describing it.
Related
Our platform is UiPath Cloud Orchestrator with DEV/Test/Prod tenants. The Robots are hosted on Azure Windows instances for DEV / Test / Prod.
I created a UiPath app that uploads an Excel file, stores it in a storage bucket, and then starts an unattended process to read the Excel file.
The UiPath app points and binds to the storage bucket and process in the DEV tenant.
I would like to deploy and migrate the UiPath app to point to the Test tenant. It seems that to point to Test, I have to open the app in App Studio, switch the pointers/bindings for the process and storage bucket, replace them, and change or confirm that the UI elements are correct.
Does anyone know a better way to migrate a UiPath app to another tenant?
It does not seem right to have to change the pointers this way. It only allows us to point to one tenant at a time, so it's hard to maintain DEV/Test/Prod instances of the UiPath app without keeping copies of the app for each tenant.
I can export the app (.uiapp file) and import the file across Cloud platforms, but not across tenants without changing the name of the app. The .uiapp file appears to be JSON with the bindings embedded, including specific IDs, etc. Changing the pointers and bindings there would be error-prone as well.
I have looked through the documentation, the UiPath Academy training, and the forums, none of which provide an answer.
Appreciate the insight!
As you're using Azure hosting, I'm assuming you might be using Azure DevOps for your packaging and publishing.
Have a look at the package below:
https://marketplace.visualstudio.com/items?itemName=uipath.vsts-uipath-package
You can set the tenancy level so that the app is packaged for a specific tenant, and it can create any assets, etc.
I have noticed that if an Azure function app, e.g. 'My-Function-App', is deleted using the Azure portal, and a new app with the same name 'My-Function-App' is created in the same resource group using Visual Studio 2017, then all the old functions return in that new function app in read-only mode. Any idea what is happening here?
What @David said is right, and I want to provide some details for you to refer to.
all old functions return to that new function app
For Azure Functions created in a Consumption plan, each function app has its own file share in the storage account you specified. You can see the file share name on the Application settings tab in the portal.
For function apps created in the portal, the file share name is the function app name plus a unique suffix generated by Azure, like myfunctionae23.
For function apps created using VS, the file share name is exactly the same as the function app name, like myfunction.
So if we publish a function app from VS and its name happens to be identical to an existing file share that was used to store functions, those functions will be restored into the new function app.
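To check for such a collision before publishing, you can list the file shares in the storage account. Here is a minimal sketch using the azure-storage-file-share package for Python; the connection string and app name are placeholder assumptions.

```python
# Minimal sketch: warn if a file share already exists with the same name
# as the function app about to be published from VS.
# The connection string and app name are hypothetical placeholders.
from azure.storage.fileshare import ShareServiceClient

CONN_STR = "<storage-connection-string>"
app_name = "My-Function-App"

service = ShareServiceClient.from_connection_string(CONN_STR)
existing = {share.name for share in service.list_shares()}

# Share names are lowercase, so compare case-insensitively.
if app_name.lower() in existing:
    print(f"Share '{app_name.lower()}' already exists; publishing a function "
          "app with this name will pick up its old contents.")
```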
in read-only mode
Azure sets this because the functions were published from VS; making portal-side changes to files developed locally is not recommended.
If you want to solve this with minimal changes, go to the publish panel, click Manage profile settings, check Remove additional files at destination, and publish again.
When a Function App is created, a storage account is also created alongside it in the same resource group. If you delete both (or delete the whole resource group), you will see no traces of previous apps.
My web.config contains multiple entries in "appSettings" (e.g. a Twilio account key). One of these is for the ASP.NET chart control: the configuration part that states where the images the control generates are to be stored.
All of these settings work on my development machine. That is, I can connect to Twilio, and the chart control stores images in memory (as it should, according to the settings).
When I publish the site to my Azure website (using VS), all of the settings work apart from the chart control one. The chart control behaves as if the setting isn't even there (it defaults to c:\TempImageFiles for storage).
I looked into the published version of the web.config, and the setting is there. Only, it's being ignored.
My next attempt was to add the setting using the portal (it's possible to add appSettings for a web app in the portal). I copied the exact same setting from web.config into the portal settings. This worked, so there is nothing wrong with the setting itself.
So my question is: why are some settings (at least this one) from web.config ignored when the app runs inside an Azure web app?
You might have an app setting defined in the Web App's configuration with an identical name that overrides the web.config setting. This is typically done so production settings are stored in Azure instead of in web.config.
You can confirm whether this is the case by opening your Web App's blade in the portal and checking the Application settings tab there.
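If you want to check this outside the portal, here is a minimal sketch that lists the Web App's application settings with the Azure SDK for Python (azure-identity and azure-mgmt-web); the subscription, resource group, and app names are placeholder assumptions.

```python
# Minimal sketch: dump the portal-defined app settings for a Web App.
# Any key listed here overrides the same key in web.config at runtime.
# Subscription, resource group, and app names are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")
settings = client.web_apps.list_application_settings("my-resource-group", "my-web-app")

for key, value in (settings.properties or {}).items():
    print(f"{key} = {value}")
```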
Azure Websites / Azure App Service web apps are typical web applications running on top of Azure PaaS infrastructure, so whatever storage is allocated to the service is accessible from the app. But it isn't like the typical C: or D: drive on a regular server, where the app may have complete access. Most of the C: space is allocated for IIS hosting; D:\local is something you can utilize, as the app has complete read and write access there.
Please refer to the Azure App Service sandbox details here.
If you are accessing the path via code, try using the Server.MapPath method to get the path. Options like Path.GetTempPath() will not work.
One point to note: any local storage in Azure PaaS services should be treated as temporary. Whenever the site, service, or role recycles, the storage will be gone and fresh storage will be assigned.
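As one illustration of treating local storage as scratch space, here is a minimal sketch that writes under the platform-provided TEMP directory (which maps under D:\local on Windows App Service); the subfolder and file names are assumptions for illustration.

```python
# Minimal sketch: write a scratch file under the per-instance TEMP area.
# On Azure App Service (Windows), TEMP points under D:\local; treat anything
# written here as disposable, since it disappears when the instance recycles.
import os

scratch_dir = os.path.join(os.environ.get("TEMP", "/tmp"), "myapp-scratch")
os.makedirs(scratch_dir, exist_ok=True)

with open(os.path.join(scratch_dir, "chart-image.png"), "wb") as f:
    f.write(b"...")  # placeholder bytes standing in for generated content
```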
We have an application that we would like to migrate to Azure. There is a requirement in which a virtual temporary folder is created if it doesn't exist, and a dynamic HTML page is generated based on some user input; this was previously done in ASP.NET. How can this be achieved in an Azure web role?
I tried implementing this by adding a folder named "Preview" and generating the HTML file under it. This works fine on the (local) development machine, but in the live environment it throws an exception like:
DirectoryNotFoundException (E:\siteroot\0\preview\previewbasic.htm).
I would like to preview the dynamically created HTML page in the preview folder by entering the URL in the browser as [siteurl].cloudapp.net/preview/preview.htm.
My questions are:
1. Can we create a dynamic folder called "preview" in an Azure web role?
2. Can we generate an HTML file, say preview.htm, under this folder in the web role?
3. Can this file be accessed as [siteurl].cloudapp.net/preview/preview.htm?
I hope the question is somewhat clear now.
Any help would be much appreciated.
The local file system is not advised for web roles because the storage is not guaranteed to be persistent: if the web role is recreated on another VM, the files created on the previous instance will not be available. Therefore, you are better off using Azure Blob Storage; files created in Blob Storage are available to all instances of the web role.
Answers to your questions:
1. Create a container in Azure Blob Storage instead of a folder on the role.
2. Yes, you can generate the HTML and save it to the container you created.
3. You can make the blob's URI public and then access it via your custom URL; you'd have to create a CNAME for the storage account first.
Here is a good resource on how to use Blob Storage from .NET:
http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-blobs/
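For comparison, here is a minimal sketch of the same idea using the azure-storage-blob package for Python (the linked article shows the .NET side); the connection string, container name, and file name are placeholder assumptions.

```python
# Minimal sketch: create a publicly readable "preview" container and upload
# a dynamically generated HTML page so every web role instance can serve it.
# The connection string and names are hypothetical placeholders.
from azure.storage.blob import BlobServiceClient, ContentSettings

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("preview")

if not container.exists():
    # "blob" access makes individual blobs readable without a SAS token.
    container = service.create_container("preview", public_access="blob")

html = "<html><body>generated preview</body></html>"  # built from user input
container.upload_blob(
    "preview.htm",
    html,
    overwrite=True,
    content_settings=ContentSettings(content_type="text/html"),
)
print(container.get_blob_client("preview.htm").url)  # public URL of the page
```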
I am trying to deploy a large web site to Azure as a web role. However, on the Instances tab of the Azure dashboard, it tells me the role suffers an error during startup, causing it to restart over and over again.
Where can I find log files that will tell me what specifically is going wrong? The manage.windowsazure.com site doesn't seem to have any.
First, debug on your dev machine. Make sure you deployed the right .cscfg file, that you don't have any broken connection strings, and that you're referencing the right versions of the DLLs (the same as on Azure's VMs) or copying newer versions to Azure. If those steps fail, read this topic on WindowsAzure.com and the topics in this node on MSDN. The Hello World code sample also has a basic demonstration of diagnostics that should be helpful.
The basics of diagnostics in Windows Azure:
Diagnostics must be manually enabled for each role by importing the Diagnostics module in your ServiceDefinition.csdef file
A storage location needs to be configured for the resulting logs in your ServiceConfiguration.cscfg file, such as the storage emulator or a Windows Azure Storage account. Depending on the type of log, the data is stored in either blobs or tables
You can configure diagnostics collection either programmatically or with a file that is read when your role starts and can be updated on the fly
You can set up and control how often diagnostics data is transferred to your storage account (important because transactions, transfer, and storage cost money), as well as which performance counters or other metrics you need
There is a series of four blog posts at http://blogs.msdn.com/b/kwill/archive/2013/08/09/windows-azure-paas-compute-diagnostics-data.aspx that will walk you through, step by step, how to troubleshoot a role startup failure, including log file locations, etc.