I'm deploying files to an Azure Web App via Octopus Deploy, and want to clean out the Azure Web App directories before deploying new versions. This way I can be sure I'm deploying each app version onto a clean slate. I don't want to entirely delete and re-create the app, because there are some app settings that need to carry over from previous deployments.
Kudu documentation lists the web app file structure here (all under D:\home), but I'm wondering if there's any possibility of other files outside of the D:\home directory that could affect app performance.
I tried running Get-ChildItem D:\ -Recurse in the Kudu PowerShell console before and after deployment to compare results, and found 268 new files (not counting those in wwwroot) after deployment, all within these directories:
D:\Windows\Temp
D:\Windows\Logs
D:\Windows\security\logs
D:\Users\\AppData\Roaming\Microsoft\SystemCertificates
D:\home\LogFiles
D:\home\Microsoft\Windows\PowerShell
D:\home\data\aspnet\CompilationSnapshots
D:\local\VirtualDirectory0\LogFiles
D:\local\VirtualDirectory0\data
D:\local\VirtualDirectory0\site\wwwroot
D:\local\VirtualDirectory0\Microsoft\Windows\PowerShell
D:\local\Config\
D:\local\Temporary ASP.NET Files\msdeploy
So which files do I need to clear or reset in order to ensure that new versions of the web app run as intended? Is it sufficient to clear out the wwwroot directory?
The only writable folders are d:\home and d:\local. But d:\local is temporary, and gets wiped clean on app restart. So effectively, you should only be concerned about d:\home when it comes to deployment.
Within that, wwwroot is typically the most important, though if you set up virtual directories and applications, you can end up with other folders as part of your app.
See also https://github.com/projectkudu/kudu/wiki/Understanding-the-Azure-App-Service-file-system which has related info.
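To act on this, a pre-deployment step can empty wwwroot while leaving the folder itself (and the rest of d:\home, including app settings) intact. A minimal sketch in shell, using a throwaway local directory as a stand-in for D:\home\site\wwwroot; on the real app you would run the equivalent from the Kudu console or an Octopus script step:

```shell
#!/bin/sh
# Stand-in for D:\home\site\wwwroot (hypothetical local path, for illustration only).
WWWROOT="$(mktemp -d)/wwwroot"
mkdir -p "$WWWROOT/bin"
touch "$WWWROOT/web.config" "$WWWROOT/bin/app.dll"

# Delete everything inside wwwroot, but keep the directory itself in place.
find "$WWWROOT" -mindepth 1 -delete

# The folder still exists and is now empty, ready for a clean deployment.
ls -A "$WWWROOT"
```

Deleting only the children (rather than the wwwroot folder itself) avoids touching the virtual-directory mapping and permissions on the folder.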
I have created a new Azure Website (Code deployment) and noticed via the Kudu site that the files are now on C:\ instead of D:\, e.g. this is from the Kudu site:
Site folder: C:\home
Temp folder: C:\local\Temp\
A week ago I thought that all my stuff was located on D:\home etc.
I'm asking because we deploy a WebJob via the Zip-Deployment option and need to set an absolute path to our config file, which is now on the "wrong" drive.
Also, the Kudu documentation uses D: for all paths.
Has this changed recently? It's not a major problem, but I want to understand why.
Even if the Kudu console takes you to C:\home, D:\home is still accessible and points to the same place, but I recommend always using the environment variable %HOME% and avoiding absolute paths.
For example, if you plan to use Windows Containers on App Service in the future, the %HOME% env var also points to C:\home.
Additionally, if you use App Service on Azure Stack, for example, the %HOME% env var will also point to C:\home.
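For the WebJob config path in the question, that means deriving the absolute path from the variable instead of baking in a drive letter. A sketch in POSIX shell with HOME defaulted for local runs; on Windows the same idea is %HOME%\... in a batch script, and the WebJob-relative path below is a placeholder:

```shell
#!/bin/sh
# Build the config path from the HOME env var rather than hard-coding C:\ or D:\.
# The fallback value and the WebJob-relative path are placeholders for illustration.
HOME_DIR="${HOME:-/home}"
CONFIG_PATH="$HOME_DIR/site/wwwroot/app_data/jobs/triggered/myjob/settings.config"
echo "$CONFIG_PATH"
```

If the platform ever moves the home share again, code written this way keeps working without a redeploy.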
Occasionally, there are times when a system needs to undergo maintenance for a short time. Standard Web Apps handle this by redirecting all traffic to an app_offline.htm if the file exists in the root directory (wwwroot). What is the equivalent for a Linux Web App for Containers instance?
I tried using Kudu's Bash terminal by echoing the minimum html contents into an app_offline.htm but it isn't working.
One thing I was looking into would be having a specific container image that is for maintenance, but that doesn't seem very elegant.
Eventually, I would like to be able to automate this via Azure DevOps.
Are you able to create an app setting with the name SCM_CREATE_APP_OFFLINE and a value of 1, to see if this allows the creation of an app_offline.htm file?
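If the setting alone doesn't do it, the maintenance page can also be generated in a pipeline step and pushed into the site root. A sketch of generating a minimal app_offline.htm (the markup is placeholder content); once created, it can be uploaded to /home/site/wwwroot with the Kudu VFS API (PUT https://<app>.scm.azurewebsites.net/api/vfs/site/wwwroot/app_offline.htm):

```shell
#!/bin/sh
# Generate a minimal maintenance page; the markup is placeholder content.
cat > app_offline.htm <<'EOF'
<!DOCTYPE html>
<html>
  <body><h1>Down for maintenance</h1></body>
</html>
EOF
grep -c maintenance app_offline.htm   # prints 1
```

The app setting itself can be applied non-interactively from Azure DevOps with the Azure CLI, e.g. az webapp config appsettings set --resource-group <rg> --name <app> --settings SCM_CREATE_APP_OFFLINE=1 (resource names are placeholders).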
My Azure web app (App Service) writes a log file mywebapp.log to the d:\LogFiles directory of the VM that hosts the website. When the log file gets to a certain size, I rename it to mywebapp1.log, mywebapp2.log, and so on, so that a new log file is created. (I do this manually: stop the website, rename the file, and restart the site.)
One day I inspected the directory through the Kudu (SCM) portal and saw just a lone mywebapp.log that was much larger than normal. The file included all of the individual logs that previously existed (including the contents of mywebapp1.log, mywebapp2.log, and so on).
My app has no logic that combines the files. Is there an Azure process that does this, or did I do it in my sleep and have no recollection?
There really is no logic in Azure that would do this. Azure knows nothing about your log files, and would not be doing anything with them, especially something as complex as combining several existing files into one.
So I'll go with the sleep theory on this one :)
The problem was that I had swapped deployment slots at some point and failed to realize that the d:\LogFiles directory (the entire d: drive I believe) travels with the slot. The missing log files were sitting in my staging slot's LogFiles directory.
I have a web job that uses an exe that works best when it sits in a known directory where it can be located. The problem is that I don't know how to get this exe published along with the web job. I tried putting it in a resources folder in the webjob project and copying it to the output directory, but that didn't upload it. The only other option I can think of is uploading the files to a non-temporary directory on the web site, but that leaks the encapsulation of the web job.
Any thoughts?
When you use Visual Studio to publish a webjob, it publishes all of its dependencies as well, i.e., VS pushes all the dependencies available under the bin folder. So add a reference to the dependent project, and VS will take care of publishing that dependency as well.
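If the exe isn't a project reference but a loose file, another option (a sketch; the folder and file name are placeholders) is to include it as an item that gets copied to the output folder, so the same publish step picks it up from bin\:

```xml
<!-- In the WebJob project's .csproj: copy the bundled exe to the output folder on build. -->
<ItemGroup>
  <None Include="Resources\mytool.exe">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```

PreserveNewest copies the file only when it has changed, which keeps incremental builds fast.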
In my code (which has a worker role) I need to specify a path to a directory (a third-party library requires it). Locally, I've included the folder in the project and just give the full path to it. However, after deployment I of course need a new path. How do I confirm that the whole folder has been deployed, and how do I determine its new path?
Edit:
I added folder to the role node in visual studio and accessed it like this: Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), "my_folder");
Will this directory be used for reading and writing? If yes, you should use a LocalStorage resource. https://azure.microsoft.com/en-us/documentation/articles/cloud-services-configure-local-storage-resources/ shows how to use this.
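For reference, a LocalStorage resource is declared in the service definition and then looked up from code; a sketch, with the resource name and size as placeholders:

```xml
<!-- ServiceDefinition.csdef, inside the <WorkerRole> element: declare writable local storage. -->
<LocalResources>
  <LocalStorage name="ThirdPartyData" sizeInMB="1024" cleanOnRoleRecycle="false" />
</LocalResources>
```

At runtime, RoleEnvironment.GetLocalResource("ThirdPartyData").RootPath returns the writable path for that resource.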
If the directory is only for reading (i.e., you have binaries or config files there), then you can use the %RoleRoot% environment variable to identify the path where your package was deployed, then just append whatever folder you referenced in your project (i.e., %RoleRoot%\Myfiles).
I'd take a slightly different approach. Place the 3rd party package into Windows Azure blob storage, then during role startup, you can download/extract it and place the files into the available Local storage (giving it whatever permissions the app needs). Then leverage that location from your application via the same local storage configuration entry.
This should help you reduce the size of your deployment package as well as give you the ability to update the 3rd party components without completely redeploying your solution. And by leveraging it on startup, you can guarantee that the files will be there in case the role instance gets torn down and rebuilt.
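The startup step described above can be sketched as follows, assuming the 3rd-party package is stored as a tarball in blob storage. PACKAGE_URL, the file names, and the local-storage path are all placeholders, and the sketch falls back to a locally built archive so it can run without Azure:

```shell
#!/bin/sh
set -e
# Role-startup sketch: fetch the 3rd-party package and unpack it into local storage.
WORK="$(mktemp -d)"
if [ -n "${PACKAGE_URL:-}" ]; then
  # Real case: download from blob storage (the URL would carry a SAS token).
  curl -fsSL "$PACKAGE_URL" -o "$WORK/thirdparty.tar.gz"
else
  # Local stand-in for the blob so the sketch runs end to end.
  mkdir -p "$WORK/pkg"
  echo "sample" > "$WORK/pkg/tool.cfg"
  tar -czf "$WORK/thirdparty.tar.gz" -C "$WORK/pkg" tool.cfg
fi

# "Local storage" stand-in; on a real role this path comes from the LocalStorage resource.
LOCAL_DIR="$WORK/local-storage"
mkdir -p "$LOCAL_DIR"
tar -xzf "$WORK/thirdparty.tar.gz" -C "$LOCAL_DIR"
ls "$LOCAL_DIR"
```

Because this runs on every startup, a rebuilt role instance repopulates its local storage automatically, which is exactly the guarantee described above.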