How to reduce Azure web app temp file usage

I have a couple of small web apps hosted in Azure. I am using a shared app service, not VMs. Recently Azure has started showing warnings that I need to reduce my app's usage of temporary files on workers.
The link from the message doesn't provide anything useful for resolving this.
After restarting the app, the problem went away. It seems the temporary files were cleared by the restart.
I am not sure what generated 179 GB of temporary files or how I can reduce this. What should I look for? I am not explicitly storing anything in temporary files in my code, and data is stored in the database, so I'm not sure what to check.

The temp file usage is probably caused by one of your apps that keeps creating temp files without cleaning them up. Restarting clears everything, but usage will likely grow back over time if the app keeps doing the same thing.
There is no single solution to fix this. You'll need to figure out what causes it in your app logic and change the app to better clean up after itself.
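One way to at least see what is accumulating is to inspect the worker's temp area from the Kudu console. A minimal sketch, assuming a Linux App Service (the path and the Kudu URL pattern are the usual defaults):

# Open the Kudu SSH console at https://<yourapp>.scm.azurewebsites.net and run:
du -ah /tmp 2>/dev/null | sort -rh | head -20   # largest files and folders under /tmp
# On Windows workers the local temp area is typically D:\local\Temp; from the Kudu
# CMD console, try: dir /s D:\local\Temp

The file names you find there (upload buffers, image-processing scratch files, compiled views, and so on) usually point straight at the library or code path that is creating them.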

Related

How to speed up slow Azure App Service Zip Deployment?

I am deploying a Nuxt application with AppVeyor via Azure App Service Zip Deploy, but the full deployment process is incredibly slow (around 30+ minutes).
It seems that the whole build process goes as it should: zipping the files takes around a minute or two (106 MB), and the file is pushed to the app service within a reasonable amount of time. However, the incredible wait is at the "Site Under Construction" white-page stage, which takes down the whole website in the meantime.
Does anyone have any tips to speed up this process besides upgrading the App Service plan?
Is there any way to avoid the blank Site Under Construction page? For example, is there a way to unzip to another folder and move the files after everything is done, so I would get minimal downtime on the website?
You can add the WEBSITE_RUN_FROM_PACKAGE=1 app setting.
This option deploys your app service with a read-only file system (the wwwroot folder becomes read-only; other folders remain available for write operations).
Take into account that if you set this variable from the CI/CD tool side, it probably won't apply the first time, because app settings are passed after the zip deployment.
And regarding the second question: I guess it could be fixed by another app setting, SCM_CREATE_APP_OFFLINE=0 (it should update your app without bringing it offline).
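Both settings can be applied once, ahead of the pipeline's first zip deploy, with the Azure CLI. A minimal sketch; my-rg and my-nuxt-app are placeholder names:

# Set the app settings from your own machine so they are already in place
# before the CI/CD tool runs its first zip deployment.
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-nuxt-app \
  --settings WEBSITE_RUN_FROM_PACKAGE=1 SCM_CREATE_APP_OFFLINE=0

Doing it manually once avoids the chicken-and-egg problem mentioned above, where the CI/CD tool only pushes app settings after the zip deployment has already run.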

Save Logs on Heroku with Node

I'm trying to find a way to store logs so that they can be seen on my website.
I have a website hosted in Heroku, where I use a package like Winston to save logs to a .log file. The problem occurs when using this system in Heroku, as when the dyno restarts every day, the log file gets deleted and a brand new .log file is created.
What would be the best way to store all these logs without them being lost on a dyno restart?
P.S.: I don't monitor the logs; I just want a simple way of storing them so people can view them on my website. Right now this is done by reading the .log file.
One interesting option could be using Papertrail: there is a free plan, and with it you get a REST API to query the logs (you can then customise what users see/download).
Papertrail has Heroku integration, so pushing the logs from your application should be pretty simple. You can then query/export what you need, implementing access via the REST API.
Heroku also has a Papertrail add-on, which I think is the same concept as above but running on Heroku's cloud.
Obviously the free plan has short data retention; you will need to see if this works for you.
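To illustrate the REST API idea, something along these lines should work; this is a sketch against Papertrail's events search endpoint, and the token is a placeholder you generate in your Papertrail account:

# Fetch recent log events matching "error" as JSON; your site's backend can
# render the result instead of reading a .log file from the dyno's disk.
curl -s -H "X-Papertrail-Token: $PAPERTRAIL_API_TOKEN" \
  "https://papertrailapp.com/api/v1/events/search.json?q=error"

Since the logs then live in Papertrail rather than on the dyno's ephemeral filesystem, nothing is lost when the dyno restarts.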

Azure App Services Web App not registering update

I have an Azure App Service app that I'm trying to get deployed.
Today I ran into an issue where .NET informed me (via the yellow screen of death when I browse to the URL of my app) that I had a missing DLL (for the purposes of this question I don't think it really matters).
I used FileZilla to publish my changes in an attempt to do a manual deployment first and then work my way to automate it.
After many attempts to fix it, I realized that the error message never changed. I tried something more drastic and renamed my bin folder to something completely different, and the exact same error message still appeared.
I've stopped the service, restarted it, and as mentioned, renamed folders, etc. and still the exact same error message persisted.
I also decided to open up the Azure Portal Console for my App Service app to browse around, and to my amazement, none of my changes were reflected at all. FTP shows one thing and the Console shows another.
Would anyone have any idea as to why this is happening?
I eventually got it to work and I will share what I tried.
I deleted the web app and created it again (I found this to be important the first time around). This was quite time-consuming and did help, but it wasn't long before the same problem happened again.
Then I finally found a solution that seems to give me consistent results:
I kept editing the Web.config, which seems to force a recompile and clear some sort of cache. So each time the web app stopped updating, I would make a slight change to the Web.config and upload it via FTP, and the app would finally update.
If anyone has any more details on this, it would be greatly appreciated.
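For anyone who wants to script that workaround, a rough sketch; the host and credentials are placeholders you would take from the app's publish profile, and this is untested against every FTPS setup:

# Append a timestamped XML comment after the root element (still valid XML) so
# ASP.NET sees a modified Web.config and recycles/recompiles the app.
printf '<!-- deploy %s -->\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> Web.config
# Re-upload over explicit FTPS with the publish-profile credentials.
curl --ssl-reqd -T Web.config "ftp://$FTP_HOST/site/wwwroot/" --user "$FTP_USER:$FTP_PASS"

Changing Web.config triggers an ASP.NET app domain restart, which is presumably why this clears whatever stale state was being served.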

Deployment race condition causing CDN to cache old or broken files

Our current deploy process goes something like this:
Use grunt to create production assets.
Create a datestamp and point files at our CDN (e.g. /scripts/20140324142354/app.min.js).
Sidenote: I've heard this process called "versioning" before but I'm not sure if it's the proper term.
Commit build to github.
Run git pull on the web servers to retrieve the new code from github.
This is a node.js site and we are using forever -w to watch for file changes and update the site accordingly.
We have a route setup in our app to serve the latest version of the app via /scripts/*/app.min.js.
The reason we version like this is because our CDN is set to cache JavaScript files indefinitely and this purposely creates a cache miss so that the code is updated on the CDN (and also in our users' browsers).
This works fine most of the time. But where it breaks down is if one of the servers lags a bit in checking out the new code.
Sometimes a client hits the page while a deploy is in progress and tries to retrieve the new JavaScript code from the CDN. The CDN tries to retrieve it but hits a server that isn't finished checking out the new code yet and caches an old or partially downloaded file causing all sorts of problems.
This problem is exacerbated by the fact that our CDN has many edge locations and so the problem isn't always immediately visible to us from our office. Some edge locations may have pulled down old/bad code while others may have pulled down new/good code.
Is there a better way to do these deployments that will avoid this issue?
As a general rule of thumb:
Don't do live upgrades. (unless the language supports it, but even then think twice)
Pulling code using git pull and then waiting for the app to notice file changes sounds a lot like the 90s: uploading PHP files to an Apache web server over FTP (or SFTP if you were cool) and waiting for Apache to notice they were updated. It can't happen atomically, so of course there is a race condition. Some users WILL get a half-built, broken site.
I recommend only upgrading your live and running application while no one is using it. Hopefully you have a pool of servers behind a load balancer of some sort, which will allow you to remove them one at a time and upgrade them.
This will mean that users will be able to use both the old and the new site at the same time depending on how and when they access it, but that is much better than not being able to access it at all.
Ideally you would be able to spin up copies of each of the web servers that you have running with the new version of the site. Check that the new version does work, and then atomically update the load balancer so that everyone gets bumped to the new site at the same time. And only once everything is verified to be working perfectly the old machines are shut down and decommissioned, or reused.
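A rough sketch of the remove-one-at-a-time pattern, assuming a health-check file that the load balancer polls; every host name, path, and service name here is made up:

# For each server: drain it from the load balancer, upgrade it, put it back.
for host in web1 web2 web3; do
  ssh "$host" 'touch /var/www/maintenance.flag'   # health check fails; LB drains the node
  sleep 30                                        # let in-flight requests finish
  ssh "$host" 'cd /var/www/app && git pull && npm install --production'
  ssh "$host" 'systemctl restart myapp'           # restart the node.js service
  ssh "$host" 'rm /var/www/maintenance.flag'      # health check passes; LB re-adds it
done

Because each node is out of rotation while it updates, no client (and no CDN edge) can be served a half-checked-out tree from it.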
Step 4 in your procedure should be something like:
timestamp=$(date +%Y%m%d%H%M%S)
git archive --remote="$yourgithubrepo" --prefix="$timestamp/" HEAD | tar -xf -
stop-server                     # however you stop your node process
ln -sfn "$timestamp" current    # -n replaces the symlink itself instead of writing inside it
start-server                    # however you start it again
Your server would use the current directory (well, a symlink) at all times. No matter how long the deploy takes, your application is in a consistent state.
I'll go ahead and post our far-from-ideal monkey-patch that we're using right now.
We deploy once, which may or may not go as planned. Once we're sure the code is deployed on all the servers, we do another build where the only thing that changes is the version number.
Then we deploy again server by server.
The race condition still exists but because the application code between the two versions is the same this masks the issue since no matter which server the CDN hits it gets the "latest" code.

IIS Shared config - applicationHost.config Error: Cannot write configuration file due to insufficient permissions

I've set up a UNC share for IIS shared config using a specific AD service account and set it to FULL CONTROL. I've also exported the config from one IIS server and set up an additional IIS server to point to the share. When I edit the applicationHost.config on the UNC share manually and remove an application pool, for example, I can see the entry removed on both IIS servers.
So I know:
1) I can export to the share with the specific service account
2) Both IIS servers can read the config when I edit manually
3) However, when I remove an app pool from one of the IIS servers through IIS Manager, I get the above error.
I've used the Process Monitor utility to see what account is being used to write to the config, and it seems to be my own AD user account rather than the shared service account. I know IIS Manager has my username (e.g. ROOT\MYNAME) logged on, but I wouldn't have thought it would use that account to write changes to the shared config. Surely it would use the service account?
Does anyone know how to prevent this error? Why do the shared config and its associated service account not come into play when making changes on one of the servers?
So, IMHO, this error is a red herring. I was publishing to a server and got a message saying I was out of space. So I logged in and realized there was a bit of cruft: extra apps published in IIS that we didn't need. I right-clicked and tried to remove one, and got the same error as you.
Having made some manual changes to applicationHost.config, I thought it "might be me", but it seemed very odd that editing this file would cause such a thing. However, I had recently learned that Windows does some funky 32- vs 64-bit machinations with this file (google it).
Deciding I had better things to do, I asked our IT to add space to the VM, and guess what? I am now able to remove these apps. My guess is that I was at the end of the line on disk space, and the backend management of these special files was not completing, throwing this not-so-helpful exception.
I'm not 100% sure about this. For full disclosure, I will add that updates had been applied recently, but I'm pretty confident that this is a possible solution.
