Quickly setting up Kudu on IIS for Horizontal Scaling

I have been playing around with Kudu on an IIS development server and succeeded in making it deploy a hello world site etc.
But I was wondering whether there are any resources on how to deploy Kudu in larger environments, so that you can quickly add new (virtual) server nodes to balance load.
Is there some approach to managing multiple Kudu deployments from a centralized location? I know it has a REST API and we could probably use that, but that would require some development, so I was just looking to see whether an existing solution is already out there.
So far I have either been searching for the wrong thing on Google, or there isn't much out there that fits what I need.
Does anyone have any experience in running it in larger environments with many servers?
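To make the "some development" part concrete, here is a minimal sketch of driving several Kudu instances from one place via the REST API. It assumes each node's Kudu service site is reachable on port 8080 and protected with basic-auth deployment credentials; the hostnames, port, and credentials are placeholders to adapt to your setup:

    # Trigger a deployment on each node from whatever repository its Kudu
    # instance is configured to pull from.
    for NODE in web01 web02 web03; do
      curl -u "$DEPLOY_USER:$DEPLOY_PASS" -X POST "http://$NODE:8080/deploy"
    done

    # Check deployment status on a node afterwards.
    curl -u "$DEPLOY_USER:$DEPLOY_PASS" "http://web01:8080/api/deployments"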

Related

Use Google App Engine or Google Cloud Compute VM to Test Run My App?

I'm moving my Three.js app and its customized Node.js environment, which I've been running on my local machine, to Google Cloud. I want to test things out there, and hopefully soon get some early alpha testing going with other people.
I'm not sure which is the wiser way to go: upload the repo I've been running locally as-is onto a VM, which users would then access via the VM's external IP until I find a good name for this app, or merge my local Node.js environment with what's available via Google App Engine and run it on GAE.
Issues I'm running into with the Linux VM approach: I'm not sure how to do on the VM the equivalent of what I've been doing locally. In Windows PowerShell I cd into the app directory and then enter node index.js. I'm assuming that with this method of deployment I can have the app running as soon as a browser hits the external IP. I should mention too that the app will allow users to save content and upload images, and eventually 3D models as well as JSON datasets.
Issues I'm running into with the App Engine approach: it looks like I only have access to a Linux-based command line and have to install all the Node.js modules manually. Meanwhile I have a bunch of files to upload, both the server-side Node files and all the front-end stuff. I don't see where to upload those files, and ultimately what I'd like is access to a visual, editable file-tree interface, as I have in Windows and FileZilla, so I can swap files in and out, etc. Alternatively, I suppose I could import a repo from GitHub? GitHub would be fine as long as I can visually see what's happening. Is there a visual interface for the file structure available in GAE somewhere? Am I missing something?
I went through the GAE "Hello World" tutorial and that worked fine, but was left scratching my head afterward regarding how to actually see and edit the guts of the tutorial app, or even where to look for the files.
So first off, I want to determine which is the better approach, and then, if possible, figure out how to make getting my app up there and running a more visual, user-friendly experience.
Thanks.
There are many things to consider when choosing how to run an app, but my instinct for your use case is to simply use a VM on GCE. The most compelling reason for this is that it's the most similar thing to what you have now. You can SSH into the machine and run nohup node index.js & (or node index.js inside tmux/screen if you prefer) and it will start the app and not stop it when you log out of SSH. You can use SCP / SFTP with whatever GUI client you want to upload files. You don't have to learn anything new! If you wanted to, you could even use a Windows VM (although I think you have to pay a little more than for a comparable Linux VM due to the licensing fees).
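If it helps to see it end to end, the workflow described above comes down to a handful of commands. The VM name, directory, and the use of gcloud for the copy step are placeholders/assumptions; any SFTP GUI client works just as well for the upload:

    # Copy the app from your machine to the VM (an SFTP GUI works too).
    gcloud compute scp --recurse ./my-app my-vm:~/my-app

    # SSH into the VM, install dependencies, and start the app so it
    # keeps running after you log out.
    gcloud compute ssh my-vm
    cd ~/my-app && npm install
    nohup node index.js &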
That said, the other way is arguably more "correct" by modern development standards, but it will involve a lot more learning that will prevent you from getting your app running somewhere other than your laptop in the short term:
First, you'll need to learn about Docker and stateless containers, which is basically what your app runs inside of on AppEngine.
Next, you'll need to learn how to hook up a separate stateful service (database, file server, ...) to your app's container so you can store your files, etc. in it, and then probably rewrite your app somewhat to use it to store stuff.
Next, you'll probably want some way to automatically deploy this from code instead of manually doing it, which gets you into build systems, package managers, artifact storage, continuous integration systems, and on and on and on.
This latter path is certainly what you should choose for a long-running production service if you work with a big team of developers -- but that doesn't mean that it's necessarily the right path for your project today. If you don't care about scaling up automatically, load balancing between nodes, redundant copies of your app running in different regions in case there's a natural disaster, etc., then go with the easy way for now, and you can learn new ways to improve the service when they're actually needed.

Why is Azure Scale-Out Crashing the Web App?

I'm currently running Umbraco in a web app on Microsoft Azure. Whenever I enable scale-out and the web app starts scaling out, I get the error:
"The process cannot access the file Examine Indexes write.lock because it is being used by another process."
The website then needs to be restarted before it becomes fully functional again. Is there a setting in Umbraco that I'm missing?
Or is it something that happens with the Azure Web Apps auto-scaling feature?
This sounds like an issue with the indexes: your index appears to be getting locked when you scale out. Ideally, if you're running in a load-balanced environment, you should have a single index shared by all instances rather than one per instance. I've used Azure Search in the past and it worked perfectly; swapping out the index isn't too difficult with Umbraco, and there is plenty of information, including good worked examples, available online.
You also shouldn't need to restart the entire site in future; rebuilding the indexes should be enough.
Also, what version of Umbraco are you running? This may be of some help; I encountered some similar issues a few months ago, although they were unrelated to scaling:
https://issues.umbraco.org/issue/U4-10735
Sounds like you need to isolate your index files so they aren't shared across the different instances and don't lock each other out. There are a few ways to do this depending on the version you are running, but in 7.3 I think you update the index file location to include the instance name, e.g. ~/App_Data/TEMP/ExamineIndexes/{machinename}/Internal/.
For more details, see https://our.umbraco.com/documentation/getting-started/setup/server-setup/load-balancing/flexible#if-you-plan-on-using-auto-scaling

Front End Developer workflow for Service Fabric Web Apps

I'm a front end developer about to join a project team working with Service Fabric to build a Web Front End to their microservice driven application.
One of the problems I've been having in my own research is that, when working with local Service Fabric clusters, I have to redeploy my application to test whether something does or doesn't work in my web app. This slows down developer velocity massively, and the process will only take longer and longer as other back-end services are added. I largely work with the web app communicating with an API gateway service (GraphQL.NET).
What I'd like to know is whether there's a way to run a local web application outside of a Service Fabric cluster but still have it communicate with one. This would allow my front-end developer tool chain to remain intact, so I could develop at a much faster pace with incremental building and live-reload tools.
Of course, if anyone's come up with any better solution to the problem, I'd love to hear about it! ;)
We have a JavaScript front end (so this may not be applicable to you), which means there are a ton of front-end library files and so on. This was a nightmare to copy across to the cluster for testing and took forever. There are a couple of ways I've been able to speed things up.
One is keeping all the front-end files in a separate project and using a build step to copy them across into the ASP.NET Core project, so only the bundled/minified files are copied into the cluster when deploying.
Another option is to host these front-end files with a local Node http-server that watches for changes, and keep a static environment file where you can set the IP/hostname of your running local cluster. I use Fiddler to redirect the hostname to the local IP; this way you can use the URLs that you will use in production, which is handy. You'll need to set up CORS though, which wasn't a problem for us.
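In case it's useful, here is roughly what that looks like with the npm http-server package; the paths, port, and the use of live-server for change watching are assumptions to adapt to your setup:

    # Serve the built front-end files locally, with caching disabled (-c-1)
    # and CORS enabled so the browser can call the cluster's API directly.
    npx http-server ./dist -p 8080 --cors -c-1

    # Or, for change watching / live reload, something like live-server:
    npx live-server ./dist --port=8080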
So yes, definitely possible.

Alternatives to Application Insights

I have an existing on-prem/cloud environment in which I am running my enterprise application, and I would like to implement Application Insights to capture telemetry, but I have a few concerns about it. Are there any alternatives to using Application Insights? I have two concerns here:
1) It might not be possible to install software in the production environment.
2) Restarting the IIS server would pull all the sites down for at least a minute or two.
It would be great if someone could suggest alternative ways of leveraging Application Insights. Thanks in advance :)
There are two ways to use Application Insights:
1) Using the SDK, where you add the SDK to your service. At some point you have to deploy the service, so when you deploy, you'd also be deploying Application Insights into that service.
2) Using Status Monitor, which does require restarting IIS. Status Monitor isn't required, but it does let you collect extra, more detailed information that you wouldn't get from the SDK alone.
A lot of people end up doing both: (1) so they can do custom collection of events, traces, etc., and (2) to get detailed dependency calls.
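For what it's worth, the SDK route in (1) is just a NuGet package that ships with your application, so nothing has to be installed on the server itself; the package name below is the one for classic ASP.NET web apps:

    # From the Visual Studio Package Manager Console:
    Install-Package Microsoft.ApplicationInsights.Web
    # The instrumentation key then goes into the ApplicationInsights.config
    # file the package adds to the project; everything deploys with the app,
    # with no separate install or IIS restart on the server.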
But as AlexB suggested, setting things up so that you can swap between deployment slots is one of the best approaches, if it's possible for you, since you can just swap slots without any downtime at all.

Is it possible to NGen dlls for use in Azure Websites?

We are currently using MVC3, .NET 4.5, EF 6.1, MS SQL 2008 (dev), and SQL Azure (test and live). Our application is quite complicated and we are encountering significant warm-up lags, around 30 seconds, after an application pool refresh. We use external auto-ping services to keep the sites warm, which are OK-ish... However, it would be a much better solution to just deploy native images, so that whenever an app pool refreshes for whatever reason, we know the application will load as quickly as possible.
Hence the reason for investigating NGen.
However I am unsure whether this is possible for Azure Websites. Some questions I have:
1) NGen requires admin privileges. As I understand it, I would need admin privileges to install native images to Azure Websites; or can I generate them on a local "same CPU" machine and copy them across?
2) NGen'd assemblies require Full Trust now. I believe this is not an issue with WAWS.
3) Does NGen only install into the native image cache, rather than producing some sort of file that could be copied to a different location?
Thanks in advance.
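For context on points 1 and 3, this is roughly how NGen is invoked on a machine where you do have admin rights; the path assumes 64-bit .NET 4.x and the assembly name is a placeholder:

    # Run from an elevated (administrator) prompt. This compiles the
    # assembly and installs the native image into the machine-wide native
    # image cache; it does not produce a standalone file you can simply
    # copy to another machine.
    C:\Windows\Microsoft.NET\Framework64\v4.0.30319\ngen.exe install MyWebApp.dll

    # List what is currently in the native image cache.
    C:\Windows\Microsoft.NET\Framework64\v4.0.30319\ngen.exe display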
