Kentico - How to know if/when the whole website was down

Is there a way to know when/if the whole website was down, and even better, the reason it was down? I don't have access to the servers; I only have access to the Kentico administration interface with global admin privileges. Thanks!

Down as in the user trying to visit the site is getting a 503 Error?
If it goes down because of an error in Kentico, you would be able to check the event log, but if it is a server error you would need to check the server event logs.
There are a bunch of online services that will notify you when your site isn't responding, such as Uptime Robot.
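If you want a rough idea of what such a service does, here is a minimal sketch of an uptime check in Node.js/TypeScript; the URL, interval, and timeout are placeholders, and a real service such as Uptime Robot checks from multiple locations and handles the alerting for you:

```typescript
// Minimal uptime check sketch. SITE_URL and the intervals are placeholders;
// an external service does this same check from several locations.
import * as https from "https";

const SITE_URL = "https://www.example.com/"; // placeholder: site to watch
const INTERVAL_MS = 60_000;                  // check once a minute

function checkOnce(): void {
  const startedAt = Date.now();
  const req = https.get(SITE_URL, (res) => {
    const ms = Date.now() - startedAt;
    if (res.statusCode && res.statusCode < 400) {
      console.log(`${new Date().toISOString()} UP (${res.statusCode}, ${ms} ms)`);
    } else {
      console.error(`${new Date().toISOString()} DOWN (HTTP ${res.statusCode})`);
    }
    res.resume(); // discard the response body
  });
  req.on("error", (err) => {
    console.error(`${new Date().toISOString()} DOWN (${err.message})`);
  });
  req.setTimeout(10_000, () => req.destroy(new Error("timeout")));
}

setInterval(checkOnce, INTERVAL_MS);
checkOnce();
```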

One of the options is to integrate your Kentico app with Azure Application Insights.
You can configure:
- .NET performance monitoring via usage analytics (server resources like HTTP response time)
- Status Monitor to diagnose IIS issues on live, running web sites (without re-deploying)
- Usage analytics for pages of the website (client side, like Google Analytics)
- Automated stress testing
- System availability and health monitoring (think uptime/downtime tracking)
- Crash reporting for apps and devices
http://www.mcbeev.com/Blog/April-2016/Application-Insights-for-Kentico
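As a concrete illustration of the client-side piece ("usage analytics for pages of the website ... like Google Analytics"), here is a rough sketch using the Application Insights JavaScript SDK (@microsoft/applicationinsights-web); this is not taken from the linked post, and the instrumentation key is a placeholder:

```typescript
// Sketch: client-side page analytics with the Application Insights JS SDK.
// The instrumentation key is a placeholder for your own resource's key.
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
  config: {
    instrumentationKey: "YOUR-INSTRUMENTATION-KEY", // placeholder
    enableAutoRouteTracking: true, // track client-side route changes automatically
  },
});

appInsights.loadAppInsights();
appInsights.trackPageView(); // one page view per load, like a GA pageview hit
```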

Related

Setting node.js site to auto-start when hosted in Azure Web app

I have a Node.js script that I want to run in Azure on a Web app.
This script is not an Express web site; rather, it's a worker script that polls a database for work to perform, and when it's done it just goes back to polling and waiting, i.e. there is no user interface for it.
I notice that after deploying it, even though it's set up with iisnode, it won't actually start until I fire up a browser and navigate to the Azure Web app host, even though it doesn't have a UI.
Only when I navigate to it does iisnode start logging and fire up my application. Then it happily polls the database and performs the required work.
Does anyone know how you can make a site just automatically start when deployed?
There seem to be autostart web.config settings available with IIS, but I don't know how to get iisnode or the Azure Web app to support it.
I could set up a WebJob on the machine that just performs a GET against the site, but that seems a bit overkill and messy.
You can leverage a Function App to satisfy your requirement. Also, your original solution of building an Azure Web App without a UI should work.
However, please note that Azure App Services are unloaded after they have been idle for some time. You can enable the Always On application setting to keep the app loaded at all times. Please refer to https://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/#application-settings for more details.
If you have any further concerns, please feel free to let me know.
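If the Function App route is taken, the polling worker maps naturally onto a timer trigger. A rough sketch of what that could look like in TypeScript (pollDatabaseForWork and performWork are hypothetical stand-ins for the existing script's logic; the schedule lives in the function's function.json binding):

```typescript
// Sketch of the polling worker as a timer-triggered Azure Function (Node.js
// programming model). The schedule (e.g. "0 */5 * * * *") is defined in the
// accompanying function.json binding, not shown here.
import { AzureFunction, Context } from "@azure/functions";

// Hypothetical stand-ins for the existing worker's database logic.
async function pollDatabaseForWork(): Promise<string[]> {
  return []; // e.g. query the database for pending work items
}
async function performWork(job: string): Promise<void> {
  // e.g. process a single work item
}

const timerTrigger: AzureFunction = async function (
  context: Context,
  myTimer: any
): Promise<void> {
  const jobs = await pollDatabaseForWork();
  for (const job of jobs) {
    await performWork(job);
  }
  context.log(`Processed ${jobs.length} job(s)`);
};

export default timerTrigger;
```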

Application gets very slow - Azure Web App

I have a web site deployed as an Azure Web App. My web site gets very slow at times, and this behavior is random.
On checking the IIS logs during the period of slowness, I found a few requests coming in where the client IP address is blank (it shows "-").
The response time of these requests runs into minutes, and they finally result in an HTTP 500 error. This happens only for the requests where c-ip is blank.
All other requests, which do have a client IP address, are processed successfully. But because of these bad requests my application becomes very slow, and I have to restart my Web App to resolve the issue.
What could be the possible reason behind these requests having a blank client IP address? Could this be a malicious attack on the web site?
Difficult to say. Could you add the Application Insights service to your project? It lets you see what is going on in the five minutes before and after "this" request. A second factor could be the tier of your Azure Web App - is it Free, Shared, or Standard?
Once Application Insights is added, you could share some more insight, because it is important to know what that request is about, not just the fact that it was processed.
https://azure.microsoft.com/en-us/documentation/articles/app-insights-start-monitoring-app-health-usage/
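Independently of Application Insights, the IIS logs already collected can be scanned for the blank-c-ip requests. A rough sketch in Node.js/TypeScript, assuming the standard W3C #Fields line includes c-ip, cs-uri-stem and time-taken, and using a placeholder log file name:

```typescript
// Sketch: flag the slow, blank-c-ip requests in a downloaded IIS (W3C) log.
// Assumes the #Fields line includes c-ip, cs-uri-stem and time-taken (ms);
// the log file name is a placeholder.
import { readFileSync } from "fs";

const lines = readFileSync("u_ex_sample.log", "utf8").split("\n"); // placeholder
let fields: string[] = [];

for (const line of lines) {
  if (line.startsWith("#Fields:")) {
    fields = line.replace("#Fields:", "").trim().split(" ");
    continue;
  }
  if (line.startsWith("#") || line.trim() === "") continue;

  const cols = line.trim().split(" ");
  const col = (name: string) => cols[fields.indexOf(name)];

  // requests with no client IP that took longer than one minute
  if (col("c-ip") === "-" && Number(col("time-taken")) > 60_000) {
    console.log(col("date"), col("time"), col("cs-uri-stem"), `${col("time-taken")} ms`);
  }
}
```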

Monitoring/Alert Rules Requirements for Azure Web Apps, Server Farm vs WebSite

I'm looking to set up my alert rules and monitoring for an ASP.Net MVC application hosted as an Azure Web App, but I am a bit unsure of the nuances of monitoring in the cloud hosted environment.
To me the alerts associated with the website (be they event-based or metric-based) seem equivalent to what I'd want for an on-premises hosted site (i.e. start/stop events, server errors, request/HTTP 2xx-4xx occurrence anomalies). That exercise is standard when monitoring any web server (from my understanding).
Having never administered a server farm, I'm confused as to what metrics/events administrators need to be alerted on with regard to the server farm. The available metrics in Azure for alert rules are:
Data In, Data Out, CPU Percentage, Disk Queue Length, Http Queue Length, Memory Percentage. The available events are: delete, scale down, scale up. With regard to a server farm, when/what does ops need to be made aware of?
I think it's important to understand what a server farm is. For starters it is the same as the "App Service Plan" in the portal. What it is in practice is essentially a mapping between your worker servers and your websites within that App Service Plan, or server farm. This means that the metrics will be measured per worker server in your server farm.
If you only have one server and one site in your server farm, then these metrics would be equivalent to measuring that per site.
Thus if you were concerned about performance metrics such as high CPU use on your worker server machine you could configure an alert to notify you, or an autoscale rule to add more worker servers to serve your web site.

Microsoft Azure free website issue (NodeJS server)

I have a NodeJS server running on an Azure free website. The server has a websocket module installed. Each connected user caches some data in an object so that anyone else who connects can retrieve the cached data from that object. The problem I am experiencing is that the server doesn't seem to keep this object around for very long. I can access the data within it for some time, but if I try later in the day, it's just gone.
Is Azure shutting down the server because it is experiencing no activity, causing the object to be deallocated? Does NodeJS deallocate objects if they aren't used after some time?
Azure Websites, as Ben pointed out in his answer, will evict idle websites. This is especially true with free/shared tiers, since your website is sharing resources with several other tenants on the same VM instances. But even with basic and standard tiers, there may be a need to evict your website (especially since you can have many of your own websites running within a single hosting plan).
With basic/standard tier websites, you have the ability to enable Always On. You'll see this option under the Configure tab.
Once you enable this, your website should remain loaded.
Yep. If there aren't any requests to the app pool, Azure Websites stops your application. That means anything in memory is lost. You can set up a cron job or scheduled task to ping your web app to avoid the app pool timing out due to inactivity.
EDIT: Or, as David Makogon pointed out, with basic/standard tier websites you can enable Always On under the Configure tab.
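For the cron-job/ping approach mentioned above, the ping itself can be as small as the following sketch (the URL and interval are placeholders); on Basic/Standard tiers, Always On is the cleaner option:

```typescript
// Minimal keep-alive ping, assuming Node.js; URL and interval are placeholders.
// This only prevents idle unload - it is not a substitute for durable storage.
import * as https from "https";

const APP_URL = "https://your-app.azurewebsites.net/"; // placeholder
setInterval(() => {
  https.get(APP_URL, (res) => res.resume())            // discard the response body
       .on("error", (err) => console.error(err.message));
}, 5 * 60 * 1000);                                      // every 5 minutes
```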

High [Data Out] in a short period of time - Azure website

I'm a new user of the Azure Websites service and I'm concerned about why my website sends so much data out. The server shows 55 GB of data out in 8 hours. My website does not contain any big files; it's just a visiting-card website.
How can I inspect what is happening?
Graph here: http://ipic.su/img/img7/fs/server.1360266996.png
You can enable Web Server Logging for your Windows Azure Website (on the Configure tab).
Once the logs are enabled you can download them via FTP (the FTP address for your logs can be found on the main Dashboard tab).
In the web server logs you will find all the necessary details, such as which resources were downloaded and how many bytes were sent, etc.
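To see which resources account for the data out, the downloaded logs can be aggregated by URL. A rough sketch in Node.js/TypeScript, assuming the W3C #Fields line includes cs-uri-stem and sc-bytes (sc-bytes must be among the logged fields) and using a placeholder file name:

```typescript
// Sketch: total bytes sent per URL from a downloaded W3C web server log,
// to see which resources account for the data out. File name is a placeholder.
import { readFileSync } from "fs";

const lines = readFileSync("u_ex_sample.log", "utf8").split("\n"); // placeholder
const bytesByUrl = new Map<string, number>();
let fields: string[] = [];

for (const line of lines) {
  if (line.startsWith("#Fields:")) {
    fields = line.replace("#Fields:", "").trim().split(" ");
    continue;
  }
  if (line.startsWith("#") || line.trim() === "") continue;

  const cols = line.trim().split(" ");
  const url = cols[fields.indexOf("cs-uri-stem")];
  const sent = Number(cols[fields.indexOf("sc-bytes")]) || 0;
  bytesByUrl.set(url, (bytesByUrl.get(url) ?? 0) + sent);
}

// print the top 10 resources by bytes sent
Array.from(bytesByUrl.entries())
  .sort((a, b) => b[1] - a[1])
  .slice(0, 10)
  .forEach(([url, bytes]) =>
    console.log(`${(bytes / 1024 / 1024).toFixed(1)} MB  ${url}`));
```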
I hope that will help.
