I have a problem with my health rule configuration. All I want is a health rule that checks whether a service is running or not. I have two types of services:
IIS
Standalone services
The problem is that some services are flagged as critical due to a health rule violation. For example, I have two identical services on two hosts, and the only difference is that one of them is used less often. Due to the lack of activity on that service, AppDynamics marks it as critical.
Most probably I have done something wrong. Any ideas?
I'm struggling with this as an additional task. I tried the AppDynamics community website, but found nothing that pointed me to a solution.
Here's my health rule configuration:
If you only want to monitor whether your IIS worker processes and standalone services are running, you can use the CLR Crash event in your policy configuration.
AppDynamics automatically creates CLR Crash events if your IIS worker process or standalone service crashes.
You can find the details of CLR Crash events here:
https://docs.appdynamics.com/display/PRO45/Monitor+CLR+Crashes
Also, a sample policy configuration:
Policy Configuration Screen
Related
I have an application deployed in Kubernetes. I am using the Istio service mesh. One of my services needs to be restarted when a particular error occurs. Is this something that can be achieved using Istio?
I don't want to use a cronjob. Also, making the application restart itself seems like an anti-pattern.
The application is a Node.js app using Fastify.
Istio is a network connectivity tool. I was writing this answer when David Maze made a very apt point in a comment:
Istio is totally unrelated to this. Another approach could be to use a Kubernetes liveness probe if the cluster can detect the pod is unreachable; but if you're going to add a liveness hook to your code, the Kubernetes documentation also endorses just crashing on unrecoverable failure.
The kubelet uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs.
See also:
health checks in the cloud - GCP example
creating a custom readiness/liveness probe
customizing a liveness probe
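Concretely, the liveness-probe approach from the quote above can be sketched as a Deployment snippet like the following. The names, port, and /healthz path are placeholders, and your Fastify app must actually expose such a route that returns a non-2xx status when the unrecoverable error occurs:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-fastify-app        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-fastify-app
  template:
    metadata:
      labels:
        app: my-fastify-app
    spec:
      containers:
      - name: app
        image: my-fastify-app:latest   # placeholder image
        ports:
        - containerPort: 3000
        livenessProbe:
          httpGet:
            path: /healthz             # your app must serve this route
            port: 3000
          initialDelaySeconds: 10      # give the app time to boot
          periodSeconds: 15            # probe interval
          failureThreshold: 3          # restart after 3 consecutive failures
```

When the probe fails failureThreshold times in a row, the kubelet restarts the container, which matches the "restart on a particular error" requirement as long as that error makes the health route stop returning 2xx.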
State of the application:
A single virtual machine running an Apache server.
Application exposed via the virtual machine's public IP (not behind a load balancer)
I have a health probe endpoint running that needs to be probed every few seconds to see if the app is up, and to trigger an alert if it is not.
What are my options? I want to get the health probe up and running first, before I move to a virtual machine scale set and a load balancer.
You need something like a watchdog that calls the health endpoint at a given interval. In Azure you can use an availability test. You can then create alerts based on this availability test and optionally build dashboards that show the status over a given period.
As a bonus, you might integrate an Application Insights resource into your web app to get detailed monitoring. See the docs.
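Before wiring up an Azure availability test, you can sanity-check the endpoint with a tiny watchdog of your own. This is only a sketch; the URL, interval, and alert action are placeholders:

```python
import time
import urllib.error
import urllib.request


def check_health(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx status in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, DNS failure, or HTTP error status.
        return False


def watch(url: str, interval: float, alert) -> None:
    """Poll the endpoint forever; call `alert(url)` on every failed probe."""
    while True:
        if not check_health(url):
            alert(url)  # e.g. send an email or post to a webhook
        time.sleep(interval)


# Usage (runs forever; the IP and path are placeholders):
# watch("http://203.0.113.10/healthprobe", interval=10.0,
#       alert=lambda u: print(f"ALERT: {u} is down"))
```

An Azure availability test does essentially this for you from multiple locations, so treat the script as a stopgap or a local debugging aid rather than a production monitor.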
Under Support + troubleshooting -> Resource health in your virtual machine's portal panel, you can set up a health alert.
You can then select under which conditions the alert should be triggered. In your case, Current resource status: Unavailable should work just fine. You can also configure a custom notification (email) under Actions, or implement logic that triggers an Azure Function or Logic App to perform an action when the VM is unavailable.
To detect whether the application on your Apache server is working correctly, you can use a monitoring solution that checks the Apache error logs.
If I have a single instance of an App Service, is it still recommended to enable Health check (e.g. for monitoring purposes)?
If yes, then what about the Always On feature? Doesn't that double up on requests that ultimately do the same thing? I mean keeping the application running without idling and checking for server HTTP errors.
Azure Web App's Always On and Health check features are used for different use cases.
The Always On setting keeps the app always loaded, which eliminates longer load times after the app has been idle. With Always On enabled, the front-end load balancer sends a request to the application root.
The Health check setting allows you to monitor the health of your site using Azure Monitor, where you can see the site's historical health status and create a new alert rule.
You can disable Always On and just use Health check; that will cover both use cases:
keep application running without idle
monitor the health of your site
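Health check simply issues GET requests against a path you configure and treats a 200-range response as healthy. A minimal sketch of such an endpoint using Python's standard library follows; the /healthz path is an assumption, and in a real app you would add an equivalent route to your existing framework instead:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    """Answers 200 OK on the configured health path, 404 elsewhere."""

    def do_GET(self):
        if self.path == "/healthz":  # the path you point Health check at
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"OK")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request console logging
        pass


def serve_in_background(port: int = 0) -> HTTPServer:
    """Start the health server on a daemon thread and return it."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A good health route should verify real dependencies (database reachable, disk writable) rather than just returning a constant 200, so a failing dependency actually surfaces in the monitoring.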
I have a web application that is currently running on IIS in 3 Azure VMs. I have been working to make my application App-Services friendly, but would like to test the migration to App-Services in a safe / controlled environment.
Would it be possible to spin up the App-Service and use an Azure Load Balancer to redirect a percentage of traffic off the VM and onto the App-Service?
Is there any other technology that would help me get there?
You might be able to achieve this if you are using an App Service Environment and an internal load balancer:
https://learn.microsoft.com/en-us/azure/app-service/environment/app-service-environment-with-internal-load-balancer
However, based on your description of your current setup, I don't believe there is an ideal solution, as a standard load balancer only allows backend ports to be mapped to VMs. Using an Application Gateway might be another option as well:
https://learn.microsoft.com/en-us/azure/application-gateway/
I would suggest you make use of the deployment and production slots that come with a Web App. Once you have the web app running in the dev slot, test the site to ensure everything works as expected. Once it does, swap it into the production slot and reroute all traffic from the VMs to the App Service.
All in all, running an app on a Web App is quite simple. Microsoft takes away the need to manage the VM settings, so you can simply deploy and run. I don't see you having any issues with the migration; the likelihood of problems is small. You can also minimize the risk by performing the migration during off hours, in case you need to make any changes.
There is also some Web App migration guidance you might find useful:
https://learn.microsoft.com/en-us/dotnet/azure/dotnet-howto-choose-migration?view=azure-dotnet
We have a WCF service hosted as a Windows service, with endpoints exposed over TCP. We need to migrate it and host it on Azure Service Fabric.
We would like to know which option within Service Fabric would be better:
1. Stateless Service
2. Guest Executable
Also, what are the steps to migrate it?
Any pointers would be very useful.
Thanks
Avanti
Both options are suitable for you:
Guest executable: you can migrate the service as-is, without code changes. The only work required is configuring it in the application within Service Fabric: exposing the ports used by the service, defining startup parameters, and providing any required settings.
Stateless service: you need to rewrite the hosting of the service using the Service Fabric application model. This changes the original solution and might affect other dependencies, for example if the service uses 32-bit DLLs that will not run in a 64-bit process.
I would recommend you start by moving it as a guest executable, then move to a stateless service at a later stage if you think you can make better use of the platform features.
Regarding the guidance, you should be fine following the official documentation.
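For the guest-executable route, the configuration mentioned above (exposed ports, startup parameters) goes into the service's ServiceManifest.xml. A rough sketch follows; the package names, version, executable, and port are all placeholders:

```xml
<?xml version="1.0" encoding="utf-8"?>
<ServiceManifest Name="WcfServicePkg" Version="1.0.0"
                 xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceTypes>
    <!-- UseImplicitHost="true" marks this as a guest executable -->
    <StatelessServiceType ServiceTypeName="WcfServiceType" UseImplicitHost="true" />
  </ServiceTypes>
  <CodePackage Name="Code" Version="1.0.0">
    <EntryPoint>
      <ExeHost>
        <Program>MyWcfService.exe</Program>  <!-- placeholder executable -->
        <WorkingFolder>CodePackage</WorkingFolder>
      </ExeHost>
    </EntryPoint>
  </CodePackage>
  <Resources>
    <Endpoints>
      <!-- Expose the TCP port your WCF endpoints listen on (placeholder) -->
      <Endpoint Name="WcfTcpEndpoint" Protocol="tcp" Port="8085" />
    </Endpoints>
  </Resources>
</ServiceManifest>
```

Declaring the endpoint resource lets Service Fabric open the port on the node; the WCF service itself must still bind to that port in its own configuration.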