Availability test still running after Application Insights instance was deleted - Azure

We had an Application Insights instance with two availability tests defined inside it. I was not able to edit or delete the availability tests, so I just deleted the Application Insights instance itself. However, the availability tests are somehow still running. How can I get rid of them?
I tried searching in Azure for the Application Insights instance and the availability tests, but they are gone.

Related

Newly added db tables on PostgreSQL are getting dropped and re-created during Strapi v4 startup when running on a blue-green deploy on AWS

We’re having a bizarre database issue using Strapi V4 (Node.js API framework)… Newly added PostgreSQL DB tables are getting dropped and re-created during startup ONLY when running on a blue-green deployment on AWS.
We’d appreciate help from anyone who has specific experience with this kind of situation using this framework or could lend us a senior/DevOps engineer for an hour or two to get a pair of fresh eyes on the problem.
We’re unable to reproduce this in local dev environments or cloud environments that use in-place deployment, so it’s been extremely troublesome trying to figure out the exact cause. (We can’t switch prod to in-place deploys because of uptime SLA with our clients.)

Can test and production share the same cloud Kubernetes environment?

I have created a Kubernetes cluster and successfully deployed my Spring Boot application + nginx reverse proxy for testing purposes.
Now I'm moving to production. The only difference between test and prod is the connection to the database and the nginx basic auth (of course, scaling parameters are also different).
In this case, considering I'm using a cloud provider's infrastructure, what are the best practices for Kubernetes?
Should I create a new cluster only for prod? Or could I use the same cluster and use labels to identify test and production machines?
For now, having two clusters seems a waste to me: the provider assures me that I have the hardware capacity, and I can set different request/limit/replication parameters according to the environment. Also, for now, I only have two images to deploy per environment (though for production I will opt for horizontal scaling of 2).
I would absolutely 100% set up a separate test cluster. (...assuming a setup large enough where Kubernetes makes sense; I might consider an easier deployment system for a simple three-tier app like what you're describing.)
At a financial level this shouldn't make much difference to you. You'll need some amount of hardware to run the test copy of your application, and your organization will be paying for it whether it's in the same cluster or a different cluster. The additional cost will only be the cost of the management plane, which shouldn't be excessive.
At an operational level, there are all kinds of things that can go wrong during a deployment, and in particular there are cases where one Kubernetes resource can "step on" another. Deploying to a physically separate cluster helps minimize the risk of accidents in production; you won't accidentally overwrite the prod deployment's ConfigMap holding its database configuration, for example. If you have some sort of crash reporting or alerting set up, "it came from the test cluster" is a very clear check you can use to not wake up the DevOps team. It also gives you a place to try out possibly risky configuration changes: if you run your update script once in the test cluster and it passes then you can re-run it in prod, but if the first time you run it is in prod and it fails, that's an outage.
Depending on what you're using for a CI system, the other thing you can set up is fully automated deploys to the test environment. If a commit passes its own unit tests, you can have the test environment always running current master and run integration tests there. If and only if those integration tests pass, you can promote to the production environment.
It is true that it is definitely better practice to use a separate cluster, since in your test cluster you could do something wrong (especially resource-wise) and take down your prod environment. But if you can't afford it, and if you feel confident with k8s, you can put your prod environment in a different namespace.
I don't know about Azure, but on GKE you can scale the number of nodes down to zero. If that is possible on Azure, maybe you can scale the test cluster's nodes down to zero whenever you're not using it, and still keep two clusters.
It's better to use different clusters for production and dev/testing. Please refer here for best practices.

Alternatives to Application Insights

I have an existing on-prem/cloud environment in which I am running my enterprise application, and I would like to implement Application Insights to capture telemetry, but I have a few issues with it. Are there any alternatives to using Application Insights? I have two concerns here:
1) it might not be possible to install software in the production environment; 2) restarting the IIS server would pull all the sites down for at least a minute or two. It would be great if someone could suggest alternative ways of leveraging App Insights. Thanks in advance :)
There are two ways to use Application Insights:
1) using the SDK, where you add the SDK to your service. At some point you have to deploy the service, so when you deploy, you'd also deploy Application Insights inside that service
2) using Status Monitor, which does require restarting IIS. Status Monitor isn't required, but it does let you collect extra, detailed information that you wouldn't get from the SDK alone.
A lot of people end up doing both: (1) so they can do custom collection of events, traces, etc., and (2) to get detailed dependency calls.
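For context on option (1), here is a minimal sketch of the kind of custom collection the SDK enables, assuming the Microsoft.ApplicationInsights NuGet package is installed and the instrumentation key is configured in ApplicationInsights.config; the class, method, and event names are made up for illustration:

```csharp
using System;
using Microsoft.ApplicationInsights;

public class OrderService
{
    // Picks up the instrumentation key from ApplicationInsights.config.
    private readonly TelemetryClient telemetry = new TelemetryClient();

    public void PlaceOrder(string orderId)
    {
        // Custom event: shows up under "Custom Events" in the portal.
        telemetry.TrackEvent("OrderPlaced");

        try
        {
            // ... business logic here ...
            telemetry.TrackTrace("Order " + orderId + " processed");
        }
        catch (Exception ex)
        {
            // Send the exception to Application Insights, then rethrow.
            telemetry.TrackException(ex);
            throw;
        }
    }
}
```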
But as AlexB suggested, setting things up so you can swap between deployment slots is one of the best approaches, if possible, since you can swap slots without any downtime at all.

FluentMigrator Migration From Application_Start

I am currently changing our database deployment strategy to use FluentMigrator and have been reading up on how to run it. Some people have suggested that it can be run from Application_Start. I like this idea, but other people are saying no without specifying reasons, so my questions are:
Is it a bad idea to run the database migration on application start and if so why?
We are planning on moving our sites to Azure cloud services, and if we don't run the migration from Application_Start, how and when should we run it, considering we want to keep the deployment as simple as possible?
Wherever it is run, how do we ensure it runs only once, given that we will have a website and multiple worker roles as well? (We could just ensure the migration code is only called from the website, but in the future we may scale up to 2 or more instances; would that mean it could run more than once?)
I would appreciate any insight into how others handle database migration during deployment, particularly from the perspective of deployments to Azure cloud services.
EDIT:
Looking at the comment below, I can see the potential problems of running during Application_Start. Perhaps the issue is that I am trying to solve the problem with the wrong tool. FluentMigrator may not be the way to go in our case, as we have a large number of stored procedures, views, etc., so as part of the migration I was going to have to use SQL scripts to keep them at the right version, and I don't think migrating down would be possible.
What I liked about the idea of running during Application_Start was that I could build a single deployment package for Azure, upload it to staging, have the database migration run automatically, and then just swap into production, rather than running manual scripts.
Running migrations during Application_Start can be a viable approach. Especially during development.
However there are some potential problems:
Application_Start will take longer and FluentMigrator will be run every time the App Pool is recycled. Depending on your IIS configuration this could be several times a day.
If you do this in production, users might be affected, e.g. trying to access a table while it is being changed will result in an error.
DBAs don't usually approve.
What happens if the migrations fail on startup? Is your site down then?
My opinion ->
For a site with a decent amount of traffic I would prefer to have a build script and more control over when I change the database schema. For a hobby (or small non-critical project) this approach would be fine.
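If you do go the Application_Start route despite the caveats above, you still need to handle the "runs only once" concern from the question. A hedged sketch, assuming SQL Server: take an application lock with sp_getapplock before invoking the runner, so only one web or worker instance migrates at a time. RunExclusively and RunMigrations are hypothetical names, and FluentMigrator's own VersionInfo table already makes a second sequential run a no-op; the lock only guards against concurrent runs.

```csharp
using System.Data;
using System.Data.SqlClient;

public static class StartupMigrations
{
    public static void RunExclusively(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Session-scoped application lock: only one instance holds it
            // at a time; it is released when the connection closes.
            using (var command = new SqlCommand("sp_getapplock", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@Resource", "db-migrations");
                command.Parameters.AddWithValue("@LockMode", "Exclusive");
                command.Parameters.AddWithValue("@LockOwner", "Session");
                command.Parameters.AddWithValue("@LockTimeout", 60000); // ms

                var result = command.Parameters.Add("@Result", SqlDbType.Int);
                result.Direction = ParameterDirection.ReturnValue;
                command.ExecuteNonQuery();

                // 0 = granted, 1 = granted after waiting, negative = failure.
                if ((int)result.Value < 0)
                    return; // another instance is migrating; skip this one
            }

            RunMigrations(connectionString); // hypothetical FluentMigrator call
        }
    }

    private static void RunMigrations(string connectionString) { /* ... */ }
}
```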
An alternative approach that I've used in the past is to make your migrations non-breaking: write them in such a way that they can be deployed before any code changes and still work with the existing code. That way, code and migrations can be deployed independently 95% of the time. For example, instead of changing an existing stored procedure, you create a new one; or if you want to rename a table column, you add a new one. (There's a short sketch of this after the list below.)
The benefits of this are:
Your database changes can be applied before any code changes. You're then free to roll back any breaking code changes or breaking migrations.
Breaking migrations won't take the existing site down.
DBAs can run the migrations independently.
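To make the non-breaking idea concrete, here is a minimal sketch of such a migration in FluentMigrator; the table and column names are made up for illustration. Old code keeps working against the new schema, and a later migration can clean up once the new code is deployed everywhere.

```csharp
using FluentMigrator;

[Migration(20140101120000)]
public class AddCustomerEmailColumn : Migration
{
    public override void Up()
    {
        // Nullable, so existing INSERTs that don't know about the column
        // still succeed; old code simply ignores it.
        Alter.Table("Customers")
            .AddColumn("EmailAddress").AsString(256).Nullable();
    }

    public override void Down()
    {
        Delete.Column("EmailAddress").FromTable("Customers");
    }
}
```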

Cloud service restored to the last production deployment

I updated the production deployment yesterday morning, then made changes to the service files using a remote connection
(adding and updating files), and everything was OK.
This morning, all the changes I had made after the deployment were undone, and customers were using the old version; this has cost us hundreds of thousands of pounds.
I need to know what happened. Nothing appeared in the operations log.
Probably what has happened is that Microsoft has updated your servers in the cloud data centre and re-deployed your application from the original deployment package. This is in their terms and conditions: you should not make any important manual changes to the deployment after it is deployed unless they are stored in the portal (environment settings, etc.), otherwise they might be lost during updates or reboots.
I learned this the hard way too. I had a cache role with only one instance (I thought it only made sense with one instance) and while updates happened, my whole site went down several times over several days!
PaaS services are stateless, which means the VMs running your service can be destroyed and recreated at any time, at which point the VM will be recreated with the content from your original .cspkg.
For more information see http://blogs.msdn.com/b/kwill/archive/2012/09/19/role-instance-restarts-due-to-os-upgrades.aspx and http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx.
As others have said, PaaS Web Roles are stateless. If you're making manual configuration changes to your deployed solution package after it has been auto-deployed then any re-deployment by the Azure fabric will simply deploy the package minus your manual changes. To solve this issue you could use startup tasks to apply your manual changes using a PowerShell script or similar (depending on what you're changing). See http://msdn.microsoft.com/en-us/library/jj129544.aspx.
Note that startup tasks run on every role start, not just when a machine gets re-imaged or rebooted.
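As a code-level alternative to a script-based startup task (plainly a different technique from the PowerShell approach above), here is a hedged sketch of re-applying customizations from RoleEntryPoint.OnStart; ApplyManualTweaks is a hypothetical placeholder for whatever you changed by hand. One caveat on this design choice: in full-IIS web roles the RoleEntryPoint runs in a separate process from IIS itself, so script startup tasks remain the better fit for IIS-level changes.

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Runs on every (re)start of the instance, including re-images,
        // so the customizations survive the fabric re-deploying the .cspkg.
        ApplyManualTweaks();
        return base.OnStart();
    }

    private static void ApplyManualTweaks()
    {
        // e.g. copy config files into place, set registry keys, etc.
    }
}
```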
