Recently one of our web services has been failing its Azure database backups with the following error:
The specified point in time, '11/24/2021 16:57:50', is not valid for database 'R6tvK6Nc9W'. Valid points in time should be between '11/24/2021 16:59:48' and '11/24/2021 17:03:50' inclusive.
The timestamps vary, but in each case the top of the valid range is exactly six minutes after the specified time. The error comes from Azure; until recently everything worked without issue, and no major code changes have been made on our end for some time.
I'm more or less new to Azure, having only recently taken over from the previous developer who worked on this, so if this is something obvious or common I apologise; it wasn't covered in the handover I was given.
Check whether the database is part of a failover group, and remove it from the group before starting the restore.
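If it helps, here's a minimal sketch of both checks driven from Python, assuming the Azure CLI is installed and logged in. The resource group, server, and failover group names are placeholders you'd replace with your own.

```python
import json
import subprocess

# Placeholder names -- substitute your own resource group, server,
# database, and failover group.
RG, SERVER, DB, FG = "my-rg", "my-sqlserver", "R6tvK6Nc9W", "my-failover-group"

def az(*args):
    """Run an az CLI command and return its parsed JSON output."""
    result = subprocess.run(["az", *args, "--output", "json"],
                            check=True, capture_output=True, text=True)
    return json.loads(result.stdout)

# Inspect the database's valid point-in-time restore window.
db = az("sql", "db", "show",
        "--resource-group", RG, "--server", SERVER, "--name", DB)
print("Earliest restore point:", db["earliestRestoreDate"])

# Detach the database from the failover group before restoring...
az("sql", "failover-group", "update",
   "--resource-group", RG, "--server", SERVER, "--name", FG,
   "--remove-db", DB)

# ...and re-attach it afterwards with:
#   az sql failover-group update ... --add-db R6tvK6Nc9W
```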
I've been using Functions for a while, and it seems the longer a Function is around, the less accurate the portal logs are. For my first three months or so of using Functions, everything monitoring/logging-wise was fine. Over time things started getting less accurate.
Now I see the real logs by opening Microsoft Azure Storage Explorer and checking the AzureWebJobsStorage account.
First, when I bring up the code/logs view, the most recent log entry it shows isn't accurate; it's usually from a few days ago, or the last error. When the function triggers, though, it does show the live feed. That isn't a big deal. What's bad is the Monitor being inactive and not being able to see its logs. I suppose I can just use Azure Storage Explorer (or read the logs from storage directly, as in the sketch at the end of this post).
The Monitor invocation logs always seem to be a few days behind. They used to be accurate, but for the last month or so they have lagged by days.
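For reference, this is roughly how I pull the logs out of storage without Storage Explorer, as a sketch using the azure-storage-blob package. The container name is my assumption about where the Functions host writes, and may differ by runtime version.

```python
from azure.storage.blob import BlobServiceClient

# Value of the function app's AzureWebJobsStorage setting.
conn_str = "<AzureWebJobsStorage connection string>"

service = BlobServiceClient.from_connection_string(conn_str)
# "azure-webjobs-hosts" is where the Functions host keeps its state/logs;
# the exact container and layout may differ by runtime version.
container = service.get_container_client("azure-webjobs-hosts")

for blob in container.list_blobs():
    print(blob.last_modified, blob.name)
```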
Dan,
The local, file-based logs exist primarily to support the portal experience, so the behavior you're observing in the log window is expected: the logs are not written by the runtime as part of the normal invocation process, only when you're actively developing/testing in the portal.
The issue you're experiencing with the monitor is due to a regression that has been patched and should be fully rolled out today (you can see more details here).
We've been listening to feedback on our logging capabilities, and there has been a lot of investment in that area, resulting in the recently announced built-in integration with Application Insights. That integration addresses some of the pain points you've brought up as well as other issues, so I'd strongly recommend trying it out. You can find more information about it here.
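If you want to try it, wiring a function app to an existing Application Insights resource is a single app-setting change. A sketch via the Azure CLI, with placeholder names (newer function apps use the APPLICATIONINSIGHTS_CONNECTION_STRING setting instead of the instrumentation key):

```python
import subprocess

# Placeholder resource names and key; newer function apps use the
# APPLICATIONINSIGHTS_CONNECTION_STRING setting instead.
subprocess.run([
    "az", "functionapp", "config", "appsettings", "set",
    "--resource-group", "my-rg", "--name", "my-function-app",
    "--settings", "APPINSIGHTS_INSTRUMENTATIONKEY=<your-instrumentation-key>",
], check=True)
```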
The Azure AD differential query is fast when we query the difference between the current Azure AD state and a previous state no older than 30-60 minutes. But when we query against a state from a week or a month ago, it takes around 10 minutes to return changes, even if the directory is small and only 3-4 attributes changed in that period, which is very slow. Is this expected behavior? Are there any workarounds?
Based on my test, this is not expected behavior. My first differential query request was made on 10/10/2016, and today a differential query REST call issued via Fiddler took about 30 seconds.
To narrow down this issue, I suggest you call the service from a different network to make sure the problem isn't network-related. Testing other Azure AD Graph REST endpoints is also recommended, to see whether the issue is specific to Azure Active Directory.
For sure it's not... I can query 21K users in 3-4 minutes over a 24 Mbit DSL line with partial properties (only those I want), and in less than 10 minutes for all properties (and the objects have almost all properties set, so deserialization is fully in effect).
Delta queries, a few seconds, always.
Are you using your own routine over a basic HTTP client or are you using classes provided in the MS.Azure.AD assembly?
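For comparison, the pattern I use is the standard delta loop, shown here against Microsoft Graph, the successor to the Azure AD Graph differential query (the shape is the same: page through nextLink, persist deltaLink). A sketch over a plain HTTP client; acquiring the bearer token is out of scope.

```python
import requests

def sync_users(token, delta_link=None):
    """One sync round: page through all changes, return the new deltaLink."""
    url = delta_link or "https://graph.microsoft.com/v1.0/users/delta"
    headers = {"Authorization": f"Bearer {token}"}
    while True:
        page = requests.get(url, headers=headers).json()
        for user in page.get("value", []):
            print(user.get("id"), user.get("displayName"))
        if "@odata.nextLink" in page:
            url = page["@odata.nextLink"]   # more pages in this round
        else:
            return page["@odata.deltaLink"]  # persist for the next sync

# Re-running with a stored deltaLink returns only what changed since then.
```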
Whilst accepting that backups in Windows Azure Websites are a preview feature, I can't seem to get them working at all. My site is approximately 3 GB and on the Standard tier. The settings are configured to back up to a geo-redundant storage account with no other containers. There is no database selected; I'm only backing up the files.
In the admin portal, if I use the manual Backup Now button, a 0-byte file is created within the designated storage account, dated 01/01/0001 00:00:00. However, even after several days, it is not replaced with the 'actual' file.
If I use the automated backup scheduler, nothing happens at all - no errors, no 0 byte files.
Can anyone shed any light on this please?
The backup/restore feature is still in preview and officially supports only 2 GB of data. From the error message you posted ("backup is currenly in progress"), it seems you probably hit a bug that was fixed last week (it left lingering backups behind, which blocked subsequent backups).
Please try it again, you should be able to invoke it now. If you find another error message in operational logs, feel free to post it here (just leave the RequestId in it unscrambled - we can correlate using that) and we can take a look.
However, as I mentioned at the beginning, more than 2 GB is not fully supported yet (you might not be able to, e.g., round-trip your data: back up and then restore).
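In the meantime, if you want to verify whether a backup blob was actually written (rather than the 0-byte placeholder), here is a quick sketch with the current azure-storage-blob package; the container name is a placeholder for whatever you configured.

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string and container name.
service = BlobServiceClient.from_connection_string("<storage connection string>")
container = service.get_container_client("website-backups")

# A real backup shows a non-zero size and a sensible last-modified time.
for blob in container.list_blobs():
    print(f"{blob.name}: {blob.size} bytes, last modified {blob.last_modified}")
```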
Thanks,
Petr
I have a number of small MVC apps deployed as Microsoft Windows Azure websites. This has been working for several months.
Yesterday I rolled out a new one, and the deployment was unremarkable, everything worked fine. But a couple of hours later, access to the site was unavailable. The symptoms were that when the browser tried to navigate to the URL for that site, it would try to load for several minutes and then just give up with a completely blank page.
I attempted to stop and restart the site, and it worked once, but the symptoms came back several minutes later. The next time I tried stopping and restarting, it didn't help.
I deployed the identical app to three additional URLs. Again, immediately on deployment they all work fine; however, each fails at some point later. They don't all fail at once. Sometimes restarting the site fixes the problem, and sometimes not.
IMPORTANT: If I wait for some period of time, the site may start to work again on its own.
However, deploying four versions of the app so that our users can go to a backup one if the primary one is not working is not optimal.
Any words of wisdom as to how I might go about debugging this?
ADDITIONAL INFO NOV 25, 2013:
When sites are failing, the IIS logs show either 500 (Internal Server Error) or 502 (Bad Gateway) responses. Our own MVC code is never hit, not even app_start.
You can start by checking the logs and by remote debugging:
http://www.drdobbs.com/windows/azure-sdk-22-supports-visual-studio-2013/240163499
Are the apps working locally?
Might not be the same problem, but from time to time our Azure instances will get the blue question mark of death as a status.
The reason, we found out, is that Microsoft performs maintenance and upgrades on instances from time to time. If you have just one instance in a cloud service/role, it will be down for the duration of that maintenance.
I have confirmed this with their support.
The only way to get around this that I know of is to create two instances. Then Microsoft guarantees ~99.95% availability.
Of course I also confirmed with them that this means twice the cost. =/
If that's not the issue I would enable RDP and get onto the machine to see what the problem is. Microsoft has these tools to help debug problems: http://blogs.msdn.com/b/kwill/archive/2013/08/26/azuretools-the-diagnostic-utility-used-by-the-windows-azure-developer-support-team.aspx
First, you should always run multiple instances of your web role with more than 1 upgrade domain. This is configurable in the service definition (CSDEF). Without this, you don't get an SLA from Microsoft, so you can't really complain that the VMs go down.
Second, to figure out what might be going on with these boxes, you should have logging in place (my preference is to roll my own with page blobs or table storage; see the sketch at the end of this answer), AND you should always have RDP access to a pre-production environment (production as well, if you're not too fussed about security). Once on the box, look through the event viewer for errors.
Third, when an outage occurs check out the azure service dashboard (http://www.windowsazure.com/en-us/support/service-dashboard/) for outages.
Lastly, contact Microsoft support. It may take a few hours, but they are pretty good.
Given that it is happening repeatedly and for extended periods of time (more than 5 minutes), I would bet there's something wrong with your hosted service. Again, RDP in and poke around. Good luck.
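Here's what I mean by rolling your own with table storage, as a sketch using the current azure-data-tables package (the SDK of the day differed, but the idea is the same):

```python
from datetime import datetime, timezone
from azure.data.tables import TableServiceClient

# Placeholder connection string; table name is arbitrary.
service = TableServiceClient.from_connection_string("<storage connection string>")
table = service.create_table_if_not_exists("rolelogs")

def log(role_instance, message):
    """Append one log entry, partitioned by day for cheap range scans."""
    now = datetime.now(timezone.utc)
    table.create_entity({
        "PartitionKey": now.strftime("%Y%m%d"),
        "RowKey": f"{now.isoformat()}-{role_instance}",
        "Message": message,
    })

log("WebRole_IN_0", "Application_Start reached")
```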
To debug your sites, try enabling diagnostic logs:
http://www.windowsazure.com/en-us/develop/net/common-tasks/diagnostics-logging-and-instrumentation/
Another nice way to look around your site is using the debug console:
https://github.com/projectkudu/kudu/wiki/Kudu-console
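If you prefer the command line, the same diagnostics can be switched on and streamed with the Azure CLI; a sketch with placeholder names (the CLI postdates the linked article):

```python
import subprocess

# Turn on application and web-server logging to the local file system.
subprocess.run([
    "az", "webapp", "log", "config",
    "--resource-group", "my-rg", "--name", "my-site",
    "--application-logging", "filesystem",
    "--web-server-logging", "filesystem",
], check=True)

# Stream the logs live while reproducing the failure.
subprocess.run([
    "az", "webapp", "log", "tail",
    "--resource-group", "my-rg", "--name", "my-site",
], check=True)
```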
I have two free subscriptions for Windows Azure, and because I exceeded the limit on the first one, Microsoft closed it down. So I tried to deploy my application from the other subscription and changed a few settings, and it seems to take a lot longer, and the DNS name of the deployed application (in the production environment) does not seem to work. (I've been waiting for about 15 minutes; on the other subscription the link started working almost immediately.) Also, my web role seems to be stuck in a busy state for a very long time.
The application always worked fine, and now I'm getting all this trouble just by switching subscriptions?? I'm getting really frustrated with this, especially because it all worked perfectly before. Now I have to 'waste' my time getting everything to work again and can't start anything new. I don't think this is normal, but I can't seem to find a solution either.
edit:
After over half an hour the DNS finally started working, but this still doesn't fix the problem of the extremely slow deployment and the busy state of the web role.
Please study the discussion below to understand why the time to deploy an application can vary between 10 and 30 minutes:
Is there a way to reduce time between Azure deployment start and role OnStart() code being invoked?
The details above should help answer your statement ".. this still does not fix the problem with the extreme slow deploying and the busy state of the webrole..".
To add to that: during deployment your application goes through several states, and in some cases the time spent in one state can be longer than expected. While this happens you will see statuses such as "Busy", "Initializing", "Starting", etc., which indicate how far along the deployment is. I hope this helps you understand the time taken during deployment.