How to stop Kentico Event Log from getting huge? - azure

Just checked my Kentico database (Azure hosting) and it has ballooned to 21 GB. This happened fairly recently; four months ago it was just a bit above 1 GB.
Checked the tables and my Event Log table has over 2,000,000 entries!!!
Nothing has changed recently, my settings under Settings -> System -> Event Log are still the same:
Event Log Size: 1000
Since globals are also set to 1000, usually I have 2000 or so entries in the event log table.
Does anyone know what happened here? And how do I stop it from happening again?

If you have online marketing turned on and a popular site, there will be lots of data in the OM_ tables. But around 20 GB still sounds huge. Have lots of asset files been added to the Content Tree, like videos? Also, is the database set to log all transactions? What kind of log entries are most common: errors or information events? Do you have custom code that could produce lots of log entries?
You can also email Kentico support for a "check big tables" SQL script that helps identify the largest tables.
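In the meantime, a generic query along these lines (standard system DMVs, works on Azure SQL; this is just a sketch, not the Kentico support script) can show where the space is going:

-- List user tables by reserved space (largest first)
SELECT t.name AS TableName,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [Rows],
       SUM(ps.reserved_page_count) * 8 / 1024 AS ReservedMB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY ReservedMB DESC;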

You should look into a few other areas as well. Having 2 million event log records won't cause a 20 GB jump in DB size by itself; from a Kentico perspective the event log rows hold pretty minimal data.
Take a look at the analytics, version history, email queue, web farm and scheduled task tables. Also check out the recycle bin. Are you integrating with any other system or inserting/updating a lot of data via the API? If so, that can cause a lot of transaction log growth, and with Azure SQL I don't know of a way to clean that up.
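If you want to rule the transaction log in or out, a quick check (standard DMV, available on Azure SQL and SQL Server 2012+) is:

-- How much of the transaction log is actually in use for the current database
SELECT total_log_size_in_bytes / 1048576 AS TotalLogMB,
       used_log_space_in_bytes / 1048576 AS UsedLogMB,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;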
My suggestion is to check the other tables and not just the event log. You could also query the event log manually via SSMS and see what the top 100 events are; that might help you find the problem. If you need to, you can clear the log through the UI or manually truncate the table using SSMS.
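As a sketch, something like the following groups the log by source and code so the noisiest events float to the top (CMS_EventLog and the column names here are from memory and may differ by Kentico version):

-- Which event types/sources/codes make up the bulk of the 2M rows?
SELECT TOP (100) EventType, Source, EventCode, COUNT(*) AS Occurrences
FROM CMS_EventLog
GROUP BY EventType, Source, EventCode
ORDER BY Occurrences DESC;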

Related

Azure - never ending Full Backup Uploading in Database Migration Service

I have been migrating some databases from a SQL Server to a SQL Managed Instance. 13 of the 14 DBs have been restored successfully. Only one remains, the biggest one at almost 600 GB. It has been continuously uploading the initial full backup for more than a week and it is still running.
It is a big database, but it has been a long time and I thought it would have finished by now. For this reason I have been trying some cmd/az commands, but I don't get anything more than a running status.
The strange thing is that I can't see the DB (in recovery mode) in SQL Server Management Studio, and the file has not been created yet in the container of the Storage Account. All the other databases appear in SSMS and in the storage account.
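For what it's worth, when a native restore is actually running on the target instance it usually shows up in the standard DMVs even before the database appears in Object Explorer. A sketch (a DMS-managed restore may not always surface here):

-- Look for an in-flight RESTORE on the target Managed Instance
SELECT session_id, command, percent_complete, start_time,
       estimated_completion_time / 60000 AS estimated_minutes_remaining
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%';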
I had around 75 GB more than the total size of the databases in the Storage Account, so I guess that was not the issue. In any case, I added 500 GB more, but still no results.
Is it possible to stop the task and restart it to see if this helps? Obviously I would not like to upload all databases again if possible.
Could you please help?
Thank you!
As explained in the comments before, the best options for the migration of old SQL Servers in my case were:
Regularly check the CPU and network performance of the source server.
When you configure your SQL MI, use at least double the total size of the full DB backups as storage.
Finally, if you have big DBs (in my case more than 400 GB), create separate activities* for the small ones and the big ones. This also helps if anything goes wrong with a big DB: you won't need to upload all of them again.
*NOTE: I had some issues when I had more than 2 activities: some of them stayed in "Queued" status and after a day still had not run, even when the other activities had already completed. To fix this, I had to delete all the activities and create the remaining one again.
Have a good day.
I would recommend opening a case with Support to make sure there is no patching or failover happening on the SQL MI during the migration.
I have seen this happen before where the restore is going through for a VLDB and then patching on SQL MI causes it to restart restoring again.
Hopefully this will help

Azure Websites automated and manual backups are not created

Whilst accepting that Backups in Windows Azure Websites are a preview feature, I can't seem to get them working at all. My site is approximately 3GB and on the standard tier. The settings are configured to move to a Geo-Redundant storage account with no other containers. There is no database selected, I'm only backing up the files.
In the Admin Portal, if I use the manual Backup Now button, a 0-byte file is created within the designated storage account, dated 01/01/0001 00:00:00. However, even after several days, it is not replaced with the 'actual' file.
If I use the automated backup scheduler, nothing happens at all - no errors, no 0 byte files.
Can anyone shed any light on this please?
The backup/restore feature is still in preview mode and officially supports only 2 GB of data. From the error message you posted ("backup is currently in progress") it seems you probably hit a bug that was fixed last week (the result of that bug was that some lingering backups blocked subsequent backups).
Please try it again, you should be able to invoke it now. If you find another error message in operational logs, feel free to post it here (just leave the RequestId in it unscrambled - we can correlate using that) and we can take a look.
However, as I mentioned in the beginning, more than 2 GBs are not fully supported yet (you might not be able to do e.g. roundtrip with your data - backup and then restore).
Thanks,
Petr

SQL query takes time

Hi, my application has been running on my production server perfectly. I updated the application about 2 days ago and since then I have been experiencing a performance-related issue. When you click a certain button, the query behind it takes a minute or more to pull out the result, so my application shows a timeout error. The same application runs fine locally.
I don't think it's a query optimization issue, since it's a simple select query with a join between 2 tables and only pulls some 40-50 records.
I am using a SQL Server 2012 database. Is there any setting that needs to be changed on it?
Could be anything dude, without having all the information: e.g. a recently degraded disk on the DB server, fragmented indexes, inefficient joins, an inefficient query...
Here are some general tips...
Use Performance Monitor (PerfMon) to capture and review any long-running queries
DBCC FREEPROCCACHE
http://msdn.microsoft.com/en-AU/library/ms174283.aspx
DBCC INDEXDEFRAG
http://msdn.microsoft.com/en-au/library/ms177571.aspx
or, more intrusive, rebuild indexes with DBCC DBREINDEX
http://msdn.microsoft.com/en-us/library/ms181671.aspx
Review the health of the database server, in particular the hard drives where the data and log files reside. Also check server CPU usage.
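A rough T-SQL sketch of the suggestions above (dbo.YourTable is a placeholder; note that ALTER INDEX is the current replacement for DBCC INDEXDEFRAG / DBCC DBREINDEX, which are deprecated):

-- 1. Find the slow statement: top queries by average elapsed time since the last restart
SELECT TOP (10)
       qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
                   ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_ms DESC;

-- 2. Check fragmentation on the indexes behind that query
SELECT OBJECT_NAME(ips.object_id) AS table_name, i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- 3. Rebuild (or REORGANIZE for a lighter touch), then optionally clear the plan cache
ALTER INDEX ALL ON dbo.YourTable REBUILD;
DBCC FREEPROCCACHE;  -- flushes all cached plans; use with care on a busy production server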

How to limit the size of Azure Table Storage for logs?

Is it possible to limit the size of an Azure Table Storage table? I'm using it for storing logs. Also, how can I make it so that when the limit is reached, the old entries are deleted to make space for new ones? Something like capped collections in MongoDB or round-robin databases?
Any help would be greatly appreciated. Thanks in advance!
Somewhat remarkably: no, there's no way (currently) to do this that I'm aware of.
We had the same situation, and we now use the Cerebrata Diagnostics Manager (http://cerebrata.com/Products/AzureDiagnosticsManager/) to purge them periodically.
It is also possible to explicitly drop the WAD* tables, but you may see issues if you have an instance still running when you do this. From http://social.msdn.microsoft.com/Forums/en-AU/windowsazuretroubleshooting/thread/3329834a-ddae-4180-b787-ceb7aee16e83:
@Sam: I would be careful about deleting the table. Deleting the WAD* table is a viable option if you don't have too much data in it. What happens when you delete a table is that it is not deleted at that very moment; it is marked for deletion and some background process actually deletes it. If you (or the diagnostics process) try to create the same table, you would get a "Table is being deleted" error.
You can also use Visual Studio to purge logs from a given time: the above link includes a way to do that. I have a feeling that a PowerShell script could also be written to do this.

Creating a sub site in SharePoint takes a very long time

I am working on a MOSS 2007 project and have customized many parts of it. There is a problem on the production server where it takes a very long time (more than 15 minutes, and it sometimes fails due to timeouts) to create a sub site, even with the built-in site templates. On the development server, it only takes 1 to 2 minutes.
Both servers have the same configuration: 8 CPU cores and 8 GB of RAM. Both use separate database servers with the same configuration. The content DB size is around 100 GB, and there are more than a hundred subsites.
What could be the reason it takes so much time on the production server? Is there any configuration or anything else I need to take care of?
Update:
So today I had the chance to check the environment with my clients, but site creation was fast, even though they said they hadn't changed any configuration on the server.
I also used that chance to examine the database. The disk fragmentation was quite high at 49%, so I suggested they run a defrag. I also asked for the database file growth to be increased to 100 MB, up from the default 1 MB.
So my suspicion is that some processes were previously running heavily on the server, and that's why it took so much time.
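For reference, the file growth change itself is a one-liner in T-SQL; the database and logical file names below are placeholders (sp_helpfile shows the real logical names):

-- Grow the content database data file in 100 MB increments instead of the 1 MB default
ALTER DATABASE [WSS_Content]
MODIFY FILE (NAME = N'WSS_Content_Data', FILEGROWTH = 100MB);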
Update 2:
Yesterday my client reported that site creation was slow again, so I went to check it. When I checked the DB, I found that instead of the reported 100 GB, the content DB size is only around 30 GB, so it is still far below the recommended size.
One thing that got my attention is that the site collection recycle bin was holding almost 5 million items. Whenever I tried to browse the site collection recycle bin, it took a long time to open and the whole site collection became inaccessible.
Since the web application settings are at the default (30 days before clean-up, and 50% size for the second stage recycle bin), is this normal, or is this a potential problem as well?
Actually, there is also another web application using the same database server with a 100 GB content DB and it is always fast, while the one with 30 GB is slow. Both have the same setup, only different data.
What should I check next?
Yes, it's normal out of the box if you haven't turned the second stage recycle bin off or set a site quota. If a site quota has not been set, then the growth of the second stage recycle bin is not limited...
The second stage recycle bin is by default limited to 50% of the site quota; in other words, if you have a site quota of 100 GB you would have a second stage recycle bin of 50 GB. If a site quota has not been set, there are no growth limitations...
I second everything Nat has said and emphasize splitting the content database. There are instructions on how to do this, provided you have multiple site collections and not a single massive one.
Also check that your SharePoint databases are in good shape. Have you tried DBCC CHECKDB? Do you have SQL Server maintenance plans configured to reindex and reduce fragmentation? Read these resources on TechNet (particularly the database maintenance article) for details.
Finally, see if there is anything more you can do to isolate the SQL Server as the problem. Are there any other applications with databases on the same SQL Server and are they having problems? Are you running performance monitoring on the SQL Server or SharePoint servers that show any bottlenecks?
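A minimal example of the integrity check mentioned above; the database name is a placeholder, and it is best run in a maintenance window since CHECKDB is I/O heavy:

-- Integrity check on the content database; report errors only
DBCC CHECKDB (N'WSS_Content') WITH NO_INFOMSGS, ALL_ERRORMSGS;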
Back up the production database, restore it to dev, and attach it to your dev SharePoint server.
Try to create a site. If it does not take forever to create the site there, you can assume the problem lies with the production environment rather than the database content.
Despite that, at 100 GB you are running up to the recommended limit for a content database and should be planning to put content into more than one. You will know why when you try to back up the database. Searching should also be starting to take a good long time by now.
So long term you are going to have to plan on splitting your websites out into different content databases.
--Responses--
Yeah, database size is all just about SQL Server handling it. 100 GB is just the "any more than this and it starts to be a pain" rule of thumb. Full search crawls will also start to take a while.
Given that you do not have access to the production database and that creating a sub-site is primarily a database operation, there is nothing you can really do to figure out what the issue is.
You could try creating a subsite while doing a trace of the Dev database and look at the tables those commands reference to see if there is a smoking gun, but without production access you are really hampered.
Does the production system serve pages and documents at a reasonable speed?
See if you can start getting some stats from the database during the creation to find out what work is being done. SQL Server has some great tools for that now.
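As a starting point, a sketch along these lines run against the content database server while the sub site is being created will show what is actually executing and what it is waiting on:

-- What is running right now, how long it has been running, and what it is waiting on
SELECT r.session_id, r.status, r.wait_type, r.wait_time,
       r.blocking_session_id, r.cpu_time, r.total_elapsed_time,
       st.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.session_id <> @@SPID
ORDER BY r.total_elapsed_time DESC;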
