I am exploring the SQL Server 2016 Stretch Database features. When we execute the DBCC CHECKDB command on a Stretch-enabled database, does it also verify the remote copy of the database?
I ran this command on a Stretch-enabled database under the following two scenarios:
Azure connectivity is present.
I restored the database so that the connectivity to Azure is broken.
I was surprised to find that DBCC reported no errors in either of the above scenarios.
I couldn't find any MSDN article on DBCC for Stretch databases. Please provide information on DBCC usage with Stretch databases.
DBCC CHECKDB is not pushed to the remote DB for Stretch. There are already processes in place that do the equivalent of physical_only as part of the Azure operations, so it would be burning additional CPU and storage on redundant work. The storage consumption would certainly run up additional charges, though probably not a huge amount. Compute might also run up additional charges, depending on whether you need to bump the performance level to support the operation without affecting other workloads.
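For reference, a minimal sketch of what you can still check locally (the database name below is hypothetical):

-- Checks only the local portion of the Stretch-enabled database;
-- the rows already migrated to Azure are not validated by this command.
DBCC CHECKDB (N'StretchDemoDb') WITH NO_INFOMSGS;

-- Confirm the database is Stretch-enabled and see its remote counterpart.
SELECT name, is_remote_data_archive_enabled
FROM sys.databases
WHERE name = N'StretchDemoDb';

SELECT remote_database_name
FROM sys.remote_data_archive_databases;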
Suggest filing a request on https://connect.microsoft.com/SQLServer/feedback/ to explicitly document the recommended practice when it comes to SQL services in Azure (Stretch, DB and DW).
I have been migrating some databases from a SQL Server instance to a SQL Managed Instance. 13 of 14 DBs have been restored successfully. Only one remains, the biggest one at almost 600 GB. It has been continuously uploading the initial full backup for more than a week and it is still running.
It is a big database, but it has been a long time and I thought it would have finished by now. For this reason I have tried some cmd/az commands, but I don't get anything more than a running status.
The strange thing is that I can't see the DB (in recovery mode) in SQL Server Management Studio, and the file has not been created yet in the container of the storage account. All the other databases appear in SSMS and in the storage account.
The storage account had around 75 GB more than the total size of the databases, so I guess space was not the issue. In any case, I added 500 GB more, but still no results.
Is it possible to stop the task and restart it to see if that helps? Obviously I would prefer not to upload all the databases again.
Could you please help?
Thank you!
As explained in the comments above, the best approaches for migrating old SQL Server instances in my case were:
Regularly check the CPU and network performance of the source server.
When you configure your SQL MI, provision at least double the total size of the full DB backups as storage.
Finally, if you have big DBs (in my case more than 400 GB), create separate activities* for the small ones and the big ones. This also helps if any errors occur with the big DBs: you won't need to upload all of them again.
*NOTE: I had some issues when I had more than two activities: some of them stayed in "Queued" status and still had not run after a day, even when the other activities had already completed. To fix this, I had to delete all the activities and create the remaining one again.
Have a good day.
I would recommend opening a case with support to make sure there is no patching or failover happening on the SQL MI during the migration.
I have seen this happen before: the restore of a VLDB is in progress, and then patching on the SQL MI causes the restore to start over.
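One hedged way to confirm whether a restore is actually in flight on the Managed Instance side is to look at the active requests (a generic check, not something specific to the migration service):

-- Run on the target Managed Instance: lists any RESTORE that is currently
-- executing, along with its progress and start time.
SELECT session_id, command, percent_complete, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%';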
Hopefully this will help
We are trying to restore one database backup, stored in Azure, to multiple SQL instances at the same time and are running into issues, with error descriptions like "Desc=Open devices!" and "Desc=Create Memory. ErrorCode=(5)Access is denied.".
Is this possible? Or do the restores need to be run sequentially?
SQL Azure has logic to do various operations online/automatically for you (copy databases, restore databases, backups, upgrades, etc.). Some operations are limited to 25 in parallel. Each operation requires IOs, so limits are in place because the machine does not have infinite IOPS. (Those limits may change a bit over time as Microsoft improves the service, gets newer hardware, etc.)
You can restore N databases in parallel from a database backup, but you are still subject to the IOPS limit. You can try a larger reservation size for the source and target during the restore operations to get more IOPS and reduce the time the operations take.
You can also create a bacpac of the database you want to restore and mix restores from backups with restores from bacpacs in parallel, to work around the limits without adding IOPS and increasing costs.
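If the targets are SQL Server instances restoring directly from blob storage (as the error messages suggest), a rough per-instance sketch, with placeholder storage account, SAS secret, database name and file paths, would be:

-- Placeholder storage account/container; each instance needs this credential.
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<SAS token without the leading ?>';

-- Placeholder database name and file paths.
RESTORE DATABASE MyDb
FROM URL = 'https://mystorageacct.blob.core.windows.net/backups/MyDb.bak'
WITH MOVE 'MyDb' TO 'D:\Data\MyDb.mdf',
     MOVE 'MyDb_log' TO 'D:\Data\MyDb_log.ldf',
     STATS = 5;

If concurrent reads of the same blob keep hitting device errors, restoring each instance from its own copy of the backup blob, or staggering the restores, is a workaround worth trying.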
What I am trying to achieve is similar to what you can see in the mongo-azure repo. Specifically, I want to write the Azure Diagnostics configuration file in such a way that I can collect all instances of a performance counter, for example
\Processor(*)\% Processor Time
and it doesn't seem to be working - no data is visible in the table in my storage account.
Is it achievable at all with configuration, and if so, how?
UPD: We were able to get this working for a simple single VM (so it is possible!), but for some reason it still doesn't work for VMs in a VMSS where a Service Fabric cluster is running.
UPD #2: We upgraded to VS 2015 tools 1.5 and now it magically works. I am not really sure whether that was the root cause or whether we had made a mistake somewhere else.
Is it achievable at all with configuration, and if so, how?
Based on the documentation here, it seems it is not possible. From this link:
Performance counters available for Microsoft Azure
Azure provides a subset of the performance counters available for Windows Server, IIS and the ASP.NET stack. The following table lists some of the performance counters of particular interest for Azure applications.
The table that follows only includes Processor(_Total).
This is possible. We do this in CloudMonix when monitoring VMSS and it is working.
How are you instrumenting Diagnostics and which tables are you looking at?
Specifying \LogicalDisk(*)\Available Megabytes yields all drives and their free space, for example.
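For what it's worth, the wildcard goes into the counterSpecifier of the diagnostics configuration. A rough fragment, written from memory of the WAD schema (so verify the element nesting and attribute names against your SDK version), would be:

<WadCfg>
  <DiagnosticMonitorConfiguration overallQuotaInMB="4096">
    <PerformanceCounters scheduledTransferPeriod="PT1M">
      <!-- (*) requests every instance of the counter, not just _Total -->
      <PerformanceCounterConfiguration counterSpecifier="\Processor(*)\% Processor Time" sampleRate="PT60S" />
      <PerformanceCounterConfiguration counterSpecifier="\LogicalDisk(*)\Available Megabytes" sampleRate="PT60S" />
    </PerformanceCounters>
  </DiagnosticMonitorConfiguration>
</WadCfg>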
I have recently started working with Firebird 2.1 on a Linux Redhawk 5.4.11 system. I am trying to create a monitoring script that gets kicked off via a cron job. However, I am running into a few issues and was hoping for some advice...
First off, I have read through most of the documentation that comes with Firebird and a lot of the documentation provided on their site. I tried the supplied gstat tool, but it didn't seem to give me the kind of information I was looking for. I then ran across the README.monitoring_tables file, which seemed to describe exactly what I wanted to monitor. Yet this is where I hit a snag in my progress...
After logging into the database via isql, I ran SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS; and got some numbers that seemed okay. However, upon running the command again, the data appeared to be stale because the numbers were not updating. I waited 1 minute, 5 minutes, 15 minutes, and the data was the same each time. Only once I logged off and back on and ran the command again did the data change. It appears that the data refreshes only on a relog, and I am not sure the data is correct even then.
My question now is: am I even doing this correctly? Are these queries truly monitoring my database, or just monitoring the command itself? Also, why does it take a relog to refresh the statistics? One thing I was worried about was inconsistency in the data: my system was running, yet each time I logged on the reads/writes were not increasing linearly; they would vary from 10k to 500 to 2k. Any advice or help would be appreciated!
When you query a monitoring table, a snapshot of the monitoring information is created so the contents of the monitoring tables are stable for the rest of the transaction. You need to commit and start a new transaction if you want fresh information. Firebird always uses a transaction (and isql implicitly starts a transaction if none was started explicitly).
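In isql that looks like the following; the COMMIT between the two queries is what lets the second one take a fresh snapshot:

SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;
COMMIT;
-- The next statement starts a new transaction, so it sees fresh statistics.
SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;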
This is also documented in doc/README.monitoring_tables (at least in the Firebird 2.5 version):
A snapshot is created the first time any of the monitoring tables is being selected from in the given transaction and it's preserved until the transaction ends, so multiple queries (e.g. master-detail ones) will always return the consistent view of the data. In other words, the monitoring tables always behave like a snapshot (aka consistency) transaction, even if the host transaction has been started with another isolation level. To refresh the snapshot, the current transaction should be finished and the monitoring tables should be queried in the new transaction context.
(emphasis mine)
Note that depending on your monitoring needs, you should also look at the trace functionality that was introduced in Firebird 2.5.
We are using SQL Azure for our application and need some input on how to handle queries that scan a lot of data for reporting. Our application is both read- and write-intensive, so we don't want the report queries to block the rest of the operations.
To avoid connection-pooling issues caused by long-running queries, we put the code that queries the DB for reporting onto a worker role. This still does not prevent the database from being hit with a bunch of read-only queries.
Is there something we are missing here? Could we set up a read-only replica that all the reporting calls hit?
Any suggestions would be greatly appreciated.
Have a look at SQL Azure Data Sync. It will allow you to incrementally update your reporting database.
Here are a couple of links to get you started:
http://msdn.microsoft.com/en-us/library/hh667301.aspx
http://social.technet.microsoft.com/wiki/contents/articles/1821.sql-data-sync-overview.aspx
I think it is still in CTP though.
How about this:
Create a separate connection string for reporting; for example, use a different Application Name.
For your reporting queries, use SET TRANSACTION ISOLATION LEVEL SNAPSHOT.
This should prevent your long-running queries from blocking your operational queries. It will also allow your reports to get a consistent read.
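A minimal sketch of such a reporting session, using a made-up table purely for illustration (snapshot isolation is enabled by default on SQL Azure databases):

-- Reporting connection: reads a transactionally consistent snapshot and
-- does not take shared locks that would block writers.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;
    -- Made-up example of a report query.
    SELECT OrderDate, COUNT(*) AS OrderCount
    FROM dbo.Orders
    GROUP BY OrderDate;
COMMIT TRANSACTION;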
Since you're talking about reporting I'm assuming you don't need real time data. In that case, you can consider creating a copy of your production database at a regular interval (every 12 hours for example).
In SQL Azure it's very easy to create a copy:
-- Execute on the master database.
-- Start copying.
CREATE DATABASE Database1B AS COPY OF Database1A;
Your reporting would happen on Database1B without impacting the actual production database (Database1A).
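While the copy is running, you can check its progress from the master database:

-- Execute on the master database while the copy is in progress.
SELECT database_id, start_date, percent_complete, error_desc
FROM sys.dm_database_copies;

Once it completes, Database1B is a transactionally consistent copy of Database1A as of the completion time.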
You say you have a lot of read-only queries... any possibility of caching them? (Perfect, since they are read-only.)
What reporting tool are you using? You could also output-cache the query results if needed.