Max text repl size equivalent query in Azure SQL Database - azure

We are trying to execute the statement below in Azure SQL Database. Is there any workaround for this, or an equivalent query for Azure SQL Database?
EXEC sp_configure 'show advanced options',1
RECONFIGURE WITH OVERRIDE
EXEC sp_configure 'max text repl size (B)', -1
RECONFIGURE WITH OVERRIDE
EXEC sp_configure 'show advanced options',0
RECONFIGURE WITH OVERRIDE
Can this be set or changed from the Azure portal?
Any help will be appreciated. Thanks a lot :)

The problem was that some of the data was getting truncated when we enabled CDC on the Azure SQL database. We wanted to execute the above T-SQL to alter the max replication size, but we couldn't because of restrictions on the product side. We implemented change tracking instead to work around the problem.
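For reference, enabling change tracking on an Azure SQL database and on the tables you care about is only a couple of statements; a minimal sketch, where the database name, table name and retention settings are placeholders rather than what we actually used:
-- turn on change tracking at the database level (retention values are illustrative)
ALTER DATABASE [MyDatabase]
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
-- then enable it per table; TRACK_COLUMNS_UPDATED also records which columns changed
ALTER TABLE dbo.MyTable
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);
Consumers can then query CHANGETABLE(CHANGES dbo.MyTable, @last_sync_version) to pick up rows changed since their last sync.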

Related

Azure Data Factory - Order of actions inside the Copy Activity

I have a tricky question about the "Copy Activity" in ADF. Assume the following scenario:
Source: an external API or a non-Azure database using a hosted integration runtime.
Sink: an Azure SQL Server database.
The "pre-copy Script" field has a command to delete some data from the sink table (why deleting is out of scope of the discussion).
When the pipeline runs, the connection with the source fails (due to a time-out, network issue, authentication, etc.)
The question is: will the pre-copy script run in this case? Or does the script only run after ADF has successfully connected to the source data store? I couldn't find any reference about it.
I could just simulate it and see what happens, but I'm hoping someone can save me the time. :)
Thanks in advance!
In my experience with Data Factory, the pre-copy script won't run.
As I understand it, you can think of the copy activity as a workflow: connect to source --> get data from source --> connect to sink --> run the pre-copy script --> write data to sink. Whichever step fails, Data Factory stops the run there.
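For what it's worth, the pre-copy script in a scenario like this is just a T-SQL statement that ADF runs against the sink before writing; a hypothetical example (table name and predicate are made up for illustration):
-- hypothetical pre-copy script: clear the slice that is about to be reloaded
DELETE FROM dbo.SinkTable
WHERE LoadDate = CAST(GETDATE() AS date);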

PDI slow loading into Azure databases

I have an Azure VM with Pentaho Data Integration installed. I'm trying to build some ETL that loads a dimensional model from the staging area, but when I start a transformation, the load speed of PDI into any Azure database is painfully slow.
Is it possible to have PDI working in the cloud with Azure databases? Is there some configuration step needed to achieve a reasonable loading speed?
PS:
VM and databases are in the same region
There is a firewall rule to allow port access
Reading speed is working just fine
PDI 8.1, using the Table Output step
I've been experiencing the same speed problem, but I'll share my workarounds.
First of all: download and install the latest JDBC driver that lets you connect to Azure SQL Database. The documentation links it here, but the way I do it is to keep it synced from here on GitHub; either of these will let you use the latest driver in PDI.
Second workaround: for large files, what I've found most powerful is the BCP utility integrated with PowerShell or a Linux batch script. It doesn't matter whether the files are local or in Azure Blob Storage, but you might need credentials for that.
Last but not least: use Azure Data Factory V2 to move and load files (if you're like me, you try to keep everything in PDI until you have to load it; the HTTP GET step will let you trigger an ADF pipeline).
Good luck and let me know if you get it.
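As a side note the answer above doesn't cover: if the files already sit in Azure Blob Storage, Azure SQL Database can also ingest them directly with BULK INSERT against an external data source. A rough sketch, assuming a database scoped credential and external data source have already been created (all names and paths here are placeholders):
-- load a CSV straight from blob storage into a staging table
BULK INSERT dbo.StagingTable
FROM 'extracts/dim_customer.csv'
WITH (DATA_SOURCE = 'MyBlobStorage', FORMAT = 'CSV', FIRSTROW = 2);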

Azure Truncate table

I recently spun up a SQL database on Windows Azure. I created a couple of tables and loaded some data. I then went into SQL Server Management Studio on my local machine and typed
Truncate table XXXX
And now it just sits on "Executing Query". Is TRUNCATE not supported by Azure?
Tony:
I think you are right. I used the suggestion here, and after resizing the DB all of the connections went away and I could truncate without a problem.
Thanks
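In case anyone wants to do that resize from T-SQL rather than the portal, it's roughly the statement below; the edition and service objective are illustrative, not a recommendation:
-- scale the database; existing connections are dropped when the operation completes
ALTER DATABASE [MyDb]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S1');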

How to shrink an Azure SQL Server DB (18 MB of data charged for 5 GB of server space already)

I have a problem with an Azure SQL DB.
A bacpac export of the DB is only 18 MB, but the charged DB size on the server already exceeds 5 GB.
Is there any way to see the actual size of the data?
Is there any way to move the DB to the simple recovery model?
Or is there any other way to shrink the log files?
Or should I just drop the database and restore it from a backup?
The problem was caused by fragmented indexes.
You can find good scripts for fixing that here:
http://blogs.msdn.com/b/dilkushp/archive/2013/07/28/fragmentation-in-sql-azure.aspx
After running the scripts (and waiting 24 h), the size of the DB went back to 300 MB.
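The general approach behind scripts like that is to check fragmentation with a DMV and rebuild the worst indexes; a minimal sketch, where the 30% threshold and the index/table names are placeholders:
-- find indexes with heavy fragmentation
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL;
-- rebuild each heavily fragmented index found above
ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD;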
The bacpac file is going to be significantly smaller than the DB, as it's a compressed version of the data. I believe it strips out things like index content and only stores index definitions, which are reindexed on restore, so one isn't indicative of the other.
For example, I have a database on SQL Azure configured as a 10 GB Premium DB, which is currently using 2.7 GB and BACPACs to about 300 MB.
What kind of database have you configured?
What edition, size and usage settings are you currently being shown?
** Edit ** The image wasn't loading, so here's the external link: http://i.snag.gy/JfsPk.jpg
The next thing to check is the size breakdown in the database by table/object.
Connect to your Azure environment with Management Studio and run the following query, which will give a per-table breakdown of the database with sizes in MB:
-- reserved pages are 8 KB each, so * 8.0 / 1024 converts the count to MB
SELECT o.name, SUM(ps.reserved_page_count) * 8.0 / 1024 AS size_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.objects AS o
  ON ps.object_id = o.object_id
GROUP BY o.name
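If you just want the overall used space rather than the per-object breakdown, the same DMV can simply be summed (a quick sketch; this is roughly the figure the portal reports as used space):
-- total reserved space in MB across all objects
SELECT SUM(reserved_page_count) * 8.0 / 1024 AS used_mb
FROM sys.dm_db_partition_stats;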

SharePoint disaster recovery

What are your disaster recovery plans for Windows SharePoint Services 3.0?
Currently we are backing up all databases (one content, admin, search and config) using SQL backup tools, and backing up the front-end server via Data Protector.
To test our backups, we use another server farm, restore the content database (following the procedure on TechNet) and create a new application that uses this database. We just have to redeploy our solutions on the newly created SharePoint application.
However, we have to change the database access credentials (on SQL Server): the user accounts used in production aren't the same as those used on our "test" farm.
In the end, we can restore our content database and access all our sites. Search doesn't work, but we're investigating.
Is this restore scenario reliable (as in supported by Microsoft)?
You can't really back up / restore both the config database and the search database:
Restoring the config database only works if your new farm has exactly the same server names.
When you restore the search database, the full-text index is not synchronized. However, this is not a problem, as you can just reindex.
As a result, I would say that yes, this is reliable for content. But take care of the following:
You may have to redo some configuration (AAM, managed paths...).
This does not include customizations; you want to keep a backup of your solutions.
Reliability is in the eye of the beholder. In this case, if your tests of the restore process are successful, then yes, it is reliable.
A number of my clients run SharePoint (both MOSS and WSS) in virtual environments; SQL Server is also virtualised and backed up both with SQL tools and with Volume Shadow Copy.
The advantage of a virtual environment is that downtime is only as long as it takes your virtual server host to boot the images.
If you are not using virtualisation, then remember to back up the transaction logs regularly, as this will make it easier to restore to a given point in the day. It also means that your transaction logs don't grow too big!
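As a concrete sketch of that last point (the database name and backup path are placeholders, and the database needs to be in the full recovery model), a scheduled log backup is just:
-- back up the transaction log so point-in-time restores stay possible and the log gets truncated
BACKUP LOG [WSS_Content]
TO DISK = N'D:\Backups\WSS_Content_log.trn';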
I prefer to use the stsadm -o backup command, 'for catastrophic backup' as it says in the help. This can be scheduled, but it requires some maintenance of the backup metadata XML file once you start running out of disk space and need to archive older backups. It has the advantage of carrying over timer jobs (usually) and other configuration, because, as Nico says, restoring the config database won't work in most situations.
To restore, you can use the user interface, which is nice, and you don't have to mess around with much else. I think it restores your solutions as well, but I haven't tested that extensively.
