SQL Data Sync fails and no logs are available - Azure

I have two SQL Data Sync groups set up.
The first does a one-way sync from Azure to the local DB and syncs only one table.
The second does a one-way sync from the local DB to Azure and syncs two tables.
After setting them up I got a few sync errors, but I was able to figure out what was wrong by checking the logs and fixing the problems. It all ran fine for a while, and now it has broken down.
I get a sync error every time a sync is attempted. This time, however, I can't fix anything because I have no visibility into what's wrong: when I click on the Log tab in the Azure Management Portal it just stays blank and no details come up.
I click [Logs], the wait spinner comes up, and then the log view is simply empty.
Another detail worth mentioning is that the error only occurs for the second sync group.
Here's what I've attempted so far:
I poked around the Azure Management Portal to try to find another way of looking at the logs (something like FTP access to a log file) but couldn't find anything like that.
I refreshed the schema in the sync rules tab.
Used the SQL Data Sync Agent tool on premises to do the ping operation. Restarted the SQL Data Sync Windows service.
Made sure I have the latest SQL Data Sync Agent installed.
I'm wondering what my options are from here.
I'm close to trying to delete and recreate the sync group, but I'm not comfortable doing that because I know some SQL tables get created in the local DB to support the sync. If I delete the sync group, would those helper tables be dropped automatically, or would I have to drop them manually? If I do drop them, that will obviously break the other sync group, which works fine.
Any advice is appreciated.
Thanks

You can check the Event Log entries on the box where you installed the SQL Data Sync Agent:
Event Viewer -> Applications and Services Logs -> Sql Azure Data Sync Preview
or Event Viewer -> Applications and Services Logs -> Data Sync Service
Or you may also try turning on verbose logging.
Open LocalAgentHost.exe.config in Notepad. This file should be present in your installation directory.
a) Uncomment the section that is currently commented out:
<!--
<switches>
  <add name="SyncAgentTracer" value="4" />
</switches>
<trace autoflush="true" indentsize="4">
  <listeners>
    <add name="myListener" type="Microsoft.SqlAzureDataSync.ClientLogging.DSSClientTraceListener, Microsoft.SqlAzureDataSync.ClientLogging, Version=2.0.0.0" initializeData="DSSAgentOutput.log" />
    <remove name="Default" />
  </listeners>
</trace>
-->
b) Stop and restart the SQL Azure Data Sync Preview Windows service.
You will then find the detailed logs in files named DSSAgentOutput*.log.
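If you'd rather dump those Event Log entries from code instead of clicking through Event Viewer, something like the sketch below works. The channel name is an assumption: use whichever log name Event Viewer actually shows on your box under Applications and Services Logs.

using System;
using System.Diagnostics.Eventing.Reader;

// Minimal sketch: read recent entries from the Data Sync agent's event log.
// "Sql Azure Data Sync Preview" is an assumption; substitute the log name
// you see in Event Viewer.
var query = new EventLogQuery("Sql Azure Data Sync Preview", PathType.LogName)
{
    ReverseDirection = true   // newest entries first
};

using var reader = new EventLogReader(query);
for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
{
    Console.WriteLine($"{record.TimeCreated}  {record.LevelDisplayName}: {record.FormatDescription()}");
}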
If you delete the sync group while its tables are also part of other sync groups (or you have other sync groups using the same database), deprovisioning will not remove everything; it only removes that particular sync group's definition.

Related

Upload 1 GB file through Logic App using SFTP-SSH to Azure file share

I am using a Logic App to upload a 1 GB file as below:
Trigger - When files are added or modified (properties only)
Action 1 - Get file content
Action 2 - Create file (Azure file share)
Up to about 35 MB, all triggers and actions work fine. Once the file uploaded to SFTP crosses 40 MB, the SFTP-SSH trigger and the 'Get file content' action still work fine, but when the workflow moves to the second action, 'Create file', it fails with the error 'The specified resource may be in use by an SMB client'. In the Azure file share storage account I can see a filename.partial.lock being created. I modified the access policy as well, but the issue persists.
Logic Apps is not designed to upload or download large amounts of data from a source/destination; it's a workflow solution that you design to meet your business need. However, you can still use the chunking functionality in Logic Apps to upload or download large files.
Please refer to:
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-handle-large-messages#set-up-chunking
To upload large files, make sure 'Allow chunking' is enabled.
From your description, this is most likely a SharingViolation; you can check the error codes here.
The official doc describes two scenarios that produce a sharing violation error:
Sharing Violation Due to File Access
Client A opens the file with FileAccess.Write and FileShare.Read (denies subsequent Write/Delete while open).
Client B then opens the file with FileAccess.Write with FileShare.Write (denies subsequent Read/Delete while open).
Result: Client B encounters a sharing violation since it specified a file access that is denied by the share mode specified previously by Client A.
Sharing Violation Due to Share Mode
Client A opens the file with FileAccess.Write and FileShare.Write (denies subsequent Read/Delete while open).
Client B then opens the file with FileAccess.Write with FileShare.Read (denies subsequent Write/Delete while open).
Result: Client B encounters a sharing violation since it specified a share mode that denies write access to a file that is still open for write access.
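For reference, here's a small self-contained C# repro of the second case (a share-mode conflict); the file name is just an example.

using System;
using System.IO;

var path = "shared.dat";

// Client A: opens for write and only allows other writers (FileShare.Write),
// which denies subsequent Read/Delete while the handle is open.
using var clientA = new FileStream(path, FileMode.OpenOrCreate, FileAccess.Write, FileShare.Write);

// Client B: asks for write access but only shares reads (FileShare.Read).
// B's share mode denies write access to a file A still has open for writing,
// so the open fails with a sharing violation (surfaced as an IOException).
try
{
    using var clientB = new FileStream(path, FileMode.Open, FileAccess.Write, FileShare.Read);
}
catch (IOException ex)
{
    Console.WriteLine($"Sharing violation: {ex.Message}");
}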
These are the scenarios you need to consider. Another option is to use the REST API to upload the file and enable 'Allow chunking' on the HTTP action.
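Outside of Logic Apps, the same chunked/ranged-upload idea can also be done directly against the file share. Here's a rough sketch using the Azure.Storage.Files.Shares SDK rather than the HTTP action; the connection string, share name and file names are placeholders, and the share is assumed to already exist.

using System;
using System.IO;
using Azure;
using Azure.Storage.Files.Shares;

// Create the file at its final size, then upload it in ranges (chunks)
// so no single request has to carry the whole 1 GB payload.
var connectionString = "<storage-connection-string>";   // placeholder
var share = new ShareClient(connectionString, "myshare");
var file = share.GetRootDirectoryClient().GetFileClient("bigfile.bin");

using var source = File.OpenRead(@"C:\temp\bigfile.bin");   // example local source
await file.CreateAsync(source.Length);                      // reserve the full size up front

const int chunkSize = 4 * 1024 * 1024;                      // each range can be at most 4 MiB
var buffer = new byte[chunkSize];
long offset = 0;
int read;
while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
{
    using var chunk = new MemoryStream(buffer, 0, read);
    await file.UploadRangeAsync(new HttpRange(offset, read), chunk);
    offset += read;
}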
I think one more way to resolve the issue is a redesign that scales better:
Use a Logic App to get a notification when a file is added or modified in the SFTP/FTP location.
Once the file is added, read the file path for that file.
Create a Service Bus message and send the file path as the message content.
Have an Azure Function with a Service Bus queue trigger listen for those messages (created in the previous step); a rough sketch of the function follows this list.
The Azure Function reads the file from SFTP in chunks using the file path.
This way you can read or write files of more than 30 GB.
This solution is more scalable, since Azure Functions auto-scale on demand.
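A rough sketch of that function, as an in-process C# Azure Function with a Service Bus trigger, SSH.NET for the SFTP read and Azure.Storage.Blobs for the destination. The queue name, connection setting names, host and credentials are placeholders, not anything prescribed above.

using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Renci.SshNet;

public static class SftpToBlobFunction
{
    // Triggered by the Service Bus message whose body is the SFTP file path.
    [FunctionName("SftpToBlob")]
    public static async Task Run(
        [ServiceBusTrigger("file-paths", Connection = "ServiceBusConnection")] string filePath,
        ILogger log)
    {
        log.LogInformation("Copying {path} from SFTP to blob storage", filePath);

        using var sftp = new SftpClient("sftp.example.com", "user", "password"); // placeholders
        sftp.Connect();

        // OpenRead streams the remote file, so only small buffers are held in
        // memory at a time; this is what lets very large files go through.
        using Stream remote = sftp.OpenRead(filePath);

        var blob = new BlobClient("<storage-connection-string>", "archive", Path.GetFileName(filePath));
        await blob.UploadAsync(remote, overwrite: true);

        sftp.Disconnect();
    }
}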
Thank you all. The reason for the error 'The specified resource may be in use by an SMB client' was that the file share was mounted on two Linux virtual machines. We unmounted the Linux VMs and did a fresh single mount, and that fixed the error.
MSFT has confirmed in our discussion that 'Create file' (Azure file share) has a limitation of 100 or 300 MB. It only worked with SFTP because the data arrives in chunks. MSFT is working on returning a proper error message when the file size is beyond that limit. Below is a quote from the MSFT email:
"Thanks for the details , actually the Product team confirm to me that your flow is working by luck , it should not work with this sizes an they are working to implement the limits correctly to prevent files larger than maximum size which is maybe 300 or 100 “I am not yet sure ”
And this strange behavior is only happing when we are reading the content from SFTP chucked.
"

Archiving Azure Search Service

I need suggestions on archiving unused data from the search service and reloading it when needed (the reload will be done later).
The initial design draft looks like this:
Find the keys in the search service, based on some conditions (e.g. inactive status, age), for the documents that need to be archived.
Run an archiver job (need suggestions here; it could be a WebJob or a Function App).
Fetch the data, insert it into blob storage, and delete it from the search service.
Ideally the job would run in a pool and be asynchronous.
There's no right/wrong answer for this question. What you need to do is perform batch queries (up to 1,000 docs per request) and schedule the job to archive past data (e.g. run an Azure Function on a schedule that searches for docs whose createdDate is older than your cutoff).
Then persist that data somewhere (Cosmos DB, or blobs in a storage account). When you need to load it back, treat it as a new insert, so it should follow your current insert process.
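As a rough illustration of that flow (not an official pattern), here's a sketch using the Azure.Search.Documents and Azure.Storage.Blobs SDKs. The endpoint, index name, key field name ("id"), container name and the createdDate filter field are all assumptions you would replace with your own.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;
using Azure.Storage.Blobs;

public static class SearchArchiver
{
    public static async Task ArchiveOldDocumentsAsync()
    {
        var search = new SearchClient(new Uri("https://<service>.search.windows.net"),
                                      "my-index", new AzureKeyCredential("<admin-key>"));
        var container = new BlobContainerClient("<storage-connection-string>", "search-archive");

        // Batch query: up to 1,000 docs per request, filtered to "old" documents.
        var options = new SearchOptions
        {
            Filter = "createdDate lt 2023-01-01T00:00:00Z",   // example cutoff
            Size = 1000
        };

        var docs = new List<SearchDocument>();
        var keys = new List<string>();
        SearchResults<SearchDocument> results = (await search.SearchAsync<SearchDocument>("*", options)).Value;
        await foreach (SearchResult<SearchDocument> result in results.GetResultsAsync())
        {
            docs.Add(result.Document);
            keys.Add(result.Document["id"].ToString());        // "id" = key field, an assumption
        }

        if (docs.Count == 0) return;

        // Persist the batch to blob storage before touching the index.
        var json = JsonSerializer.Serialize(docs);
        using var payload = new MemoryStream(Encoding.UTF8.GetBytes(json));
        await container.UploadBlobAsync($"archive-{DateTime.UtcNow:yyyyMMddHHmmss}.json", payload);

        // Only then delete the archived docs from the index.
        await search.DeleteDocumentsAsync("id", keys);
    }
}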
You can also take a look at this tool, which helps copy data from your index pretty quickly:
https://github.com/liamca/azure-search-backup-restore

Is there a way to remove a blob storage credential from an Azure database to allow a local bacpac restore?

I am trying to export a bacpac from Azure and restore it locally on SQL Server 2016 Express. When I try to restore it, though, I get the following errors from the Import Data-tier Application wizard in SSMS:
Could not import package.
Warning SQL72012: The object [TestBacPacDB_Data] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.
Warning SQL72012: The object [TestBacPacDB_Log] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.
Error SQL72014: .Net SqlClient Data Provider: Msg 33161, Level 15, State 1, Line 1 Database master keys without password are not supported in this version of SQL Server.
Error SQL72045: Script execution error. The executed script:
CREATE MASTER KEY;
After some digging I found that a credential and a master key have been added to the database. The credential name references a blob storage container, so I'm thinking auditing was set up at some point with the container as an external resource, or something similar.
I would like to delete this credential so I can restore the database locally, but dropping it throws an error stating that it is in use. I've tried disabling the logging in Azure, but the credential still can't be deleted.
I know it sometimes takes time for Azure to release resources, so maybe that's the cause, but I was wondering if anyone else has had a similar problem.
I'm trying to avoid having to set a password for the master key, since I don't care about the credential locally, as in this question: SSMS 2016 Error Importing Azure SQL v12 bacpac: master keys without password not supported
Ultimately, we ended up creating a master key. To restore our databases locally this way, we first create the database by hand in SSMS and then add a master key to it. This allows the data import to work correctly.
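For what it's worth, here is a minimal sketch of that "create the database by hand, then add a master key" step done from C# with Microsoft.Data.SqlClient; running the same two statements in SSMS is equally fine, and the server, database name and password below are just examples.

using Microsoft.Data.SqlClient;

var connectionString = @"Server=localhost\SQLEXPRESS;Integrated Security=true;TrustServerCertificate=true";

using var connection = new SqlConnection(connectionString);
connection.Open();

// 1. Create the empty target database by hand.
using (var createDb = new SqlCommand("CREATE DATABASE [TestBacPacDB]", connection))
{
    createDb.ExecuteNonQuery();
}

// 2. Add a master key to it. On a local SQL Server instance a master key
//    must have a password, which is why the bacpac's bare "CREATE MASTER KEY;" fails.
using (var createKey = new SqlCommand(
    "USE [TestBacPacDB]; CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Example#Password';",
    connection))
{
    createKey.ExecuteNonQuery();
}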
I had exactly the same problem and tried a myriad of potential fixes found all over the place. Most related to re-keying the system, making a copy first, etc., and absolutely nothing worked.
As insane as it is, the only way I could finally get around it was to manually edit the bacpac's internal structure:
Take the bacpac from the original source or a copy, anywhere.
Rename it to .zip and uncompress the folder structure.
Edit model.xml, search for anything to do with "master key" and/or "shared access signature", and delete the corresponding nodes.
Calculate the SHA-256 checksum of the now-modified model.xml (see the sketch after this list).
Replace the checksum at the bottom of Origin.xml.
Re-zip all the files and rename the archive back to xxx.bacpac.
Import it onto the local system as you normally would.
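For the checksum step, a minimal C# sketch (the path is an example; PowerShell's Get-FileHash gives the same result):

using System;
using System.IO;
using System.Security.Cryptography;

// Compute the SHA-256 checksum of the edited model.xml; the resulting hex
// string replaces the existing checksum value near the bottom of Origin.xml.
using var sha = SHA256.Create();
using var modelXml = File.OpenRead(@"C:\temp\bacpac\model.xml");   // example path
byte[] hash = sha.ComputeHash(modelXml);
Console.WriteLine(BitConverter.ToString(hash).Replace("-", ""));   // uppercase hex, no separators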

Retrieving to-be-pushed entries in IMobileServiceSyncTable while offline

Our mobile client app uses IMobileServiceSyncTable for data storage and for handling syncing between the client and the server.
A behavior we've seen is that, by default, you can't retrieve an entry added to the table while the client is offline. Only after the client table is synced with the server (we do an explicit PushAsync, then a PullAsync) can those entries be retrieved.
Does anyone know of a way to change this behavior so that the mobile client can retrieve entries added while offline?
Our current solution:
Check whether the new entry was pushed to the server.
If not, save the entry to a separate local table.
When showing the list for the table, pull from both tables: the sync table and the regular local table.
Compare the entries from the regular local table with the entries from the sync table for duplicates.
Remove the duplicates.
Join the lists, order them, and show the result to the user.
Thanks!
This should definitely not be happening (and it doesn't in my simple tests). I suspect there is a problem with the Id field; perhaps you are generating it and there are conflicts?
If you can open a GitHub issue on https://github.com/azure/azure-mobile-apps-net-client/issues and share some of your code (via a test repository), we can debug it further.
One idea: rather than letting the server generate an Id, generate one on the client with Guid.NewGuid().ToString(). The server will then accept it as a new Id.
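A minimal sketch of that idea; TodoItem is an illustrative model class, and the local store is assumed to be initialized elsewhere as usual.

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.Sync;

public class TodoItem          // illustrative model; adjust to your schema
{
    public string Id { get; set; }
    public string Text { get; set; }
}

public static class OfflineInsertExample
{
    public static async Task RunAsync(MobileServiceClient client)
    {
        // Assumes the SQLite store was already defined and
        // client.SyncContext.InitializeAsync(store) has been called.
        IMobileServiceSyncTable<TodoItem> table = client.GetSyncTable<TodoItem>();

        var item = new TodoItem
        {
            Id = Guid.NewGuid().ToString(),   // client-generated Id; the server accepts it as new
            Text = "Created while offline"
        };

        await table.InsertAsync(item);        // lands in the local store immediately

        // The entry is queryable from the sync table right away, before any PushAsync/PullAsync.
        var pending = await table.Where(t => t.Id == item.Id).ToListAsync();
    }
}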

Get schema error when setting up Data Sync in Azure

I finished setting up the Azure hub and installing the client agent and database.
Then I tried to define the dataset.
Whichever database I chose, when I clicked 'Get latest schema' I got an error:
The get schema request is either taking a long time or has failed.
When I checked the log, it said:
Getting schema information for the database failed with the exception "There is already an open DataReader associated with this Command which must be closed first. For more information, provide tracing id ‘xxxx’ to customer support."
Any idea what's causing this?
The current release has a maximum of 500 tables per sync group, and the drop-down for the table list is restricted to this same limit.
Here's a quick workaround:
Script the tables you want to sync.
Create a new temporary database and run the script to create the tables you want to sync.
Register and add the new temporary database as a member of the sync group.
Use the new temporary database to pick the tables you want to sync.
Add all the other databases you want to sync with (on-premises databases and the hub database).
Once provisioning is done, remove the temporary database from the sync group.
