Get Schema error when setting up Data Sync in Azure

I finished setting up the Azure SQL Data Sync hub, installing the Client Agent, and registering the database.
Then I went to define the dataset.
Whichever database I chose, clicking "Get Latest Schema" produced this error:
The get schema request is either taking a long time or has failed.
When I checked the log, it said:
Getting schema information for the database failed with the exception "There is already an open DataReader associated with this Command which must be closed first." For more information, provide tracing id 'xxxx' to customer support.
Any idea what causes this?

The current release has a maximum of 500 tables per sync group, and the drop-down for the tables list is restricted to the same limit.
Here's a quick workaround (a rough scripted sketch of the first two steps follows the list):
Script out the tables you want to sync.
Create a new temporary database and run the script to create those tables.
Register and add the new temporary database as a member of the sync group.
Use the new temporary database to pick the tables you want to sync.
Add all the other databases you want to sync (on-premises databases and the hub database).
Once provisioning is done, remove the temporary database from the sync group.
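If you'd rather script the first two steps than click through SSMS, a rough sketch is below. The server name, credentials, temporary database name (SyncSchemaTemp) and script file name are placeholders, and it assumes you have already generated a schema-only script of the tables you want to sync (SSMS: Tasks > Generate Scripts).

// Rough sketch (placeholders throughout): create a throwaway database that contains
// only the tables you want to sync, so the table picker stays under the 500-table limit.
using System;
using System.Data.SqlClient;
using System.IO;
using System.Text.RegularExpressions;

class CreateTempSyncDb
{
    static void Main()
    {
        string master = "Server=tcp:<yourserver>.database.windows.net;Database=master;User ID=<user>;Password=<password>;";
        using (var conn = new SqlConnection(master))
        {
            conn.Open();
            // On Azure SQL the new database may take a little while to become available after this returns.
            new SqlCommand("CREATE DATABASE SyncSchemaTemp", conn).ExecuteNonQuery();
        }

        string temp = "Server=tcp:<yourserver>.database.windows.net;Database=SyncSchemaTemp;User ID=<user>;Password=<password>;";
        using (var conn = new SqlConnection(temp))
        {
            conn.Open();
            // The schema-only script generated from the source database, split on GO batch separators.
            string script = File.ReadAllText("CreateSyncTables.sql");
            foreach (string batch in Regex.Split(script, @"^\s*GO\s*$", RegexOptions.Multiline))
            {
                if (string.IsNullOrWhiteSpace(batch)) continue;
                new SqlCommand(batch, conn).ExecuteNonQuery();
            }
        }
    }
}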

Related

Logic App to push data from Cosmosdb into CRM and perform an update

I have created a Logic App with the goal of pulling data from a container within Cosmos DB (with a query), looping over the results, and then pushing this data into CRM (or the Common Data Service). When the data is pushed to CRM, an ID will be generated. I then wish to update Cosmos DB with this new ID. Here is what I have so far:
The next step queries the data within our Cosmos DB database and selects all IDs with a length greater than 15 (this tells us that the ID is not yet in the CRM database).
Then we loop over the results and push this into CRM (Dynamics 365 or the Common Data Service).
Dilemma: The first part of this process appears to be correct, but I want to make sure that I am on the right track. Furthermore, once the data is successfully pushed to CRM, CRM automatically generates an ID for each record. How would I then update Cosmos DB with the newly generated IDs?
Any suggestion is appreciated
Thanks
I see a red flag in your approach here: the query with length(c.id) > 15. This is not something I would do. I don't know how big your database is going to be, but it is generally not very performant to run high volumes of cross-partition queries, especially if the database is going to keep growing.
Cosmos DB already provides an excellent streaming capability, so rather than doing this in a batch I would use Change Feed and use that to accomplish whatever you're doing in your Logic App. This will likely give you better control of the process and make it easier to get the id back out of your CRM app and insert it back into Cosmos DB.
Because you will be writing back to Cosmos DB, you will need a flag so that Change Feed ignores the update when the item is written back.
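For illustration only, here is a rough sketch of that suggestion using the Cosmos DB .NET SDK v3 change feed processor. The database/container names, the CrmId property, and PushToCrmAsync are assumptions made up for the example, not anything from the question; it also assumes the container is partitioned on /id.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class Record
{
    public string id { get; set; }
    public string CrmId { get; set; }   // set once CRM has generated its ID (acts as the "already processed" flag)
}

public static class CrmSyncProcessor
{
    public static async Task<ChangeFeedProcessor> StartAsync(CosmosClient client)
    {
        Container items  = client.GetContainer("mydb", "items");    // assumed names
        Container leases = client.GetContainer("mydb", "leases");

        ChangeFeedProcessor processor = items
            .GetChangeFeedProcessorBuilder<Record>("crm-sync", HandleChangesAsync)
            .WithInstanceName(Environment.MachineName)
            .WithLeaseContainer(leases)
            .Build();

        await processor.StartAsync();
        return processor;

        async Task HandleChangesAsync(IReadOnlyCollection<Record> changes, CancellationToken cancellationToken)
        {
            foreach (Record doc in changes)
            {
                // Skip items we already wrote back; otherwise our own update re-triggers the feed.
                if (!string.IsNullOrEmpty(doc.CrmId)) continue;

                doc.CrmId = await PushToCrmAsync(doc);   // hypothetical CRM / Common Data Service call
                await items.ReplaceItemAsync(doc, doc.id, new PartitionKey(doc.id));   // assumes /id partition key
            }
        }
    }

    private static Task<string> PushToCrmAsync(Record doc) =>
        Task.FromResult(Guid.NewGuid().ToString());      // placeholder for the real CRM push
}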

Syncing Problems with Xamarin Forms and Azure Easy Tables

I've been working on a Xamarin.Forms application in Visual Studio using Azure for the backend for a while now, and I've come across a really strange issue.
Please note that I am following the methods mentioned in this blog.
For some strange reason the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that solution. What I mean is that if I create another solution that accesses the exact same backend, it can create and sync its own data, but it will not bring over the data generated by the other solution, even though both appear to have exactly the same access. This looks like some kind of security feature/issue, but I can't quite make sense of it.
Has anyone else encountered this? Is there a workaround? This could potentially cause problems down the road if I ever want to create another solution that accesses the same system/data for whatever reason.
For some strange reason the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that solution.
According to the tutorial you provided, I found that the related PullAsync is using Incremental Sync.
await coffeeTable.PullAsync("allCoffees", coffeeTable.CreateQuery());
Incremental Sync:
The first parameter to the pull operation is a query name that is used only on the client. If you use a non-null query name, the Azure Mobile SDK performs an incremental sync. Each time a pull operation returns a set of results, the latest updatedAt timestamp from that result set is stored in the SDK's local system tables. Subsequent pull operations retrieve only records after that timestamp.
Here is my test; you can refer to it for a better understanding of incremental sync:
Client : await todoTable.PullAsync("todoItems-02", todoTable.CreateQuery());
The client SDK checks whether there is a record whose id equals deltaToken|{table-name}|{query-id} in the __config table of your SQLite local store.
If there is no such record, the SDK sends a request like the following to pull your records:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'1970-01-01T00%3A00%3A00.0000000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Note: the $filter would be set as (updatedAt ge datetimeoffset'1970-01-01T00:00:00.0000000+00:00')
If there is such a record, the SDK uses its value as the latest updatedAt timestamp and sends a request like this:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'2017-06-26T02%3A44%3A25.3940000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Per my understanding, if you issue the same logical query with the same (non-null) query id from different mobile clients, you need to make sure each client's local database is newly created. Also, if you want to opt out of incremental sync, pass null as the query ID. In that case, all records are retrieved on every call to PullAsync, which is potentially inefficient. For more details, see How offline synchronization works.
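To make the difference concrete, here is a small sketch (todoTable is the same sync table as in the example above; the query name is arbitrary):

// Incremental sync: non-null query ID, so only records whose updatedAt is newer than
// the deltaToken stored for this query name are pulled.
await todoTable.PullAsync("todoItems-02", todoTable.CreateQuery());

// Full sync: null query ID, so no deltaToken is kept and every record is pulled on each call.
await todoTable.PullAsync(null, todoTable.CreateQuery());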
Additionally, you could use Fiddler to capture the network traffic when you invoke PullAsync, in order to troubleshoot your issue.

Is there a way to remove blob storage credential from azure database to allow bacpac local restore?

I am trying to export a bacpac from Azure and restore it locally on SQLEXPRESS 2016. When I try to restore it though I get the following errors from the Import Data-tier Application wizard in SSMS:
Could not import package.
Warning SQL72012: The object [TestBacPacDB_Data] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.
Warning SQL72012: The object [TestBacPacDB_Log] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source'
Error SQL72014: .Net SqlClient Data Provider: Msg 33161, Level 15, State 1, Line 1 Database master keys without password are not supported in this version of SQL Server.
Error SQL72045: Script execution error. The executed script:
CREATE MASTER KEY;
After some digging I found that a credential and master key have been added to the database. The credential name references a blob storage container, so I'm thinking maybe auditing was set up at some point with the container as an external resource or something similar.
I would like to delete this credential so I can restore the database locally, but the database throws an error stating that it is in use. I've tried disabling the logging in Azure, but the credential still can't be deleted.
I know sometimes it takes time for Azure to shut down resources, so maybe that's the cause, but I was wondering if anyone else has had a similar problem.
I'm trying to avoid having to set a password for the master key, since I don't care about the credential locally as in this question: SSMS 2016 Error Importing Azure SQL v12 bacpac: master keys without password not supported
Ultimately, we ended up creating a master key. In order to restore our databases locally in this way, we create the database by hand first in SSMS, then add a master key to it. This allows the data import to work correctly.
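For reference, the statements behind that workaround look roughly like this. The answer does it by hand in SSMS; this is just a small ADO.NET sketch of the same two steps, and the server name, database name, and password are placeholders.

using System.Data.SqlClient;   // or Microsoft.Data.SqlClient

// Create the target database first, then give it a password-protected master key,
// and only then run the data import against it.
using (var master = new SqlConnection(@"Server=.\SQLEXPRESS;Database=master;Integrated Security=true;"))
{
    master.Open();
    new SqlCommand("CREATE DATABASE TestBacPacDB", master).ExecuteNonQuery();
}

using (var db = new SqlConnection(@"Server=.\SQLEXPRESS;Database=TestBacPacDB;Integrated Security=true;"))
{
    db.Open();
    new SqlCommand("CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword1!>'", db).ExecuteNonQuery();
}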
I had exactly the same problem and tried a myriad of potential fixes found all over the place. Most related to rekeying the system, making a copy first, etc., and absolutely nothing worked.
As insane as this is, the only way I could finally get around it was by manually editing the package's internal structure:
Take the bacpac from the original source or a copy, anywhere.
Rename it to .zip and uncompress the folder structure.
Edit model.xml, search for anything to do with "master key" and/or "shared access signature", and delete the corresponding nodes.
Calculate the SHA-256 checksum of the now-modified model.xml (a small sketch of this step follows the list).
Replace the checksum at the bottom of Origin.xml with the new value.
Re-zip all the files and rename the archive back to xxx.bacpac.
Import onto a local system as you normally would.
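A small sketch of the checksum step (the file path is a placeholder; as far as I can tell, Origin.xml stores the value as an uppercase hex string with no separators):

using System;
using System.IO;
using System.Security.Cryptography;

class ModelChecksum
{
    static void Main()
    {
        byte[] hash;
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(@"C:\temp\bacpac-extracted\model.xml"))
        {
            hash = sha.ComputeHash(stream);
        }
        // Paste this value over the old model.xml checksum entry in Origin.xml.
        Console.WriteLine(BitConverter.ToString(hash).Replace("-", ""));
    }
}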

Issue while creating user for a specific database through code in Azure

I am creating a copy of a database in Azure through C# code.
Code for creating the database:
CREATE DATABASE ABC AS COPY OF DEF
Then I want to create a user in that database so that only that user can access it. This code executes as soon as the database is created, but while creating the user I get an error:
"Failed to update database because the database is read only."
If I stop execution for 15-20 seconds and then continue, it works perfectly, but I don't want to do that.
Is there some status I can check that tells me the database has been created so I can proceed?
Any help would be greatly appreciated.
Since you're connecting to your database and executing T-SQL, you may have to query sys.dm_operation_status, find your CREATE DATABASE operation, and check whether it has completed. There may be an associated REST API if you choose to program this through REST calls; the Get Create or Update Server Status call might fit your scenario.
You will find that the new database takes some time to create, and you won't exit that logic instantly with either approach.
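As a rough illustration of the sys.dm_operation_status suggestion, something like the following could poll until the copy reports COMPLETED. The 'CREATE DATABASE COPY' operation name and the five-second delay are assumptions made for the sketch; adjust the filter if your server reports the operation differently.

using System;
using System.Data.SqlClient;
using System.Threading;

class DbCopyWaiter
{
    public static void WaitForDatabaseCopy(string masterConnectionString, string databaseName)
    {
        // sys.dm_operation_status is queried against the logical server's master database.
        const string sql = @"
            SELECT TOP 1 state_desc
            FROM sys.dm_operation_status
            WHERE major_resource_id = @db
              AND operation = 'CREATE DATABASE COPY'   -- assumed operation name; adjust if needed
            ORDER BY start_time DESC;";

        while (true)
        {
            using (var conn = new SqlConnection(masterConnectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@db", databaseName);
                conn.Open();
                var state = cmd.ExecuteScalar() as string;

                if (state == "COMPLETED") return;                 // safe to create the user now
                if (state == "FAILED" || state == "CANCELLED")
                    throw new InvalidOperationException($"Copy of {databaseName} ended as {state}.");
            }
            Thread.Sleep(TimeSpan.FromSeconds(5));                // still pending or in progress
        }
    }
}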

Retrieving to-be-pushed entries in IMobileServiceSyncTable while offline

Our mobile client app uses IMobileServiceSyncTable for data storage and handling syncing between the client and the server.
A behavior we've seen is that, by default, you can't retrieve an entry added to the table while the client is offline. Only when the client table is synced with the server (we do an explicit PushAsync and then a PullAsync) can those entries be retrieved.
Does anyone know of a way to change this behavior so that the mobile client can retrieve entries added while offline?
Our current solution:
Check whether the new entry was pushed to the server.
If not, save the entry to a separate local table.
When showing the list for the table, pull from both tables: the sync table and the regular local table.
Compare the entries from the regular local table with the entries from the sync table to find duplicates.
Remove the duplicates.
Join the lists, order them, and show the result to the user.
Thanks!
This should definitely not be happening (and it isn't in my simple tests). I suspect there is a problem with the Id field - perhaps you are generating it and there are conflicts?
If you can open a GitHub Issue on https://github.com/azure/azure-mobile-apps-net-client/issues and share some of your code (via a test repository), we can perhaps debug further.
One idea: rather than letting the server generate an Id, generate one yourself using Guid.NewGuid().ToString(). The server will then accept this as a new Id.
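A minimal sketch of that idea (TodoItem and todoTable are stand-ins for your own model and IMobileServiceSyncTable<T> instance):

// Give the item a client-generated Id before inserting it into the sync table.
var item = new TodoItem
{
    Id = Guid.NewGuid().ToString(),   // the server accepts a client-supplied Id as-is
    Text = "Created while offline"
};

await todoTable.InsertAsync(item);    // writes to the local SQLite store only; no network needed

// No PushAsync/PullAsync yet: this query runs purely against the local store,
// so the new row should come back even while offline.
var pending = await todoTable.Where(t => t.Text == "Created while offline").ToListAsync();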
