I have connected the following NetSuite schemas to my SQL Server database, and all tables replicated:
Schema Browser (SuiteTalk)
Connect Browser (Netsuite.com Analytics)
Analytics Browser (Netsuite2.com Analytics)
I am trying to find the relationship between a transaction and its file attachments, so that I can join both tables to produce a list of transactions and the file(s) attached to each transaction. However, I haven't found any clue in any of the three replicated schemas. I wonder whether it is even possible to determine the table relationship between them.
Error: Getting schema information for the database failed with the exception "unable to process a schema with 2434 500"
Please refer to the Data Sync limitations on service and database dimensions.
I agree with @Alberto Morillo; your exception should read "Unable to process a schema with 2434 tables, 500 is the max...".
Here's the official Azure blog post about how to sync SQL data at scale using Azure SQL Data Sync; it offers a solution for this exception:
Sync data between databases with many tables
Currently, Data Sync can only sync between databases with fewer than 500 tables. You can work around this limitation by creating multiple sync groups using different database users.

For example, say you want to sync two databases with 900 tables. First, define two different users in the database from which you load the sync schema, where each user can only see 450 (or any number fewer than 500) tables. Sync setup requires ALTER DATABASE permission, which implies CONTROL permission over all tables, so you will need to explicitly DENY the permissions on tables which you don't want a specific user to see, instead of using GRANT. You can find the exact privileges needed for sync initialization in the best practice guidance.

Then you can create two sync groups, one for each user. Each sync group will sync 450 tables between these two databases. Since each user can only see fewer than 500 tables, you will be able to load the schema and create the sync groups. After a sync group is created and initialized, we recommend you follow the best practice guidance to update the user permissions and make sure they have the minimum privilege for ongoing sync.
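A minimal T-SQL sketch of the DENY-based approach described above; the login, user, database, and table names are all hypothetical examples:

```sql
-- SyncUser1 will own the first sync group. DENY overrides GRANT/CONTROL,
-- so tables denied here disappear from the schema this user can load.
CREATE USER SyncUser1 FOR LOGIN SyncLogin1;
GRANT ALTER ON DATABASE::MyDb TO SyncUser1;  -- required for sync setup

-- Hide the tables that belong to the *other* sync group from SyncUser1:
DENY SELECT, VIEW DEFINITION ON OBJECT::dbo.Orders     TO SyncUser1;
DENY SELECT, VIEW DEFINITION ON OBJECT::dbo.OrderLines TO SyncUser1;
-- ...repeat for each table this user should not see.
```

A second user (say, SyncUser2) gets the mirror-image set of DENY statements, and each user is then used to create one sync group.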
Hope this helps.
The full error message you are receiving may look like: "Getting schema information for the database failed with the exception 'Unable to process a schema with 2434 tables, 500 is the max. For more information, provide tracing ID 8d609598-3dsf-45ae-93v7-04ab21e45f6f to customer support.'"
It is a current limitation of SQL Data Sync that if the database has more than 500 tables, it cannot load the schema for you to select tables from, even if you want to select and sync only one table.
A workaround is to delete the extra tables or move the unneeded tables into another database. Not ideal, we agree, but it is a workaround for now. To try it, perform the following steps:
1. Script the tables you want to sync (fewer than 500 tables per sync group).
2. Create a new temporary database and run the script to create the tables you want to sync.
3. Register and add the new temporary database as a member of the sync group.
4. Use the new temporary database to pick the tables you want to sync.
5. Add all other databases that you want to sync with (on-premises databases and the hub database).
6. Once the provisioning is done, remove the temporary database from the sync group.
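The first two steps above might look like the following sketch; the database and table names are hypothetical, and in practice the CREATE TABLE statements would come from scripting out the real source tables:

```sql
-- Temporary member database holding only the subset of tables to sync.
CREATE DATABASE SyncStaging;
GO
USE SyncStaging;
GO
-- One scripted example table; repeat for each table to sync (< 500 total).
CREATE TABLE dbo.Customers (
    CustomerId INT            NOT NULL PRIMARY KEY,
    Name       NVARCHAR(100)  NOT NULL,
    UpdatedAt  DATETIME2      NOT NULL
);
```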
This can be managed using the following steps (I used the limited-tables approach, as I was only targeting 400 tables for my requirement):
If you are looking for limited tables only
Create a user on your on-premises SQL Server with rights to a limited set of tables, e.g. GRANT SELECT ON databasename.table_name TO username; (repeat for each table that needs to be synced).
Once this is done, register this user in the SQL Data Sync application on your on-premises server
(screenshot: SQL Data Sync)
and select this user in your Database Sync Group in Azure (Add an On-Premises Database).
(screenshot: Database Sync Group)
You'll then be able to see all the tables you granted permission on in your on-premises DB.
If you are looking for all tables
Create multiple users with up to 500 tables visible to each, and create multiple sync groups using these different database users.
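One way the per-user batches could be produced is to generate the GRANT statements from the catalog views and then execute the output; the user name and batch size below are hypothetical:

```sql
-- Emit GRANT SELECT statements for the first 450 user tables, keeping
-- SyncUser1 under the 500-table limit. Run the generated statements
-- afterwards; build a second batch the same way for SyncUser2.
SELECT TOP (450)
       'GRANT SELECT ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
       + ' TO SyncUser1;' AS grant_stmt
FROM sys.tables  AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY t.object_id;
```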
I have several SQL databases in Azure. All have the same structure; each DB represents a different location. What would be the best practice to aggregate the data of all locations? The goal is to be able to answer queries like "How much material of type X was used in time range x to y across all locations?" or "Which location produces the highest output?"
You can use an Azure SQL Database elastic pool.
Add all of your databases to the elastic pool.
Elastic query can then help you aggregate the data of all locations in Azure.
The elastic query feature (in preview) enables you to run a Transact-SQL query that spans multiple databases in Azure SQL Database. It allows you to perform cross-database queries to access remote tables, and to connect Microsoft and third-party tools (Excel, Power BI, Tableau, etc.) to query across data tiers with multiple databases. Using this feature, you can scale out queries to large data tiers in SQL Database and visualize the results in business intelligence (BI) reports.
Hope this helps.
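A hedged sketch of what the elastic query setup might look like for one location; the server, credential, database, table, and column names are all hypothetical, and a database master key must already exist:

```sql
-- Run on the database that will execute the cross-location queries.
CREATE DATABASE SCOPED CREDENTIAL ElasticCred
WITH IDENTITY = 'elastic_user', SECRET = '<strong password>';

CREATE EXTERNAL DATA SOURCE Location1Db
WITH (
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',
    DATABASE_NAME = 'Location1',
    CREDENTIAL = ElasticCred
);

-- Mirrors the table that exists in each location database.
CREATE EXTERNAL TABLE dbo.MaterialUsage_Loc1 (
    MaterialType NVARCHAR(50),
    UsedOn       DATE,
    Quantity     INT
)
WITH (DATA_SOURCE = Location1Db);

-- "How much material of type X was used in time range x to y?"
SELECT SUM(Quantity) AS TotalUsed
FROM dbo.MaterialUsage_Loc1
WHERE MaterialType = N'X'
  AND UsedOn BETWEEN '2024-01-01' AND '2024-06-30';
```

You would define one external data source and external table per location database, then UNION ALL across them (or view them together) for the all-locations totals.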
My recommendation in this scenario is to create a new database, which we will call the "hub" database; it will consolidate the information of all the location databases, which we will call "member" databases. Use SQL Data Sync to synchronize each member database to the hub database, then use T-SQL and Power BI against the hub database to answer all your questions involving all locations.
I participated in a project for a Mexican retail chain with 72 stores across Mexico; they created a hub database to consolidate sales at the end of each day, and used Power BI to show consolidated sales to stakeholders.
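Once the member databases are syncing into the hub, the cross-location questions become plain T-SQL against the hub. A sketch, assuming a hypothetical consolidated MaterialUsage table that carries a LocationId column:

```sql
-- "Which location produces the highest output?"
SELECT TOP (1)
       LocationId,
       SUM(Quantity) AS Output
FROM dbo.MaterialUsage
GROUP BY LocationId
ORDER BY Output DESC;
```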
I'm querying tables in an Azure SQL DB from Azure Data Lake Analytics, and am experiencing inefficient queries.
The queries are simple, all of the form SELECT * FROM EXTERNAL DataSource EXECUTE @"SELECT * FROM externalTable". The table contains more than 60 million rows.
The challenge is that when the U-SQL script retrieves all 60 million rows, U-SQL compiles the operation into just one vertex, making it impossible to scale the job.
If I split the query into X "part queries", each retrieving a slice of the total rows, and then combine all the part queries at the end, I naturally get X vertices.
To demonstrate "part queries":
SELECT * FROM EXTERNAL DataSource EXECUTE
    @"SELECT * FROM externalTable
      WHERE registered >= GETDATE()-10000 AND registered !> GETDATE()-8000"
UNION ALL
SELECT * FROM EXTERNAL DataSource EXECUTE
    @"SELECT * FROM externalTable
      WHERE registered >= GETDATE()-7999 AND registered !> GETDATE()-6000";
My question is, is this the preferred way of querying external data sources efficiently, or am I missing something?
Try placing the Azure SQL Database server and the Data Lake resources participating in the cross-database queries in the same region for better performance.
You can also retrieve those millions of records in batches by parameterizing your elastic queries on the Azure SQL Database side. Parameterized operations can be pushed down to the remote database and evaluated there, rather than on the Azure Data Lake side. Read more about it here.
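A hedged U-SQL sketch of the batching idea using an external table reference instead of a pass-through query, so the date predicate can be pushed to the remote database; the data-source name, table path, and column names are hypothetical:

```sql
// Read one batch; a WHERE clause over a remotable type (here DateTime)
// can be evaluated by the remote Azure SQL Database rather than locally.
@batch =
    SELECT *
    FROM EXTERNAL MyAzureSqlSource LOCATION "dbo.externalTable"
    WHERE registered >= new DateTime(2017, 1, 1)
      AND registered <  new DateTime(2017, 7, 1);

OUTPUT @batch TO "/out/batch1.csv" USING Outputters.Csv();
```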
I have two Azure SQL databases, Master and Secondary. Both contain the same tables, for example a Product table. When I insert into or update the Product table in the Master DB, the Product table in the Secondary DB should get updated, using a Logic App.
We built data sync for this purpose. Check it out!
You could also use two connections and write to both databases if, for some reason, Data Sync doesn't meet your needs.
I have some structured tables to handle connections between users. I managed to get this working in Google App Engine (which has an offering like Azure Tables). I am considering moving to Azure and combining Table storage with SQL.
Scenario:
Step 1:
SQL: Find the user relation in Azure SQL Database; user A and user B are connected.
Step 2:
Search Azure Tables with user A's id and user B's id. Each user uploads images and other unstructured posts.
I guess this is a well-known scenario; I am just looking to combine the two for the best:
Performance
Scalability
Pricing
Thanks