I have two Azure SQL databases - Master and Secondary. Both contain the same tables, for example a Product table. When I insert into or update the Product table in the Master DB, the Product table in the Secondary DB should get updated using a Logic App.
We built SQL Data Sync for this purpose. Check it out!
You could also use two connections and submit to both databases if, for some reason, Data Sync didn't meet your needs.
I am using an Azure SQL Database for our team's reporting, and the data size right now is too big for a single database to handle (at least I think so; it has 2 fact tables with around 100m rows in each table).
The Azure SQL Database is named "operation-db" and the Synapse is named "operation-synapse".
I want to make the transition as smooth as possible for my team, so I'm planning to copy all the tables, views, stored procedures and user-defined functions over to Synapse.
Once I'm done with that, is there a way to rename "operation-synapse" to "operation-db" so the team doesn't have to go to their code base to change the name of the db?
Thanks!
It is not possible to rename a SQL pool via SQL Server Management Studio; you will receive the following error:
ALTER DATABASE NAME statement is not supported in a Synapse workspace.
To update the name of a SQL pool, use the Azure Synapse Portal or the
Synapse REST API. (Microsoft SQL Server, Error: 49978)
The REST API, however, does list a move method for changing names:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Synapse/workspaces/{workspaceName}/sqlPools/{sqlPoolName}/move?api-version=2019-06-01-preview
I couldn't get it to work, though; YMMV. That said, not renaming your db shouldn't be a big deal. Your team should feel comfortable changing connection strings etc., and it will help them understand they are moving to a different product (Synapse) with different characteristics.
Before you move to Synapse, however, have you looked at clustered columnstore indexes (CCI) in Azure SQL DB? They are the default index type in a SQL pool database but are also available in SQL DB. They can compress your data 5-10x, so it might end up not that big at all. Columnstore is great for aggregate queries but less so for point lookups, so have a think about your workload before you migrate.
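For illustration, a minimal sketch of what that could look like in the SQL DB (the table and index names are hypothetical):

-- Compress a hypothetical fact table into a clustered columnstore index.
-- If the table already has a clustered rowstore index, recreate it under
-- the same name WITH (DROP_EXISTING = ON) instead.
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
ON dbo.FactSales;

-- Check the compressed size afterwards.
EXEC sp_spaceused 'dbo.FactSales';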
100 million rows is not big enough for Synapse. CCI data in each of its 60 distributions will only have a row group or two (a row group holds up to roughly 1 million rows).
Consider using partitioning or CCI in your SQL DB itself.
Also, what's your usage pattern? If you are doing point lookups and updates, clustered (rowstore) indexes will perform better.
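A hedged sketch of the partitioning suggestion in the SQL DB itself (all object names and boundary dates are hypothetical, and the CCI from the previous answer is assumed to already exist):

-- Hypothetical monthly partition function and scheme.
CREATE PARTITION FUNCTION pf_Monthly (date)
AS RANGE RIGHT FOR VALUES ('2020-01-01', '2020-02-01', '2020-03-01');

-- In Azure SQL DB every partition maps to the PRIMARY filegroup.
CREATE PARTITION SCHEME ps_Monthly
AS PARTITION pf_Monthly ALL TO ([PRIMARY]);

-- Rebuild the columnstore on the partition scheme, partitioned by OrderDate.
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
ON dbo.FactSales
WITH (DROP_EXISTING = ON)
ON ps_Monthly (OrderDate);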
You can rename a Synapse database easily using the SSMS GUI. (I've just tried this on v18.8).
Just click once on the database name in the Object Explorer to select it, then press the F2 key to rename it.
The Synapse service must be running (i.e. not paused) for the rename to work.
You can rename a Synapse database using T-SQL. The command is as follows:
ALTER DATABASE [OldSynapseDBName]
MODIFY NAME = [NewSynapseDBName]
Note that you need to be connected to (i.e. issue the command from) the master database, otherwise it will not work.
The command can take around 30 seconds on a 100 GB DB, and there are some caveats, such as the DB not being in use during the operation.
I have several SQL DBs in Azure. All have the same structure; each DB represents a different location. What would be the best practice to aggregate the data of all locations? The goal would be to be able to answer queries like "How much material of type X was used in time range x to y across all locations?" or "Give me the location that produces the highest output."
You can use an Azure SQL Database elastic pool.
Add all of your databases to the elastic pool.
Elastic query can then help you aggregate the data of all locations in Azure.
The elastic query feature (in preview) enables you to run a Transact-SQL query that spans multiple databases in Azure SQL Database. It allows you to perform cross-database queries to access remote tables, and to connect Microsoft and third-party tools (Excel, Power BI, Tableau, etc.) to query across data tiers with multiple databases. Using this feature, you can scale out queries to large data tiers in SQL Database and visualize the results in business intelligence (BI) reports.
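A hedged sketch of the elastic query setup, run in the database you will query from (server, credential, table and column names are all hypothetical):

-- One-time setup in the head database.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL LocationDbCred
WITH IDENTITY = 'remoteuser', SECRET = '<password>';

-- One external data source per location database.
CREATE EXTERNAL DATA SOURCE LocationBerlin
WITH (TYPE = RDBMS,
      LOCATION = 'myserver.database.windows.net',
      DATABASE_NAME = 'location-berlin',
      CREDENTIAL = LocationDbCred);

-- External table mirroring the schema of the remote table.
CREATE EXTERNAL TABLE dbo.MaterialUsage_Berlin
(
    MaterialType nvarchar(50),
    UsedOn date,
    Quantity decimal(18, 2)
)
WITH (DATA_SOURCE = LocationBerlin,
      SCHEMA_NAME = 'dbo',
      OBJECT_NAME = 'MaterialUsage');

-- The remote table can now be queried (and UNIONed with other locations).
SELECT MaterialType, SUM(Quantity) AS TotalUsed
FROM dbo.MaterialUsage_Berlin
WHERE UsedOn BETWEEN '2020-01-01' AND '2020-03-31'
GROUP BY MaterialType;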
Hope this helps.
My recommendation in this scenario is to create a new database, which we will call the "hub" database; it will consolidate the information of all the location databases, which we will call "member" databases. Use SQL Data Sync to synchronize each member database to the hub database. Use T-SQL and Power BI against the hub database to answer all your questions involving all locations.
I participated in a project for a Mexican retailer with 72 stores across Mexico; they created a hub database to consolidate sales at the end of the day, and used Power BI to show consolidated sales to stakeholders.
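Once the member data lands in the hub database, the example questions from the post become ordinary T-SQL; a sketch against hypothetical table and column names:

-- "How much material of type X was used in time range x to y across all locations?"
SELECT SUM(Quantity) AS TotalUsed
FROM dbo.MaterialUsage
WHERE MaterialType = 'X'
  AND UsedOn BETWEEN '2020-01-01' AND '2020-03-31';

-- "Which location produces the highest output?"
SELECT TOP (1) LocationId, SUM(Output) AS TotalOutput
FROM dbo.Production
GROUP BY LocationId
ORDER BY TotalOutput DESC;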
If I want a daily copy/replication of my production database, I know I can copy, but what happens when the size grows to ~100 terabytes or more?
It doesn't seem logical to copy a db of that size every day just to use it for testing/QA.
Ideally I'd like a solution where -
1. just the changes (data) are copied (nightly) to the testing db, thereby eliminating the overhead of copying a large db.
2. when I do push changes (column additions, keys, etc) to production then those changes get copied to the testing db as well.
Is there an Azure solution or setup for this?
Please refer to the SQL Data Sync documentation. SQL Data Sync is a service built on Azure SQL Database that lets you synchronize the data you select bi-directionally across multiple SQL databases and SQL Server instances.
Data Sync is based around the concept of a Sync Group. A Sync Group is a group of databases that you want to synchronize.
Data Sync uses a hub and spoke topology to synchronize data. You define one of the databases in the sync group as the Hub Database. The rest of the databases are member databases. Sync occurs only between the Hub and individual members.
You can sync the data between the hub database and member databases manually or automatically. Please see Tutorial: Set up SQL Data Sync between Azure SQL Database and SQL Server on-premises.
Hope this helps.
I have an Azure database (using SQL Database), and also a separate device that measures floats (not relevant to the question).
As the data is updated, say once every 5 minutes, I wish to update the database so that a new row is created with this data. I then intend to connect Power BI to the Azure database to build graphs etc.
As mentioned in the title, what would be the best practice? I have done my due diligence, and it seems the best way would just be to update the Azure database directly. Or should I consider updating a CSV file, then connecting the CSV file to the Azure database and updating it from there?
The reason I'm considering the CSV file route is that Excel has a built-in refresh function, but I couldn't find anything equivalent on the Azure side.
https://support.office.com/en-ie/article/refresh-an-external-data-connection-in-excel-1524175f-777a-48fc-8fc7-c8514b984440
If you want to use Excel, you can see this Azure official document: Connect Excel to a single database in Azure SQL database and create a report.
Connect Excel to a single database in Azure SQL Database and import data and create tables and charts based on values in the database. In this tutorial you will set up the connection between Excel and a database table, save the file that stores data and the connection information for Excel, and then create a pivot chart from the database values.
Then, you can use the "Refresh Data" and try the tutorial you have found.
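If you go the direct route described in the question (write each reading straight into the Azure SQL database), a minimal hedged sketch with hypothetical table and column names:

-- Hypothetical table for the device readings.
CREATE TABLE dbo.Measurements
(
    Id int IDENTITY(1,1) PRIMARY KEY,
    MeasuredAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Value float NOT NULL
);

-- Executed by the device (or an intermediate service) every 5 minutes.
INSERT INTO dbo.Measurements (Value) VALUES (23.7);

Power BI can read such a table directly through its Azure SQL Database connector, so the CSV hop is not required.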
Hope this helps.
My website is hosted on Azure. I want to implement Azure Search indexing, but there are restrictions on showing data to just anyone. So when I retrieve data from a table, I check the user ID and details from a separate user-role table, and pick some data on the basis of the user ID.
Can I get data from different database tables using Azure Search indexing? Currently I am only getting one table's data in one index.
I have to implement Azure Search indexing, or please suggest any other workaround for my problem.
For example:
I have tables "users", "userroles", "projects" and "tasks".
I want to show tasks of projects related to a user; foreign keys will be used. Now if I create an Azure Search index, it will only run my query on the tasks table; it will not check the task details from the projects, users, etc. tables. So my question is: how can I create that kind of index or query in Azure Search, where I use different tables to get relevant and correct data in my search?
You can add those roles to your Azure Search index, specifying which roles have access, and use OData to filter the results.
https://[service name].search.windows.net/indexes/[index name]/docs?search=[search text]&$filter=Administrator%20eq%20true
You can learn more about filters in the documentation.
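For the multi-table part of the question, a common workaround is to point the Azure Search indexer at a SQL view that joins the tables into the flat shape the index needs; a hedged sketch using the tables from the question (all column names are assumptions):

-- Hypothetical view flattening tasks with their project, user and role context.
CREATE VIEW dbo.TaskSearchView AS
SELECT t.TaskId,
       t.Title    AS TaskTitle,
       p.Name     AS ProjectName,
       u.UserId,
       r.RoleName AS UserRole
FROM dbo.tasks t
JOIN dbo.projects  p ON p.ProjectId = t.ProjectId
JOIN dbo.users     u ON u.UserId    = t.AssignedUserId
JOIN dbo.userroles r ON r.UserId    = u.UserId;

An Azure SQL indexer can index such a view like a table, and the role/user columns then support the $filter approach shown above.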