How can I enable replication in a Snowflake database using Terraform?

I am using the chanzuckerberg Snowflake provider in Terraform and was able to create the DB/schema/warehouse/tables/shares, but I am not able to find an option to enable database replication through Terraform.
Alternatively, is there a way to run an ALTER command on the Snowflake database from Terraform to enable replication? Something like:
alter database ${var.var_database_name} enable replication to accounts ${var.var_account_name};

You first have to enable replication for your organization. There are two ways to do this. One option is to open a case with Snowflake Support. Another option is to use the Organizations feature. If you do not have the Organization feature enabled, you will need to open a case with Support to enable it. If you do have the Organizations feature, you will have at least one person in your organization who has the ORGADMIN role.
That person will need to follow these steps: https://docs.snowflake.com/en/user-guide/database-replication-config.html#prerequisite-enable-replication-for-your-accounts
Once the accounts are enabled for replication, you can use SQL statements (orchestrated from Terraform or elsewhere) to promote a local database to the primary in a replication group: https://docs.snowflake.com/en/user-guide/database-replication-config.html#promoting-a-local-database
After you then get secondary databases, you can run a single line of SQL on the secondary DB's account to initiate replication: alter database <DB_NAME> refresh;
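Since the provider has no dedicated resource for this, one workaround for running that ALTER statement from Terraform is to shell out to the snowsql CLI from a null_resource. This is only a sketch: it assumes snowsql is installed on the machine running Terraform and that a named connection (here called "primary") is already configured.

```hcl
# Hypothetical workaround: run the ALTER statement via the snowsql CLI.
# Assumes a snowsql connection named "primary" is configured locally.
resource "null_resource" "enable_replication" {
  # Re-run if the database or target account changes.
  triggers = {
    database = var.var_database_name
    account  = var.var_account_name
  }

  provisioner "local-exec" {
    command = "snowsql -c primary -q \"alter database ${var.var_database_name} enable replication to accounts ${var.var_account_name};\""
  }
}
```

Note that local-exec provisioners run only on create, so changes are driven by the triggers block; check whether newer versions of the provider have since added native replication support before relying on this.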

Related

How do I create database users for Cassandra on CosmosDB?

I want to be able to create multiple users on my Cassandra app such that a user can only access specific databases. I've tried following DataStax's official docs to create users. However, Cosmos DB doesn't let me add users or even run LIST USERS; it gives a No Host Available error. I've tried altering the system_auth keyspace to change the replication strategy, as mentioned in this thread, but to no avail. It gives me message="system_auth keyspace is not user-modifiable."
How can I use the Database Users feature of Cassandra on Cosmos DB?
What alternatives do I have to create logins such that a user is only able to access a specific keyspace?
Azure Cosmos DB for Apache Cassandra (the product) is not a true Cassandra cluster. Instead, Cosmos DB provides an API that is CQL compliant so you can connect to Cosmos DB using Cassandra drivers.
There are limits to what Cosmos DB's API can provide, and several Cassandra features are not supported, including (but not limited to) the following CQL commands:
CREATE ROLE
CREATE USER
LIST ROLES
LIST USERS
From what I understand, Cosmos DB uses role-based access control (RBAC) and you'll need to provision access to your DB using the Azure portal.
Cosmos DB also does not provide support for Cassandra permissions so you will not be able to grant granular permissions in the same way that you would in a traditional Cassandra cluster.
For more information, see Apache Cassandra features supported by Azure Cosmos DB. Cheers!

Data Migration from Snowflake (on GCP Instance) to Snowflake (Azure Instance)

I am looking for some input on how to do a GCP-to-Azure cloud data migration.
Scenario -
I have a Snowflake instance configured on GCP (multiple databases holding legacy data) and another Snowflake instance configured on Azure (the DWH was created on this instance).
I want to move/copy the data of all the databases (including all child objects: schemas, tables, views, etc.) from the GCP Snowflake instance to the Snowflake instance configured on Azure.
Can you please guide me on the best solution for such a data migration? Any steps or documentation links would be really helpful.
Many thanks - Minti
Please check the database replication mechanism, which can be used as a migration tool for a Snowflake account from one cloud platform to another. https://docs.snowflake.com/en/user-guide/database-replication-intro.html
Not something I've done before, to be honest, but if you didn't want to use external tools, one possible method would be to secure-share your GCP databases with your Azure Snowflake account.
You then might be able to create a new database from this share (not sure if this is possible).
Most objects carry over apart from stages and pipes; tables, views, etc. should be fine.
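A sketch of that share-based approach, with placeholder database and account identifiers. Two caveats: direct shares may not work across cloud platforms (as the answer notes, this may not be possible), and a database created from a share is read-only, so you would still need to copy the data out of it, e.g. with CREATE TABLE ... AS SELECT.

```sql
-- On the GCP account: share a database with the Azure account
-- (legacy_db, legacy_share, and the account identifiers are placeholders).
CREATE SHARE legacy_share;
GRANT USAGE ON DATABASE legacy_db TO SHARE legacy_share;
GRANT USAGE ON SCHEMA legacy_db.public TO SHARE legacy_share;
GRANT SELECT ON ALL TABLES IN SCHEMA legacy_db.public TO SHARE legacy_share;
ALTER SHARE legacy_share ADD ACCOUNTS = my_org.azure_account;

-- On the Azure account: create a read-only database from the share,
-- then copy the data into a regular database.
CREATE DATABASE legacy_shared FROM SHARE my_org.gcp_account.legacy_share;
CREATE TABLE dwh_db.public.t1 AS SELECT * FROM legacy_shared.public.t1;
```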
This is a pretty easy process with a couple of prerequisites.
Make sure you have Organizations enabled on your GCP account.
This feature allows you to self-provision Snowflake accounts on any cloud provider/region. Open a support case to enable it.
Introduction to Organizations
Create a new account on Azure if you haven't already.
Enable Replication on both accounts
This can be done when logged into the account with the ORGADMIN role
Replicate your databases
Note: this gives you a replica of the GCP Snowflake account's databases in your Azure Snowflake account. If you want to permanently migrate your databases you need to set up Failover/Failback. This is a Business Critical feature, but Snowflake Support will enable it for lower editions until you can complete your migration, at which point they will disable it.
Replicating a Database to Another Account
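The replication steps above can be sketched in SQL (database and account names are placeholders):

```sql
-- On the source (GCP) account: allow the database to be replicated
-- to the target (Azure) account.
ALTER DATABASE legacy_db ENABLE REPLICATION TO ACCOUNTS my_org.azure_account;

-- On the target (Azure) account: create a secondary database...
CREATE DATABASE legacy_db AS REPLICA OF my_org.gcp_account.legacy_db;

-- ...and refresh it (repeat or schedule this to keep it up to date).
ALTER DATABASE legacy_db REFRESH;
```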
There are two options.
Option 1: you could make use of the replication feature.
The high-level steps include the below:
a. Create the target account. You can use the Organizations feature available in Snowflake (enabled by Snowflake Support upon request).
b. Create account-level objects manually in the target account.
Note: The Failover feature is supported only for accounts on the Business Critical edition and above. However, for account-migration scenarios, Snowflake Support will enable this feature for a temporary period.
c. Replication: the links below can be referenced for a complete understanding of the process.
https://docs.snowflake.com/en/user-guide/database-replication-intro.html#introduction-to-database-replication-across-multiple-accounts
https://docs.snowflake.com/en/user-guide/database-replication-config.html#replicating-a-database-to-another-account
https://docs.snowflake.com/en/user-guide/database-failover-config.html#failing-over-databases-across-multiple-accounts
Please find the link below for an overview of the associated costs:
https://docs.snowflake.com/en/user-guide/database-replication-billing.html#understanding-billing-for-database-replication
Limitations
https://docs.snowflake.com/en/user-guide/database-replication-intro.html#current-limitations-of-replication
Option 2: create the target account and use the unloading and loading features:
https://docs.snowflake.com/en/user-guide-data-unload.html
https://docs.snowflake.com/en/user-guide-data-load.html
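A minimal sketch of that unload/load path through cloud storage. The stage name, storage URL, table names, and SAS token below are all placeholders; you would repeat this per table (or script it from the information schema).

```sql
-- On the source account: unload a table to an external stage
-- (placeholder Azure container URL and SAS token).
CREATE STAGE migration_stage
  URL = 'azure://myaccount.blob.core.windows.net/migration/'
  CREDENTIALS = (AZURE_SAS_TOKEN = '<sas_token>');
COPY INTO @migration_stage/t1/ FROM legacy_db.public.t1
  FILE_FORMAT = (TYPE = PARQUET);

-- On the target account: recreate the table, then load the files back in.
COPY INTO dwh_db.public.t1 FROM @migration_stage/t1/
  FILE_FORMAT = (TYPE = PARQUET)
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
```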

How do we connect with read-only intent in Azure SQL Managed Instance?

I want to connect to a read-only replica for reporting purposes in Azure SQL Managed Instance. For this, I tried adding an ApplicationIntent=ReadOnly parameter.
But it still does not connect to the read-only replica. Is there any configuration required to connect to a read-only replica in Azure MI?
The Azure SQL MI Business Critical service tier has a built-in additional read-only replica that can be used for reporting and other read-only workloads, similar to the Azure SQL Database Premium tier. It is enabled by default; no user action is required. You need to use the ApplicationIntent=ReadOnly flag to access the internal replica.
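For example, the flag goes into the connection string; the server, database, user, and password below are placeholders:

```text
Server=tcp:myminstance.abc123.database.windows.net,1433;Database=mydb;User ID=myuser;Password=<password>;ApplicationIntent=ReadOnly;
```

Note that this only routes to a replica on service tiers that actually provide one (Business Critical), so on other tiers the same connection string connects to the primary.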
Azure supports the replication feature; please see this tutorial: Configure replication in an Azure SQL Database managed instance database.
But ApplicationIntent=ReadOnly is not supported by, and does not apply to, Azure SQL Managed Instance.
You can reference this document: SQL Server Native Client Support for High Availability, Disaster Recovery.
Hope this helps.

Does Azure Postgresql support table view replication?

I want to use replication only for a table view which I have defined.
However, looking at the online documentation, this seems not to be possible, since only server name and location are available as parameters.
Is it possible to replicate only a table view using replication for Azure DB for PostgreSQL?
No, it doesn't.
Azure PostgreSQL is a managed database-as-a-service, so the solution addresses replication at the infrastructure level. Consequently, replication applies only to the whole server, not to individual objects such as views.

Data Migration from AWS RDS to Azure SQL Data Warehouse

I have my application's database running in AWS RDS (PostgreSQL). I need to migrate the data from AWS to Azure SQL Data Warehouse.
This is a kind of ETL process: I need to do some calculations/computations/aggregations on the data from PostgreSQL and put it in a different schema in Azure SQL Data Warehouse for reporting purposes.
Also, I need to sync the data on a regular basis without duplication.
I am new to this Data Migration concept and kindly let me know what are the best possible ways to achieve this task?
Thanks!!!
Azure Data Factory is the option for you. It is a cloud data-integration service used to compose data storage, movement, and processing services into automated data pipelines.
Please find the PostgreSQL connector below:
https://learn.microsoft.com/en-us/azure/data-factory/data-factory-onprem-postgresql-connector
On the transform side you may have to add some custom intermediate steps to do the data massaging.
Have you tried the Azure Data Factory suggestion?
Did it solve your issue?
If not, you can try using Alooma. This solution can replicate a PostgreSQL database hosted on Amazon RDS to Azure SQL Data Warehouse in near real time. (https://www.alooma.com/integrations/postgresql/)
Follow these steps to migrate from RDS to Azure SQL:
Verify your host configuration
On the RDS dashboard under Parameter Groups, navigate to the group that's associated with your instance.
Verify that hot_standby and hot_standby_feedback are set to 1.
Verify that max_standby_archive_delay and max_standby_streaming_delay are greater than 0 (we recommend 30000).
If any of the parameter values need to be changed, click Edit Parameters.
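If you prefer to verify these settings from a SQL session rather than the RDS console, you can check them with SHOW (assuming you can connect to the instance, e.g. with psql):

```sql
-- Check the replication-related parameters from a SQL session.
SHOW hot_standby;
SHOW hot_standby_feedback;
SHOW max_standby_archive_delay;
SHOW max_standby_streaming_delay;
```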
Connect to Alooma
You can connect via an SSH server (https://support.alooma.com/hc/en-us/articles/214021869-Connecting-to-an-input-via-SSH) or whitelist access to Alooma's IP addresses:
52.35.19.31/32
52.88.52.130/32
52.26.47.1/32
52.24.172.83/32
Add and name your PostgreSQL input from the Plumbing screen and enter the following details:
Hostname or IP address of the PostgreSQL server (default port is 5432)
User name and Password
Database name
Choose the replication method you'd like to use for PostgreSQL database replication
For full dump/load replication, provide:
A space- or comma-separated list of the names of the tables you want to replicate.
The frequency at which you'd like to replicate your tables. The more frequent the replication, the fresher your data will be, but the more load it puts on your PostgreSQL database.
For incremental dump/load replication, provide:
A table/update-indicator column pair for each table you want to replicate.
Don't have an update indicator column? Let us know! We can still make incremental load work for you.
Keep the mapping mode to the default of OneClick if you'd like Alooma to automatically map all PostgreSQL tables exactly to your target data warehouse. Otherwise, they'll have to be mapped manually from the Mapper screen.
