In the Azure environment, I have an Azure SQL database and a Cosmos DB graph. Using Azure Data Factory, I need to insert/update data from the SQL database into the graph database.
My thinking is that I first need to transform the data to JSON and then insert it into the graph database.
Is this the way to go? Are there any other ways?
Thank you.
1. Based on the ADF copy activity connector list and the thread How can we create Azure's Data Factory pipeline with Cosmos DB (with Graph API) as data sink? mentioned by @silent, the Cosmos DB Graph API connector is not supported in ADF so far. You could vote for this feature on this feedback link, which was last updated on April 12, 2019.
2. The Cosmos DB Data Migration tool isn't a supported import tool for Gremlin API accounts at this time. Please see this link: https://learn.microsoft.com/en-us/azure/cosmos-db/import-data
3. You could take a look at the graph bulk executor .NET library. This is the sample application: git clone https://github.com/Azure-Samples/azure-cosmosdb-graph-bulkexecutor-dotnet-getting-started.git
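For reference, here is a minimal sketch of what a bulk import with that library could look like, based on the sample repo. The account endpoint and key, the database and graph names, the partition key property ("pk"), and the sample vertex are all placeholders you would replace with values built from your SQL rows.

// Minimal sketch based on the bulk executor sample; all names and values are placeholders.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CosmosDB.BulkExecutor.Graph;
using Microsoft.Azure.CosmosDB.BulkExecutor.Graph.Element;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class Program
{
    static async Task Main()
    {
        var client = new DocumentClient(new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>");
        DocumentCollection graph = (await client.ReadDocumentCollectionAsync(
            UriFactory.CreateDocumentCollectionUri("<database>", "<graph>"))).Resource;

        var executor = new GraphBulkExecutor(client, graph);
        await executor.InitializeAsync();

        // Build vertices from your SQL rows; here a single hard-coded example.
        var vertices = new List<GremlinVertex>();
        var vertex = new GremlinVertex("1", "person");
        vertex.AddProperty("pk", "1");      // partition key property of the graph container
        vertex.AddProperty("name", "Alice");
        vertices.Add(vertex);

        var response = await executor.BulkImportAsync(vertices, enableUpsert: true);
        Console.WriteLine($"Imported {response.NumberOfDocumentsImported} vertices.");
    }
}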
I'm new to Azure Data Factory. How can I create a C# object in Azure Data Factory? I'm also not sure how to create a SQL connection in ADF. Could somebody please guide me on this?
Thanks @Peter Bons for the valuable suggestions on this.
Follow the detailed process below to achieve your requirement; a .NET SDK sketch for doing the same from C# follows the references.
Create an Azure Data Factory from the portal.
Create SQL Server and Azure Storage linked services with a self-hosted integration runtime.
From Azure Data Factory Studio, launch the Data Factory UI in a new tab.
Select SQL Server as the new dataset type in the pipeline.
Create a new linked service (SQL Server) and test the connection; it will show Connection successful once all the required values are given.
Linked Service Screenshot:
References:
Copy data from a SQL Server database to Azure Blob storage by using Azure Data Factory.
Create an Azure SQL Database linked service using UI.
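Since the question also asks about C#: the same SQL Server linked service can be created programmatically with the Azure Data Factory .NET management SDK. A rough sketch, assuming the Microsoft.Azure.Management.DataFactory package; the token, resource names, and connection string are placeholders, and the self-hosted integration runtime is referenced by name.

// Rough sketch with the ADF .NET management SDK; every value in angle brackets is a placeholder.
using Microsoft.Azure.Management.DataFactory;
using Microsoft.Azure.Management.DataFactory.Models;
using Microsoft.Rest;

class CreateSqlServerLinkedService
{
    static void Main()
    {
        // Acquire an Azure AD token for https://management.azure.com/ (e.g. with a service principal)
        // and wrap it in TokenCredentials.
        ServiceClientCredentials credentials = new TokenCredentials("<access-token>");

        var client = new DataFactoryManagementClient(credentials)
        {
            SubscriptionId = "<subscription-id>"
        };

        // SQL Server linked service that connects through the self-hosted integration runtime.
        var linkedService = new LinkedServiceResource(
            new SqlServerLinkedService
            {
                ConnectionString = "Server=<server>;Database=<database>;User ID=<user>;Password=<password>;",
                ConnectVia = new IntegrationRuntimeReference("<self-hosted-ir-name>")
            });

        client.LinkedServices.CreateOrUpdate(
            "<resource-group>", "<data-factory-name>", "SqlServerLinkedService", linkedService);
    }
}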
I am trying to transform data from Salesforce before loading it to a dedicated SQL pool.
When I try to create a dataset from Synapse's Data flow, I am not able to choose Salesforce as a data store:
Can anyone suggest how to transform data from Salesforce, or any other data source that is not supported by Data flows?
As per the official documentation, Data flows currently do not support Salesforce as a source or sink.
If you want, you can raise the feature request in the Synapse portal.
As an alternative, you can use the Copy activity in Azure Data Factory to copy data from Salesforce to the dedicated SQL pool, and then transform it using Data flows in Synapse from the dedicated SQL DB back into the dedicated SQL DB.
Follow the steps below to achieve your requirement:
First, create a Data Factory workspace.
Select the Author hub and create a pipeline. Now drag the Copy activity onto the canvas and select the source. You can see that Salesforce is supported when you select a new source dataset. Select it and create a linked service for it.
Now select the sink dataset and choose Azure Synapse Analytics.
Create a linked service for the dedicated SQL database and select it.
Then select the target table in the dedicated SQL pool and copy your data by running the pipeline.
After the copy, go to the Synapse workspace and click on the source of the Data flow.
Select Azure Synapse Analytics as the source and click Continue.
Now click New to create a linked service for the SQL DB. Provide the subscription and server name and authenticate to your database.
After creating the linked service, select it and point it at the table that resulted from the copy.
Now go to the sink, select Azure Synapse Analytics, create another linked service for it in the same way as above, and select the table in the DB where you want the transformed result.
By following the above process, we can transform the Salesforce data and land it in the dedicated SQL DB.
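If you later want to define that copy programmatically instead of in the Studio UI, a rough sketch with the ADF .NET SDK could look like the following; the dataset names, the SOQL query, and the pipeline name are made up, and the Salesforce and Synapse linked services/datasets are assumed to exist already.

// Rough sketch: a Copy activity from Salesforce to a dedicated SQL pool via the ADF .NET SDK.
// Dataset names, the query, and the pipeline name are placeholders.
using System.Collections.Generic;
using Microsoft.Azure.Management.DataFactory;
using Microsoft.Azure.Management.DataFactory.Models;

static class SalesforceToSqlPool
{
    public static void CreatePipeline(DataFactoryManagementClient client,
                                      string resourceGroup, string factoryName)
    {
        var copyActivity = new CopyActivity
        {
            Name = "CopySalesforceToSqlPool",
            Inputs = new List<DatasetReference> { new DatasetReference { ReferenceName = "SalesforceAccountDataset" } },
            Outputs = new List<DatasetReference> { new DatasetReference { ReferenceName = "SqlPoolStagingDataset" } },
            Source = new SalesforceSource { Query = "SELECT Id, Name FROM Account" },
            Sink = new SqlDWSink()
        };

        var pipeline = new PipelineResource
        {
            Activities = new List<Activity> { copyActivity }
        };

        client.Pipelines.CreateOrUpdate(resourceGroup, factoryName, "SalesforceToSqlPoolPipeline", pipeline);
    }
}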
Can anyone suggest how to transform data from Salesforce or any other data source that is not supported by Data flows?
You can try this approach for data stores that are not supported by Data flows. Please refer to this list of data stores supported by the Copy activity before applying the same process to other data stores.
I am new to the Azure stack, so please bear with my question.
I have a Synapse database and a Power App that reads data from it. Power Apps cannot write back to the Synapse database, so I am thinking of developing an Azure Function with an HTTP trigger and integrating it with the Power App button's on-click functionality. So my question is: can an Azure Function write back to a Synapse database?
PS: I have tried the same with Logic Apps, using the new-row SQL trigger, and it didn't work.
Any help would be appreciated.
If you have a library for connecting to the Synapse database that does not need to be installed on the host but can simply be loaded by your application, then yes, Function Apps can write to the Synapse DB as long as the connectivity is there.
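For example, here is a minimal sketch of an HTTP-triggered Azure Function (in-process C#) that inserts a row into a dedicated SQL pool using the plain SQL client; the table, column, and connection-string app setting names are made up.

// Minimal sketch: HTTP-triggered function writing one row to a Synapse dedicated SQL pool.
// The table (dbo.Feedback), column (Name) and app setting (SynapseSqlConnection) are placeholders.
using System;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class WriteBackFunction
{
    [FunctionName("WriteBack")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        string connectionString = Environment.GetEnvironmentVariable("SynapseSqlConnection");
        string name = req.Query["name"];

        using (var connection = new SqlConnection(connectionString))
        {
            await connection.OpenAsync();
            using (var command = new SqlCommand("INSERT INTO dbo.Feedback (Name) VALUES (@name)", connection))
            {
                command.Parameters.AddWithValue("@name", name);
                await command.ExecuteNonQueryAsync();
            }
        }

        log.LogInformation("Row written to the dedicated SQL pool.");
        return new OkResult();
    }
}

The Power App button can then call this function (for example through a custom connector or a Power Automate flow) to perform the write-back.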
I am using Microsoft Azure Storage Explorer to move patient resource data from MS SQL to the Azure Cosmos DB of a FHIR server. I have installed the FHIR server using the GitHub link below.
https://github.com/Microsoft/fhir-server/blob/master/docs/DefaultDeployment.md
I am able to move the MS SQL Server data into the FHIR Cosmos DB, but the data format does not match what the FHIR server apps expect.
Example: I have patient data on the SQL Server side and we want to move all of it into the FHIR Cosmos DB ("resourceType": "Patient") and query on it. The FHIR server apps/services are not able to map the MS SQL Server data.
Are there any Azure Functions that can be run for bulk ingestion of data into the FHIR server? (Posting data with Postman is one way, but it is not feasible for bulk data.)
Thanks in Advance.
@Vinayaka, you are on the right track with an Azure Function.
In a nutshell, it is simple POST/PUT requests of FHIR resources from MS SQL to the FHIR server endpoints.
One approach could be a simple Azure Function or a console app that loops through the FHIR JSON resources and posts them asynchronously, as in the sketch below.
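For illustration, a minimal console-app sketch of that loop, assuming the Patient resources have already been exported from MS SQL as individual FHIR JSON files; the folder, server URL, and resource type are placeholders, and authentication (e.g. an Azure AD bearer token) is omitted.

// Minimal sketch: POST FHIR JSON files to the FHIR server's Patient endpoint in parallel.
// Folder path and server URL are placeholders; add an Authorization header as required.
using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FhirBulkLoader
{
    static readonly HttpClient http = new HttpClient { BaseAddress = new Uri("https://<your-fhir-server>/") };

    static async Task Main()
    {
        var files = Directory.GetFiles(@"C:\fhir\patients", "*.json");

        var posts = files.Select(async file =>
        {
            var json = await File.ReadAllTextAsync(file);
            var content = new StringContent(json, Encoding.UTF8, "application/fhir+json");

            var response = await http.PostAsync("Patient", content);
            Console.WriteLine($"{Path.GetFileName(file)}: {(int)response.StatusCode}");
        });

        await Task.WhenAll(posts);
    }
}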
My humble advice: bump up the capacity/throughput of the FHIR server before running your ingestion load, and scale it back down once the ingestion of FHIR resources is complete.
You could also reuse the Microsoft FhirImporter function for your case.
You could build a Data Factory pipeline to load the data to CosmosDB in the correct format. You may need to do some transformation to get the data into the format that FHIR expects.
I have set up the Cosmos DB Gremlin API and created graphs manually by adding nodes and properties and adding edges one by one. Is there any way to load the data in one go into Cosmos Gremlin, directly in JSON or CSV format, like we do in the Cosmos DB SQL API? Please help me with this.
Is there any way to load the data in one go into Cosmos Gremlin, directly in JSON or CSV format, like we do in the Cosmos DB SQL API?
Evidently, according to this document, the Data Migration tool isn't a supported import tool for Gremlin API accounts at this time.
However, you could consider using the graph bulk executor .NET library to perform bulk operations against the Azure Cosmos DB Gremlin API. There is a sample application here.
You could generate the vertices in a loop in that sample, or load your own JSON file, as in the sketch below.
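For example, a sketch of the "load your own JSON file" route: deserialize the file (here assumed to be a simple array of records) and turn each record into a GremlinVertex; the record shape, the vertex label, and the partition key property ("pk") are assumptions.

// Sketch: convert records from a JSON file into GremlinVertex objects for BulkImportAsync.
// The PersonRecord shape, the "person" label, and the "pk" partition key property are placeholders.
using System.Collections.Generic;
using System.IO;
using Microsoft.Azure.CosmosDB.BulkExecutor.Graph.Element;
using Newtonsoft.Json;

class PersonRecord
{
    public string Id { get; set; }
    public string Name { get; set; }
}

static class VertexLoader
{
    public static IEnumerable<GremlinVertex> FromJsonFile(string path)
    {
        var records = JsonConvert.DeserializeObject<List<PersonRecord>>(File.ReadAllText(path));

        foreach (var record in records)
        {
            var vertex = new GremlinVertex(record.Id, "person");
            vertex.AddProperty("pk", record.Id);   // partition key property of the graph container
            vertex.AddProperty("name", record.Name);
            yield return vertex;
        }
    }
}

The resulting vertices can then be passed to GraphBulkExecutor.BulkImportAsync exactly as the sample application does.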
Update:
I tried to find such a direct solution but had no luck. As far as I know, you could adopt the solution below:
First step: transfer the data from Azure Data Lake into Azure Blob storage as a JSON file using Azure Data Factory.
Second step: you still need to load that JSON file using the .NET bulk executor SDK.
Update 2:
@JemimaJeyakumar, I viewed this link. That's the approach I mentioned in my updated answer: Azure Data Factory. But I'm afraid the Cosmos DB Graph API is not supported in ADF.