I need to save the output from a Kusto query on monitoring logs into a database table, but I am unable to find a way to do it. I am presuming there is a way to get the output from a Kusto query, save it to storage, and then pull that data into a table using a pipeline.
Any suggestions welcome.
I have reproduced this in my environment and got the expected results as below:
First, I executed the Kusto query below and exported the results to a CSV file on my local machine:
AzureActivity
| project OperationName,Level,ActivityStatus
Then I uploaded the CSV file from my local machine into my Blob storage account as below:
Next, I created an Azure Data Factory (ADF) resource, created a new pipeline in it, and added a Copy activity to that pipeline.
Then I created a linked service for Blob storage as the source and a linked service for the SQL database as the sink.
In the source dataset I selected the Blob file.
In the sink dataset I selected the SQL Server table.
In the Copy activity sink settings, I set the table option to Auto create table.
Output in the SQL query editor:
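For example, a quick check in the SQL query editor confirms the copied rows; the table name below is a placeholder for whatever table name you configured in the sink dataset:

-- Placeholder table name; use the name configured in the sink dataset.
SELECT TOP (10) OperationName, Level, ActivityStatus
FROM dbo.AzureActivityExport;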
So what we do now is: we have created a Logic App that runs the query in real time and returns the data via HTTP, and then we save that to the table. No manual intervention.
I'm struggling with calling a stored procedure (created in SSMS, on Azure Synapse serverless pools) in ADF and sinking the output to an Azure SQL database.
I have a Copy data activity; my source dataset is linked to my Synapse Analytics serverless pool:
My sink is connected to an Azure SQL database, with the parameter in the sink coming from the Azure SQL database dataset.
This is where I want to write the output from the stored procedure. The problem is that I could not figure out how to TRUNCATE TABLE in the pre-copy script.
There is a text box for a pre-copy script in your second screenshot.
Just add TRUNCATE TABLE YOURTABLENAME in that box, replacing YOURTABLENAME with the actual name of your table.
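For example, the pre-copy script can be as simple as the following (dbo.StagingSales is a hypothetical table name used for illustration):

-- Hypothetical sink table; replace with your own table name.
TRUNCATE TABLE dbo.StagingSales;

The pre-copy script runs against the sink database before the copy starts, so the table is emptied and then reloaded by the Copy activity.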
So, I have some Parquet files stored in Azure containers inside a Blob storage account. I used a Data Factory pipeline with the Copy data activity to extract the data from an on-premises Oracle database.
I'm using Synapse Analytics to run a SQL query that uses some of the Parquet files stored in the blob container, and I want to save the results of the query in another blob. Which Synapse connector can I use to make this happen? To run the query I'm using the Develop menu inside Synapse Analytics.
To persist the results of a serverless SQL query, use CREATE EXTERNAL TABLE AS SELECT (CETAS). This creates a physical copy of the result data in your storage account as a collection of files in a folder, so you cannot control how many files are produced or how they are named.
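A minimal CETAS sketch along these lines; the data source, file format, table names, and storage paths below are illustrative placeholders, not anything from the original setup:

-- One-time setup: where the results should land and in what format (placeholder names).
-- Depending on your authentication setup, a database-scoped credential may also be required.
CREATE EXTERNAL DATA SOURCE ResultsStorage
WITH (LOCATION = 'https://<storageaccount>.dfs.core.windows.net/<output-container>');

CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);

-- CETAS: runs the SELECT and writes the result set as Parquet files under results/.
CREATE EXTERNAL TABLE dbo.QueryResults
WITH (
    LOCATION = 'results/',
    DATA_SOURCE = ResultsStorage,
    FILE_FORMAT = ParquetFormat
)
AS
SELECT *
FROM OPENROWSET(
    BULK 'https://<storageaccount>.dfs.core.windows.net/<input-container>/*.parquet',
    FORMAT = 'PARQUET'
) AS src;

The SELECT inside the CETAS statement would be whatever query you currently run in the Develop hub; the external table itself is just metadata pointing at the files that were written.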
In my project I receive data from Azure IoT Hub and want to send it to a SQL database using Azure Stream Analytics. I'm trying to achieve this using the following query:
SELECT
IoTDataArrayElement.ArrayValue.sProjectID AS id
INTO
[test-machine]
FROM
[iothub-input] AS e
CROSS APPLY GetArrayElements(e.iotdata) AS IoTDataArrayElement
HAVING IoTDataArrayElement.ArrayValue IS NOT NULL
When I run the query in the environment provided by Stream Analytics and press Test query, I get the expected output, which is a project ID. But when I start the Stream Analytics job, the data doesn't go into my database table. The table has one column, 'id'.
When I send all the data to Blob storage instead, the Stream Analytics job works.
Can someone please explain to me why the query I use for sending the data to a database doesn't actually send the data to a database?
A couple of things you need to verify to successfully configure Azure SQL Database as an output:
Make sure the firewall setting that allows access from Azure services is ON.
Make sure you have configured the output to the SQL database with the correct properties defined.
The following table lists the property names and their descriptions for creating a SQL Database output.
Make sure the table schema exactly matches the fields and their types in your job's output.
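For the query above, which emits a single column called id, the destination table would need a matching definition along these lines (the table name and data type here are assumptions; align them with the table configured in the output and with the actual type of sProjectID):

-- Table name and column type are assumptions for illustration only.
CREATE TABLE dbo.TestMachine
(
    id NVARCHAR(100) NULL
);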
Hope this helps.
I have created a pipeline in Azure Data Factory (V1). I have a copy pipeline that has an AzureSqlTable dataset as input and an AzureBlob dataset as output. The AzureSqlTable dataset that I use as input is created as the output of another pipeline. In this pipeline I launch a procedure that copies one table entry to a blob CSV file.
I get the following error when launching the pipeline:
Copy activity encountered a user error: ErrorCode=UserErrorTabularCopyBehaviorNotSupported,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=CopyBehavior property is not supported if the source is tabular data source.,Source=Microsoft.DataTransfer.ClientLibrary,'.
How can I solve this?
According to the error information, it indicates that this is not a supported action for Azure Data Factory; however, using an Azure SQL table as input and Azure Blob data as output should be supported by Azure Data Factory.
I also did a demo test of it with the Azure portal. You could follow these detailed steps to do the same.
1. Click Copy data in the Azure portal.
2. Set the copy properties.
3. Select the source.
4. Select the destination data store.
5. Complete the deployment.
6. Check the result in Azure and in storage.
Update:
If we want to use an existing dataset, we could choose [From Existing Connections]; for more information please refer to the screenshot.
Update 2:
The Data Factory (V1) copy activity settings only support using an existing Azure Blob storage/Azure Data Lake Store dataset. For more detailed information, please refer to this link.
If using Data Factory (V2) is acceptable, we could use an existing Azure SQL dataset.
So, actually, if we don't use this awful "Copy data (PREVIEW)" action, and instead add an activity to an existing pipeline rather than creating a new pipeline, everything works. So the solution is to add a copy activity manually to an existing pipeline.
I am using Blob storage as the input (a JSON file). I have tested the query, and the query seems fine. I have specified the output as an Azure SQL Server table. I can connect to this database from SSMS and query the table.
The job status is Running, but I don't see any data loaded into the SQL table. I have checked Azure Management Services; the status there is Running and there are no errors. How do I diagnose what is going on?
Note: I have left the Blob storage path prefix empty. I would like it to grab any file that comes into the storage container, not just specific files.
Have you created a query? You first need to create a query and then start your Stream Analytics job.
Query Example:
SELECT
*
INTO
Output
FROM
Input
You can also create an output to Power BI and run the Stream Analytics job. This will show you whether the data schema and the query are right. If everything goes well, you should be able to see the JSON files as a dataset with the name values listed. You can create a mini dashboard for just the count of items received, so you can see in real time whether it's loading and processing the JSONs from the blob.
If it fails, the operation logs for the Power BI output will tell you that the data schema isn't supported.
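Separately, a quick row count from SSMS against the sink table is a simple way to confirm whether anything is landing in SQL at all (the table name below is a placeholder for the table configured in the SQL output):

-- Placeholder table name; use the table configured in the SQL Database output.
SELECT COUNT(*) AS row_count
FROM dbo.StreamOutput;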
Hope this helps!
Mert