Stream Analytics Query UI portal: Unable to connect to input source at the moment - azure

I followed the tutorial below and created an input as shown:
https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-tutorial-visualize-anomalies
The Test button (in the red box) confirms that the connection is OK.
However, the Query page shows the error below:
Unable to connect to input source at the moment. Please check if the
input source is available and if it has not hit connection limits.
The screenshot below shows that incoming messages are already arriving in the event hub.
I tried using both a connection string and a managed identity (MI) for the input, but I am still getting the error.
I can send messages to and receive them from the event hub by following the link below:
https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send

To test your query against a specific time range of incoming events, select Select time range.
You may also check out the MS Q&A thread addressing a similar issue.
For more details, refer to Test an Azure Stream Analytics job in the portal.

Related

Azure Stream Analytics Job cannot detect either input table or output table

I'm new to Azure Stream Analytics jobs, and I want to use reference data from an Azure SQL DB and load it into Power BI to have streaming data.
I set up the storage account when configuring the SQL input table. I tested the output table (Power BI), which is also fine, with no errors.
I tested both the input table and output table connections; both connect successfully, and I can see the input data in Input preview.
But when I try to compose a query to test it out, the query cannot detect either the input table or the output table.
The output table icon is also greyed out.
Error message: Query must refer to at least one data stream input.
Could you help me?
Thank you!!
The test query portal will not allow you to test the query if there are syntax errors. You will need to correct the syntax (shown as yellow squiggles) before testing.
Here is a sample test query without any syntax error messages:
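For illustration, a minimal pass-through query of the following shape tests cleanly (the input and output alias names are placeholders, not taken from the original post):
-- Simple pass-through: read every event from the stream input alias and write it to the output alias
SELECT
    *
INTO
    [output]
FROM
    [stream-input]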
Stream Analytics does require at least one input coming from one of these three streaming sources: Event Hubs, IoT Hub, or Blob storage/ADLS. We don't support SQL as a streaming source at this time.
Using reference data is meant to augment the stream of data, for example by joining the stream to it, as in the sketch below.
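This is a rough sketch of that pattern, assuming an Event Hub stream input aliased [eventhub-input], the SQL reference data aliased [sql-reference], a Power BI output aliased [powerbi-output], and a shared deviceId column (all of these names are illustrative assumptions):
-- The stream input drives the job; the SQL reference input only enriches each event
SELECT
    e.deviceId,
    e.temperature,
    r.deviceName    -- assumed column in the reference table
INTO
    [powerbi-output]
FROM
    [eventhub-input] e
JOIN
    [sql-reference] r
    ON e.deviceId = r.deviceId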
From your scenario, I see you want to get data from SQL into Power BI directly. For that, you can connect Power BI directly to your SQL source.
JS (Azure Stream Analytics)

Azure IoT and Stream Analytics issue

Just wondering if I could get some help with Azure IoT Hub and Stream Analytics.
I'm using this guide to build my simulated device: https://learn.microsoft.com/en-us/azure/iot-hub/quickstart-send-telemetry-python
However, whenever I try to extend the JSON telemetry message to include more key-value pairs, Stream Analytics always gives me this error:
Source '<unknown_location>' had 1 occurrences of kind 'InputDeserializerError.InvalidData' between processing times '2020-07-14T02:35:47.4125308Z' and '2020-07-14T02:35:47.4125308Z'. Could not deserialize the input event(s) from resource 'Partition: [2], Offset: [806016], SequenceNumber: [1698], DeviceId: [testdevicereal]' as Json. Some possible reasons: 1) Malformed events 2) Input source configured with incorrect serialization format
I've checked my JSON formatting and it seems fine. Any clues?
Deserialization issues occur when the input stream of your Stream Analytics job contains malformed messages. For example, a malformed message could be caused by a missing parenthesis or brace in a JSON object, or by an incorrect timestamp format in the time field.
Enable resource logs to view the details of the error and the message (payload) that caused the error. There are multiple reasons why deserialization errors can occur. For more information regarding specific deserialization errors, see Input data errors. If resource logs are not enabled, a brief notification will be available in the Azure portal.
Please see Troubleshoot input connections for more details.

Azure Monitor alert is sending false failure email notifications (failure count of 1) for a Functions app

I have a Functions app where I've configured signal logic to send me an alert whenever a failure count greater than or equal to one has occurred in my application. I have been getting emails every day saying my Azure Monitor alert was triggered, followed by an email later saying that the failure was resolved. I know that my app didn't fail because I checked in Application Insights. For instance, I did not have a failure today, but did have failures the prior two days:
However, I did receive a failure email today. If I go to configure the signal logic, where I set a static threshold of failure count greater than or equal to 1, it shows this:
Why is it showing a failure for today, when I know that isn't true from the Application Insights logs? Also, if I change the signal logic to look at total failures instead of count of failures, it looks correct:
I've decided to use the total failures metric instead, but it seems that the count functionality is broken.
Edit:
Additional screenshot:
I suggest you use Custom log search as the signal if you have already connected your function app with Application Insights (I use this kind of signal and don't see behavior like yours).
The steps are as below:
Step 1: For the signal, select Custom log search. The screenshot is as below:
Step 2: When the Azure function times out, it throws an error whose type is Microsoft.Azure.WebJobs.Host.FunctionTimeoutException, so you can use the query below to check whether it has timed out:
exceptions
| where type == "Microsoft.Azure.WebJobs.Host.FunctionTimeoutException"
Put the above query in the "Search query" field, and configure the other settings as per your needs. The screenshot is as below:
Then configure other settings like the action group, etc. Please let me know if you still have this issue.
One thing should be noted: some kinds of triggers support retry logic, like the blob trigger. So if it retries, you can also receive the alert email. But you can disable the retry logic as per this doc.

How to capture error JSON records coming from Event Hub to Azure Stream Analytics

All, can anybody help me with a solution for capturing the malformed JSON coming from Event Hub into Azure Stream Analytics?
My use case is: I am getting JSON records on the event hub, i.e. the input is Event Hub -> parse the JSON in Azure Stream Analytics -> place the parsed data in Azure SQL DWH.
Now, whenever malformed JSON arrives on the event hub, ASA drops that event and parses only the correct JSON. I need to capture those malformed events and report them to the source application. Could you please let me know how to do this?
Thanks,
Aditya
Please take a look at this article; it seems it can meet your needs.
When a Stream Analytics job receives a malformed message from an input, it drops the message and notifies you with a warning. Then follow the What caused the deserialization error section to find the JSON data with the incorrect format.

Azure Stream Analytics - How to redirect or handle error events/rows?

Is there a way to capture and redirect data error events/rows to a separate output?
For example, say I have events coming through and for some reason there are data conversion errors. I would like to handle those errors and do something with them, probably send them to a separate output for further investigation.
Currently, with the Stream Analytics error policy, if an event fails to be written to the output we only have two options:
Drop - which just drops the event, or
Retry - which retries writing the event until it succeeds.
Collecting all error events is not supported currently. You can enable diagnostic logs and get a sample of every kind of error at frequent intervals.
Here is the documentation link.
If there is a way for you to filter such events in the query itself, then you could redirect such events to a different output and reprocess them later, as sketched below.
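As a rough sketch of that idea, assume the stream input is aliased [eventhub-input], there is a temperature field that sometimes fails conversion, and two outputs [good-output] and [error-output] are defined (all of these names are illustrative assumptions). TRY_CAST can then split clean rows from rows that fail the conversion:
-- Read the input once, attempting the conversion in a shared step
WITH Typed AS (
    SELECT
        *,
        TRY_CAST(temperature AS float) AS temperatureFloat
    FROM
        [eventhub-input]
)
-- Rows that convert cleanly go to the main output
SELECT *
INTO [good-output]
FROM Typed
WHERE temperatureFloat IS NOT NULL
-- Rows that fail the conversion are redirected to a separate output for later investigation
SELECT *
INTO [error-output]
FROM Typed
WHERE temperatureFloat IS NULL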
