Azure Stream Analytics - no output events

I have a problem with an Azure Stream Analytics job. The job monitor shows incoming input events (from an Event Hub), but there are no output events or errors. The job is really simple; it just writes every input to Azure Blob storage:
SELECT * FROM input
Any suggestions as to what could be wrong?
Update!
It was a bug in Azure Stream Analytics and it has already been fixed by Microsoft.

Did you try including an INTO clause?
SELECT *
INTO [output]
FROM [input]

Since you have verified that events are coming into the system, it's likely that the job is encountering an error during processing or while writing to the output. Make sure that your input fields are in the set of supported data types and use a CAST statement if they aren't. To home in on the root cause, you may also want to project a field or two instead of using SELECT *.
You mentioned that there aren't any errors, but make sure to check the following sources of troubleshooting/diagnostic information (a quick way to sanity-check the input format is sketched after this list):
The top-level status of your job (Processing, Degraded, etc.). Definitions for each status are here: http://azure.microsoft.com/en-us/documentation/articles/stream-analytics-developer-guide/
Use the "Test Connection" button on your inputs and outputs to verify connectivity
Check the "Diagnosis" value for your inputs and outputs and click the name of the input/output for more detail, if applicable
Look in the Operations Logs for any Warnings or Errors
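If you want to rule out malformed input altogether, one option is to push a known-good, flat JSON event into the Event Hub and check whether it shows up in the blob output. A minimal sketch using the azure-eventhub Python package; the connection string, hub name and field names are placeholders, not values from the question:

import json
from azure.eventhub import EventData, EventHubProducerClient

# Placeholders: use the connection string and hub name configured as the
# Stream Analytics input.
producer = EventHubProducerClient.from_connection_string(
    conn_str="Endpoint=sb://<namespace>.servicebus.windows.net/;"
             "SharedAccessKeyName=<policy>;SharedAccessKey=<key>",
    eventhub_name="<hub-name>",
)

# A flat JSON payload with only simple types, so every field maps onto a
# supported Stream Analytics data type without needing a CAST.
event = {"deviceId": "test-1", "temperature": 21.5, "ts": "2015-01-01T00:00:00Z"}

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(event)))
    producer.send_batch(batch)

If this test event reaches the output but your real events don't, the problem is more likely an unsupported field type or malformed serialization than the job configuration itself.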

Related

Enqueueing a message to Azure Storage in an Azure function without changing the output

I have a custom handler written in Go running as an Azure Function. It has an endpoint with two methods:
POST /entities
PUT /entities
It was easy to make my application run as an Azure function: I added "enableForwardingHttpRequest": true to host.json, and it just works.
What I need to achieve: life happened and now I need to enqueue a message when my entities change, so it will trigger another function that uses a queueTrigger to perform some async stuff.
What I tried: The only way I found so far was to disable enableForwardingHttpRequest, change all my endpoints to accept the Azure Functions host's raw JSON input and output, and then write the message into one of the output fields (as documented here).
It sounds like a huge change to accomplish something simple... Is there a way I can enqueue a message without having to change the way my application handles requests?
As per this GitHub document, as of now custom handlers for Go in Azure Functions have a bug related to this, which still needs to be fixed.
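For completeness, the non-forwarding approach described in the question comes down to returning the Functions host's response payload with the queue message placed in Outputs. Below is a minimal sketch of that shape, shown in Python with Flask purely for illustration; the route name and the binding names "req", "res" and "msg" are assumptions that would have to match your function.json:

import json
from flask import Flask, request, jsonify

app = Flask(__name__)

# With "enableForwardingHttpRequest" set to false, the Functions host POSTs
# the invocation payload to /<FunctionName> instead of forwarding the raw request.
@app.route("/CreateEntity", methods=["POST"])
def create_entity():
    payload = request.get_json()
    # The original HTTP body arrives as a string under Data.<trigger binding name>.
    entity = json.loads(payload["Data"]["req"]["Body"])

    # ... persist the entity here ...

    return jsonify({
        "Outputs": {
            # Value written to the queue output binding, which triggers the other function.
            "msg": json.dumps({"entityId": entity.get("id"), "event": "changed"}),
            # Value for the HTTP output binding (the response the caller sees).
            "res": {"statusCode": 201, "body": "entity stored"},
        },
        "Logs": ["entity change enqueued"],
    })

The trade-off is exactly what the question describes: every endpoint has to speak the host's request/response envelope instead of plain HTTP.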

Azure Synapse: How to use Web Activity to Query REST API Pipeline Data?

How can I use Synapse's Web Activity to query a pipeline run? In particular, I want to extract the error message in case of failure.
The initial setup is as follows, following the GET request described in the documentation: https://learn.microsoft.com/en-us/rest/api/synapse/data-plane/pipeline-run/get-pipeline-run#clouderror
https://i.stack.imgur.com/Ud14y.png
To get the RunId of the pipeline, I simply use this expression: @activity('Execute Pipeline1').output.pipelineRunId
When I inspect what was sent in the GET request (below), I see that it has indeed extracted a pipelineRunId, but not the one listed in the debug panel.
https://i.stack.imgur.com/HaSHM.png
I suspect this is the issue, but how can I get the pipeline run ID for the exact run that was just executed and that is shown below?
Edit
Adding the pipeline run IDs that I can query; but I cannot query the pipeline run that was just executed.
https://i.stack.imgur.com/UcyO4.png
I believe there is some confusion between the pipeline run ID and the activity run ID.
In order to get the child pipeline run ID, you will have to use this dynamic expression: @activity('ExecutePipelineParent').output.pipelineRunId
As per the screenshots you have shared, it seems like you are passing the correct child pipeline run ID to the Web activity.
I don't see any issue with extracting the pipeline run ID. In case your Web activity is failing, you will have to go through the specific error message related to the Web activity configuration to figure out the root cause.
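For reference, the Web activity is doing the equivalent of the following call against the Synapse data-plane REST API. A minimal sketch in Python: the workspace endpoint and run ID are placeholders, the api-version is taken from the linked documentation, and azure-identity is assumed for the bearer token:

import requests
from azure.identity import DefaultAzureCredential

# Placeholders: your workspace endpoint and the child pipeline run ID taken from
# @activity('Execute Pipeline1').output.pipelineRunId.
endpoint = "https://myworkspace.dev.azuresynapse.net"
run_id = "00000000-0000-0000-0000-000000000000"

# AAD token scoped to the Synapse data plane.
token = DefaultAzureCredential().get_token("https://dev.azuresynapse.net/.default")

resp = requests.get(
    f"{endpoint}/pipelineruns/{run_id}",
    params={"api-version": "2020-12-01"},
    headers={"Authorization": f"Bearer {token.token}"},
)
resp.raise_for_status()
run = resp.json()

# For a failed run, the error text is surfaced in the "message" field of the response.
print(run.get("status"), run.get("message"))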
Your pics show that your pipeline has not been published. Is it possible that it is reading the pipeline run ID from the last published version?
Also, wouldn't it be better to use the system variable @{pipeline().RunId}?

Creating custom metric descriptors continually results in HTTP 500

I think I've broken my project's custom metrics.
Earlier yesterday, I was playing around with the cloud monitoring api, and I created a metric descriptor and added some time series data to it using the latest python3 cloud monitoring library create_time_series call. Satisfied with the results, I deleted the descriptor using the library, which threw an error as I had incorrectly passed in the descriptor's name. I called it again with the correct name, and it succeeded, but now every call to create_time_series on this project fails with an HTTP 500. The error message included simply says to "Try again in a few seconds," which I have, to no avail.
I have verified that I can create time series data on other projects of mine, and it works as expected. The API Explorer available in google's API documentation for metrics also gets an HTTP 500 back on calls to this project, but works fine on others. CURLing requests yields the same results.
My suspicion is that I erroneously deleted the custom.googleapis.com endpoint in its entirety, and that is why I am unable to create new metric descriptors/time series data. Is there a way to view the state of this endpoint, or recreate it?
It is not possible to delete the data stored in your Google Cloud project, but deleting the metric descriptor renders the data inaccessible. Also, according to the data retention policy, this data is deleted when it expires.
To delete your custom metric descriptor, call the metricDescriptors.delete method. You can follow the steps in this guide.
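For reference, a minimal sketch of that call with the python3 client library mentioned in the question; the project ID and metric type are placeholders:

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()

# Fully qualified name: projects/<project>/metricDescriptors/<metric type>
name = "projects/my-project/metricDescriptors/custom.googleapis.com/my_metric"
client.delete_metric_descriptor(name=name)
print(f"Deleted metric descriptor {name}.")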
You are calling CreateMetricDescriptor every time you call CreateTimeSeries. Some or all of these calls specify no metric labels, and these calls are therefore overwriting the metric descriptor with one that has no labels. The calls to CreateTimeSeries, on the other hand, do specify metric labels, causing the metric labels to be auto-added to the descriptor.
Custom metric names typically begin with custom.googleapis.com/, which differs from the built-in metrics.
When you create a custom metric, you define a string identifier that represents the metric type. This string must be unique among the custom metrics in your Google Cloud project and it must use a prefix that marks the metric as a user-defined metric. For Monitoring, the allowable prefixes are custom.googleapis.com/ and external.googleapis.com/prometheus. The prefix is followed by a name that describes what you are collecting. For details on the recommended way to name a custom metric, see Naming conventions.
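To avoid the descriptor being overwritten as described above, one approach is to create it once with its labels declared up front and only write points afterwards. A minimal sketch with the python3 client library; the project ID, metric type and label names are placeholders:

import time
from google.api import label_pb2 as ga_label
from google.api import metric_pb2 as ga_metric
from google.cloud import monitoring_v3

project_id = "my-project"
project_name = f"projects/{project_id}"
client = monitoring_v3.MetricServiceClient()

# 1) Create the descriptor once, with its labels declared up front.
descriptor = ga_metric.MetricDescriptor()
descriptor.type = "custom.googleapis.com/my_metric"
descriptor.metric_kind = ga_metric.MetricDescriptor.MetricKind.GAUGE
descriptor.value_type = ga_metric.MetricDescriptor.ValueType.DOUBLE
descriptor.description = "Example user-defined metric."

label = ga_label.LabelDescriptor()
label.key = "environment"
label.value_type = ga_label.LabelDescriptor.ValueType.STRING
descriptor.labels.append(label)

client.create_metric_descriptor(name=project_name, metric_descriptor=descriptor)

# 2) From then on, only write points; do not call create_metric_descriptor again.
series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/my_metric"
series.metric.labels["environment"] = "dev"
series.resource.type = "global"
series.resource.labels["project_id"] = project_id

now = time.time()
seconds = int(now)
nanos = int((now - seconds) * 10**9)
interval = monitoring_v3.TimeInterval({"end_time": {"seconds": seconds, "nanos": nanos}})
point = monitoring_v3.Point({"interval": interval, "value": {"double_value": 3.14}})
series.points = [point]

client.create_time_series(name=project_name, time_series=[series])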

Event Hub - Invalid Token error Azure Stream Analytics Input

I am trying to follow the tutorial below
Azure Tutorial
As noted at the bottom, there appear to have been changes since this was created.
When I get to the part where I create an input for my Stream Analytics job, I cannot select an Event Hub even though there is one in my subscription.
So I went to provide the information manually, and I get an error stating "invalid token".
Has anyone got any ideas how to resolve this or can point me to a better/more recent tutorial?
I am looking to stream data in real time
Paul
Thanks for the help here. I ended up using the secondary key and that worked fine!
Change to use the secondary connection string, or use a different shared access policy altogether.
You can use the primary key of the new shared access policy.
PS: It is a weird error; sometimes removing the last ";" worked.
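If you suspect the connection string itself (for example the trailing ";" mentioned above), a quick way to check its shape before pasting it into the portal is a small script like the following; this is illustrative only, and the required part names are those of a standard Event Hub connection string:

# Illustrative sanity check for an Event Hub connection string.
REQUIRED_PARTS = {"Endpoint", "SharedAccessKeyName", "SharedAccessKey"}

def normalize_connection_string(conn_str: str) -> str:
    # Strip whitespace and any trailing ";" (a trailing semicolon has reportedly
    # caused token errors in some tools, as mentioned above).
    conn_str = conn_str.strip().rstrip(";")
    parts = dict(segment.split("=", 1) for segment in conn_str.split(";") if segment)
    missing = REQUIRED_PARTS - parts.keys()
    if missing:
        raise ValueError(f"Connection string is missing: {', '.join(sorted(missing))}")
    return conn_str

print(normalize_connection_string(
    "Endpoint=sb://mynamespace.servicebus.windows.net/;"
    "SharedAccessKeyName=RootManageSharedAccessKey;"
    "SharedAccessKey=<key>;"
))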

Azure function logging doesn't appear to be working

I'm able to successfully call my functions and make them do what I want them to do. The problem is that it doesn't look like the logs are being saved anywhere, and I don't see how I can view them, which I'll want to do in the event of an error. As a test, I have my working function do a log.Info as soon as it's called. When testing locally it prints the message to the console. I believe I've enabled everything correctly, but let me explain what I've done in case I didn't.
In my app service, under Monitoring -> Diagnostic Logs, I have enabled everything. Application Logging (filesystem) verbose, Application Logging (Blob) verbose (with the storage location set), detailed error messages and failed request tracing turned on.
In my function, I'm using the TraceWriter object that's passed to my run method (I started from a template).
Please note that functions are set to require authentication. If I click on the "Monitor" tab nothing appears. It just says "Loading..." forever and there's no information. Perhaps this is because of the authentication?
I used the Azure Storage Explorer to browse to my blob. The "log" blob exists, and I do see a set of nested directories that lead up to now. However, it just contains a 354-byte file with a few lines of some random info. This file never seems to update or get larger.
I used FTP to try and browse to where the logs might be, but there's no directory on there that contains any log files.
I also went to KUDU for my function app ({myfunctionapp}.scm.azurewebsites.net/azurejobs/#/functions). While I do see that my function was called successfully, I don't see anything from the call to log.Info anywhere.
I tried using a different logger, and as a test did: System.Diagnostics.Trace.TraceError("test error");
I also don't see this message appearing anywhere.
Am I missing something as far as set up goes? Is the problem the fact that I require authentication? If it's the latter, is there still a way to view logs? I definitely have to have auth enabled. Thanks. And if it helps, below are links to what my settings and the monitor tab look like.
Settings: https://postimg.org/image/u57m2xbl5/
Monitor: https://postimg.org/image/uou10arch/
Authentication should not cause any problems with logging, and log.Info should work out of the box, no setup required.
I highly recommend that you enable AlwaysOn for your dedicated function app. The long loading of the Monitor tab could be because your site is in a 'cold' state, where it takes longer to start up.
If you go to {myfunctionapp}.scm.azurewebsites.net/DebugConsole and navigate to LogFiles/Application/Functions do you see any expected logs there? Also, when you run a function from the portal do you see logs in the log window?
The same thing happened to me when I had Fiddler open. Close Fiddler and all is good.
