I'm outputting a Stream Analytics job to PowerBI.com. It successfully sends the first 11-100 messages just fine, but then it fails. The operational log says the operation "failed to send events" and categorizes it as a "PowerBIOutputAdapterTransientError", without much other information. What are the symptoms of this type of error?
Messages are still going through Event Hubs, but all operations seem to be halted on the Power BI side.
Looks like this was a transient service issue.
Sorry if this is a dumb question, but I cannot seem to find the answer:
I am using an external source to read Audit Log events from Azure Event Hubs. I have the data flowing and working, but I see that some messages carry 2 JSON events in their records field, while others carry only 1. Why do some messages have 2 events? They look like they might be related.
What I mean is that for some logs I will see:
category: NoninteractiveSignin
records: [{..},{..}]
Event Hub messages are binary, and opaque to Event Hubs. It’s entirely up to the sender what’s in each one.
So you’ll need to ask whatever application creates the messages about that.
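That said, if the sender turns out to be an Azure diagnostic-settings export, those typically batch one or more log entries into a single message's records array. Here is a minimal consumer sketch in Python showing how to unpack that shape; the connection string, hub name, and field names are placeholders, not taken from the question:

```python
import json

from azure.eventhub import EventHubConsumerClient  # pip install azure-eventhub

def on_event(partition_context, event):
    body = json.loads(event.body_as_str())
    # A single Event Hub message may carry one or more log entries
    # batched together in its "records" array.
    for record in body.get("records", []):
        print(record.get("category"), record.get("time"))

client = EventHubConsumerClient.from_connection_string(
    "<connection-string>",       # placeholder
    consumer_group="$Default",
    eventhub_name="<hub-name>",  # placeholder
)
with client:
    client.receive(on_event=on_event, starting_position="-1")  # "-1" = from the start
```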
I have a request to implement a dashboard with information about which message in an Azure Service Bus queue was completed when (with some info about message parameters). Unfortunately, we do not have access to the receiver's code and cannot change it to log the time of message delivery. So we need to somehow subscribe to the moment when the receiver takes the message away.
I have already investigated the Azure portal API in order to find something, but there is no such capability; I also tried to find something on Stack Overflow and in Google, but with no results.
There is one idea: use two queues and an Azure Function between them. Put all messages into the first queue; the Azure Function receives a message, logs the info about it, puts it into the second queue, and waits until another service takes the message away from the second queue. The second queue will always hold only one message, so we can tell for sure which message was delivered and when (a rough sketch follows below).
However, what I do not like is that the second queue does not play the role of a real queue (which tells me something is wrong here and I should use something else), and the performance of such a system may not be high enough...
Any help is appreciated (articles, videos, ideas). Thank you.
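For reference, a rough sketch of the logging-and-forwarding half of that idea, assuming a Python Azure Function with a Service Bus trigger on the first queue and an output binding to the second (the binding and queue names are illustrative):

```python
import datetime
import logging

import azure.functions as func

def main(msg: func.ServiceBusMessage, outqueue: func.Out[str]) -> None:
    # Record when this message passed through, plus any parameters of interest.
    logging.info(
        "Forwarding message %s at %s",
        msg.message_id,
        datetime.datetime.utcnow().isoformat(),
    )
    # Forward the body unchanged to the second queue for the real receiver.
    outqueue.set(msg.get_body().decode("utf-8"))
```

Note this only records when the function forwarded the message; knowing when the downstream service actually took it from the second queue would still require polling that queue's active message count.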
I have 65k records in an Azure Service Bus topic. While testing, whenever my test application is started, it reads all 65k records. Can you please help me with how we can avoid reading messages that have already been read, or how we can read only the messages that are sent after the test application starts?
From the question, it's unclear what exactly you're after. Here are a few things for consideration.
Queues/subscriptions are intended to be read by consumers, not to store messages and access them conditionally. To avoid receiving the same messages again, you should consume them, either by using the ReceiveAndDelete receive mode, or PeekLock and completing the received messages (see the sketch below).
If these messages are test messages and are not intended for production, do not mix the environments; use different namespaces.
Alternatively, set a short TimeToLive on your test messages to get rid of them. You could also drop the entity and recreate it, but I would avoid that if you're performing testing quite often.
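A minimal sketch of both receive modes with the Python azure-servicebus package (the connection string, topic, and subscription names are placeholders):

```python
from azure.servicebus import ServiceBusClient, ServiceBusReceiveMode  # pip install azure-servicebus

CONN_STR = "<connection-string>"  # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # PeekLock (the default): a received message stays locked until you settle it;
    # completing it removes it so it is not read again on the next run.
    with client.get_subscription_receiver(
        topic_name="<topic>",                # placeholder
        subscription_name="<subscription>",  # placeholder
        receive_mode=ServiceBusReceiveMode.PEEK_LOCK,
    ) as receiver:
        for msg in receiver.receive_messages(max_message_count=50, max_wait_time=5):
            print(str(msg))                 # handle the message
            receiver.complete_message(msg)  # settle it so it is gone for good

    # ReceiveAndDelete: the message is removed as soon as it is handed over,
    # so nothing is left behind for the next test run.
    with client.get_subscription_receiver(
        topic_name="<topic>",
        subscription_name="<subscription>",
        receive_mode=ServiceBusReceiveMode.RECEIVE_AND_DELETE,
    ) as receiver:
        for msg in receiver.receive_messages(max_message_count=50, max_wait_time=5):
            print(str(msg))  # already deleted from the subscription
```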
We've been having a few issues with Azure IoT Hub. I have a Stream Analytics job listening to an IoT Hub. The job, which was working perfectly fine, just started showing no input and no output. On restart it came up with the following error: "Stream Analytics job has validation errors: Querying EventHub returned an error: ProtocolName." That sort of indicates to me that it can't listen to the IoT Hub anymore. Has anyone else had similar issues? Help on troubleshooting this would be great.
[screenshot: stream analytics job error]
There was an issue with EventHub that has since been addressed which should fix this. If this problem persists, please contact support.
Is there a way to capture and redirect data error events/rows to a separate output?
For example, say I have events coming through and for some reason there are data conversion errors. I would like to handle those errors and do something with them, probably send them to a separate output for further investigation.
Currently, in the Stream Analytics error policy, if an event fails to be written to an output we only get two options:
Drop - drops the event, or
Retry - retries writing the event until it succeeds.
Collecting all error events is not currently supported. You can enable diagnostic logs and get a sample of every kind of error at frequent intervals.
Here is the documentation link.
If there is a way for you to filter such events in the query itself, then you could redirect them to a different output and reprocess them later.
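For example, a sketch in the Stream Analytics query language, assuming a numeric field named Temperature and two outputs named [good-output] and [error-output] (all names here are illustrative):

```sql
-- Events whose Temperature converts cleanly go to the main output.
SELECT *
INTO [good-output]
FROM [input]
WHERE TRY_CAST(Temperature AS float) IS NOT NULL

-- Events that would fail the conversion go to a separate output for investigation.
SELECT *
INTO [error-output]
FROM [input]
WHERE TRY_CAST(Temperature AS float) IS NULL
```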