ADF pipeline going into queue state - Azure

I have a Copy activity where the source and destination are both Blobs.
When I tried the copy pipeline previously, it ran successfully.
But currently it is going into a queued state for a long time, i.e. 30 minutes.
Can I know the reason behind it?

This is not an answer/solution to the problem; since I cannot comment yet, I had to put it in the answer section. It could be a delay in assigning the compute resources.
You can check the details by hovering the mouse pointer between the Name and Type columns beside the Copy activity in the monitoring view.

Related

Azure Logic Apps, how to track steps? Diagnostic settings?

So you can view past runs of Logic Apps (LA)... but if a loop (with many steps within it) is present in your Logic App and you stop the LA run (because it seems to run forever / isn't doing what you expect), you can't see what happened inside the loop.
I want to be able to track the Logic App's progress. I thought about adding an additional table storage step between every step to log where it's at; this would work, but that's a daft amount of work just to see what your LA is doing.
I tried adding diagnostics/Log Analytics to the LA, but it just seems to give a broader view of the LA runs... not the detail I need. Can someone tell me whether diagnostics can give me the detail I'm looking for, or whether there is another way of doing this? There must be a way.
Thanks.
The past runs should allow you to iterate through the iterations of the loop, showing the detail of the actions within.
If this doesn't satisfy, you can also add Tracked Properties to log specific values from within the loop execution to Log Analytics, in the AzureDiagnostics table.
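For reference, Tracked Properties are declared on the action itself in the workflow definition JSON. Here is a minimal sketch, assuming a Compose action inside your loop; the action name, property name, and id field are made up for illustration:

    "Log_item": {
        "type": "Compose",
        "inputs": "@item()",
        "runAfter": {},
        "trackedProperties": {
            "itemId": "@action()['inputs']['id']"
        }
    }

Each execution of the action then emits itemId with its diagnostic record, so you can query per-iteration values in Log Analytics without adding storage steps.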

Microsoft Flow with File Created Action is not triggered all the time

I have a OneDrive-synced local folder, and the files are synced to a SharePoint site when we add files to this folder. I also have a Flow that gets triggered for every file added.
The detailed article about what I am achieving here can be found here.
The problem is that it is not triggered every time. Let's say I added 100 files and the Flow triggered only 78 times. Are there any limitations on the Flow such that it can run only this many times in a timeframe? Has anyone else faced this issue? Any help is really appreciated. #sharepoint #sharepointonline #flow #onedrive
Finally, after spending a few hours, I got it working with 120 files at the same time. The Flow runs smoothly and efficiently now. Here is what I did.
Click on the three dots on the trigger in your Flow, and then click on Settings.
On the new screen, enable Split On (without this my Flow was not getting triggered) and give it the array value; clicking on the array dropdown will give you the matching value. Then turn on Concurrency, as shown in the preceding image, and set the Degree of Parallelism to the maximum (50 as of now).
According to Microsoft:
Concurrency control limits the number of concurrent runs of the flow; leave it off to run as many as possible at the same time. Concurrency control changes the way new runs are queued. It cannot be undone once enabled.
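For what it's worth, these two toggles correspond to properties on the trigger in the underlying workflow definition (Flow is built on Logic Apps). A rough sketch with an illustrative trigger name; splitOn debatches the array so each file gets its own run, and concurrency.runs caps how many run in parallel:

    "When_a_file_is_created": {
        "type": "ApiConnection",
        "splitOn": "@triggerBody()?['value']",
        "runtimeConfiguration": {
            "concurrency": {
                "runs": 50
            }
        }
    }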

Explanation of AUTOSAR BswMLinScheduleIndication container content

I am new to AUTOSAR, and I am quite puzzled by the content of the BswMLinScheduleIndication configuration container. The issue is that this container includes not only a reference to the LIN channel handle, but also a reference to the LIN schedule table handle. I don't understand that, since this container corresponds to the mode request source of the BswM_LinSM_CurrentSchedule() function. The description of the function states "Function called by LinSM to indicate the currently active schedule table for a specific LIN channel.", so naturally I conclude that the currently active schedule table handle is the mode value; but in that case, the reference to the LIN schedule table handle should belong to the BswMModeValue container, shouldn't it? If the LIN schedule table handle is not the mode value, then what is?
Unfortunately, AUTOSAR_EXP_ModeManagementGuide doesn't cover LIN issues.
Thank you in advance for your time and attention. Sorry for my bad English; I understand that my question may be poorly formed. Please forgive that, since it is sometimes difficult for a newbie even to formulate the right question.
Check the LinSM and the LinIf SWS, which describe the change of schedule tables of a LIN master (and only the LIN master). The LinIf switches between RUN_CONTINUOUS and RUN_ONCE schedule tables.
Why the LinIf needs schedule tables I cannot tell; I have never had a use for LIN at work yet. I hope it still helps.

Does Azure create a copy of an Azure Function on the back end?

I am trying to troubleshoot a problem where I run an Azure Function locally on my machine and have it disabled in the Portal. After sending some data through, I can see that it successfully hits my local Azure Function, but it never hits it again afterwards. Strangely enough, the data still appears to go through my chain of Queue - Function - Queue - Function, yet it never hits the breakpoints on my local machine after the first successful run. Triple-checking the Portal, I can see that the function is definitely disabled, which leads me to believe there might be another instance of the Azure Function running somewhere. I've confirmed that no other devs are working on it, so I've ruled that out too...
Looking at https://[MY_FUNCTION_NAME].scm.azurewebsites.net/azurejobs/#/functions, I see that there seem to be duplicates of some of my functions, with varying statistics between the repeats. My guess was that Azure might be tracking my local instances when I start them, but I see the "Successful" green numbers go up on both versions of the function when I pass data through. I blocked out the function names but replaced the matching ones with matching colors (the blacked-out bars are just single functions I was too lazy to color-code). The red circles indicate the functions of interest that have different success statistics.
Has anyone else run into this issue?
It turns out there were duplicate functions in a deployment slot... Someone put them there to set up deployment options, but they left the project and never documented it.
Hope this saves someone some frustration at some point!

Patterns for idempotent Azure operations?

Does anybody know patterns for designing idempotent operations against Azure, especially Table Storage? The most common approach is to generate an operation ID and cache it to verify new executions, but if I have dozens of workers processing operations, this approach becomes more complicated. :-))
Thanks
Ok, so you haven't provided an example, as requested by knightpfhor and codingoutloud. That said, here's one very common way to deal with idempotent operations: Push your needed actions to a Windows Azure queue. Then, regardless of the number of worker role instances you have, only one instance may work on a specific queue item at a time. When a queue message is read from the queue, it becomes invisible for the amount of time you specify.
Now, a few things can happen during processing of that message (a sketch of the basic read/process/delete loop follows this list):
You complete processing after your timeout period. When you go to delete the message, you get an exception.
You realize you're running out of time, so you increase the queue message timeout (today, you must call the REST API to do this; one day it'll be included in the SDK).
Something goes wrong, causing an exception in your code before you ever get to delete the message. Eventually, the message becomes visible in the queue again (after the specified invisibility timeout period).
You complete processing before the timeout and successfully delete the message.
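Here is what that basic loop might look like. This is only a sketch using the current azure-storage-queue Python package (the answer above predates it; updating a message's visibility timeout is in the SDK these days rather than REST-only). The connection string, queue name, and process() are placeholders:

    from azure.storage.queue import QueueClient

    # Placeholder connection string and queue name.
    queue = QueueClient.from_connection_string("<connection-string>", "work-items")

    # Each received message becomes invisible to every other worker for
    # visibility_timeout seconds, so only one instance handles it at a time.
    for msg in queue.receive_messages(visibility_timeout=300):
        try:
            process(msg.content)  # your (ideally idempotent) work goes here
        except Exception:
            # Don't delete: the message reappears once the visibility
            # timeout expires, and another worker can retry it.
            continue
        # Running long? Extend the invisibility window before it lapses:
        #     queue.update_message(msg, visibility_timeout=300)
        # Delete only after successful processing (at-least-once delivery);
        # deleting after the timeout has already lapsed raises an error.
        queue.delete_message(msg)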
That deals with concurrency. For idempotency, it's up to you to ensure you can repeat an operation without side effects. For example: you calculate someone's weekly pay, queue up a print job, and store the weekly pay in a Table row. For some reason a failure occurs, and you either never delete the message or your code aborts before getting an opportunity to delete it.
Fast-forward in time, and another worker instance (or maybe even the same one) re-reads this message. At this point, you should theoretically be able to simply re-perform the needed actions. If this isn't really possible in your case, you don't have an idempotent operation. However, there are a few mechanisms at your disposal to help you work around this (both are sketched in code after the list):
Each queue message has a DequeueCount. You can use this to determine whether the queue message has been processed before and, if so, take appropriate action (examine the Table row for that employee, for example).
Maybe there are stages of your processing pipeline that can't be repeated. In that case, you have the ability to modify the queue message contents while the message is still invisible to others and being processed by you. So imagine appending something like |SalaryServiceCalled, then a bit later appending |PrintJobQueued, and so on. Now, if you have a failure in your pipeline, you can figure out where you left off the next time you read the message.
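As a sketch of both mechanisms together, again with the azure-storage-queue Python package and the same queue client as above; the message format (an employee ID followed by stage markers) and the calc_weekly_pay() / queue_print_job() helpers are hypothetical:

    for msg in queue.receive_messages(visibility_timeout=300):
        if msg.dequeue_count > 1:
            # This message has been picked up before; check the Table row
            # for this employee before repeating side-effecting work.
            pass

        content = msg.content  # e.g. "employee-42|SalaryServiceCalled"
        stages = content.split("|")
        employee_id = stages[0]

        if "SalaryServiceCalled" not in stages:
            calc_weekly_pay(employee_id)  # hypothetical helper
            content += "|SalaryServiceCalled"
            # Rewrite the message while it is still invisible to others, so
            # a later retry can see which stages already completed.
            msg = queue.update_message(msg, content=content, visibility_timeout=300)

        if "PrintJobQueued" not in stages:
            queue_print_job(employee_id)  # hypothetical helper
            content += "|PrintJobQueued"
            msg = queue.update_message(msg, content=content, visibility_timeout=300)

        queue.delete_message(msg)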
Hope that helps. I'm kinda shooting in the dark here, not knowing more about what you're trying to achieve.
EDIT: I guess I should mention that I don't see the connection between idempotency and Table Storage. I think that's more of a concurrency issue, as idempotency would need to be dealt with whether you're using Table Storage, SQL Azure, or any other storage container.
I believe you can use a replay-log storage approach to solve this problem.
