How do you handle having different time delays and sending notifications to different users based on the environment they are running in (dev, test & production)?
We are developing long-running workflows that we would like to delay for minutes in our dev and test environments, but that need to delay for days in production.
These same workflows need to send their notifications to us in the dev environment and business users in the test and production environments.
What are the best practices for handling these types of situations?
Store the delay values in a list, and just change the values based on which environment you are in.
If you were creating the workflow in Visual Studio, you could vary the delay value based on the host name of the site the workflow is running on.
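For example, one way to implement either variant is to resolve the delay and recipients from configuration at runtime, so the same workflow definition moves through environments unchanged. A minimal sketch, assuming a hypothetical appSettings key named Environment (the class and key names are illustrative, not part of any workflow API):

```csharp
using System;
using System.Configuration;

// Hypothetical helper: reads the per-environment delay and notification
// recipients from web.config/app.config so the same workflow assembly
// can be promoted through dev, test, and production unchanged.
public static class WorkflowSettings
{
    // e.g. <appSettings><add key="Environment" value="Dev" /></appSettings>
    private static string CurrentEnvironment
    {
        get { return ConfigurationManager.AppSettings["Environment"] ?? "Production"; }
    }

    public static TimeSpan ApprovalDelay
    {
        get
        {
            switch (CurrentEnvironment)
            {
                case "Dev":  return TimeSpan.FromMinutes(2);  // fast feedback while developing
                case "Test": return TimeSpan.FromMinutes(10); // long enough to exercise escalation
                default:     return TimeSpan.FromDays(3);     // real business delay in production
            }
        }
    }

    public static string NotificationRecipients
    {
        get
        {
            // Developers receive the mail in dev; business users everywhere else.
            return CurrentEnvironment == "Dev"
                ? "dev-team@example.com"
                : "business-users@example.com";
        }
    }
}
```

The workflow's delay activity then reads WorkflowSettings.ApprovalDelay instead of a hard-coded TimeSpan, so moving between environments becomes purely a configuration change.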
This is probably a weird situation. All the posts I have found on this topic are the other way around, where people want to check "Remove additional files"; in my case I want it unchecked, but that is causing problems at later stages. To give some context:
We are building around 15 to 20 Azure Functions as a wrapper API on top of the Dynamics CRM APIs. The two options we evaluated are:
a) Create each function in its own function app. This gives us a maintenance issue (20 URLs for each of Dev, SIT, UAT, Stage, Prod, and Training is a considerable mess to manage, along with their managed identities, app registrations, etc.). Another key reason not to take this approach is the consumption plan's warm-up issue: it is unlikely that all of these functions are heavily used, but some of them are.
b) Keep all functions under one big function app. This is our preferred option, as it takes care of most of the issues above. However, the problem we observed is that to deploy one function we have to wait for all of the functions to be tested and approved, and then deploy all of them even when the requirement is to deploy only one. That is a total no-no from an architectural point of view.
So we adopted a hybrid approach: in Visual Studio we still maintain multiple function app projects, but at deployment time all of these functions are deployed into a single function app by using Web Deploy and unchecking "Remove additional files at destination".
The problem now
This all worked very well for us during our POC. However, now that we have started deploying through pipelines into a staging slot, it has become a problem. Say we first deploy function 1 to staging and swap it to production: staging now has 0 functions and production has 1. Then when we deploy the 2nd Azure function, staging has only the 2nd function, and if we swap it with production, production ends up with only the 2nd function and we lose the 1st Azure function from production entirely.
Logically this sounds like expected behavior to me, but I'm wondering if anyone can suggest a workaround.
Please let me know if any further details are required.
Background
I have a set of logic apps that each call a set of function apps, which are run in parallel.
Each logic app is triggered to start at a certain time during the night, with the start times staggered an hour apart.
The Azure functions are written using the async pattern and call external APIs.
Problem
Sometimes the logic apps will run fine and complete their execution in a normal time period, and can do so for two or three days in a row.
However, sometimes they will take hours or days, forcing me to cancel their run.
Can anybody shed any light on why this might be happening?
Notes
I'm using the latest NuGet packages of the Durable Functions extension.
When debugging, the functions always complete in a timely fashion.
I have noticed that the functions sometimes get stuck at Pending.
It appears you have at least two function apps that are configured with the same storage account and task hub name:
AzureConsumptionXXX
AzureComputeXXX
This causes the two function apps to steal messages from each other. If functions in one app do not exist in the other app, then it's very possible for orchestrations to get stuck in a Pending state like this.
The simplest way to mitigate this is to give each function app a unique task hub name. Please see the Task Hubs documentation for more information: https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-task-hubs.
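For example, with Durable Functions 2.x the task hub name is set in host.json under the durableTask extension (in 1.x the durableTask section sits at the top level instead); the hub names below are illustrative:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "ConsumptionHub"
    }
  }
}
```

Give the second function app a different value (e.g. ComputeHub) so the two apps stop picking up each other's control-queue messages.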
I am developing an application using Azure Cloud Services and Web API. I would like to give users who create a consultation session the ability to change the price of that session, but I would also like to give all users 30 days to leave the session before the new price applies to the members currently signed up. My first thought was to use queue storage and set the visibility timeout to the 30-day limit, but this seems like it could grow the queue really fast over time, especially if messages are not supposed to run for 30 days, not to mention the ordering issues. I have looked at the task scheduler as well, but session price changes are not a recurring concept; they happen at random. Is the queue idea a good approach, or is there a better and more efficient way to accomplish this?
What you are trying to do should be done with a relational database. You can use timestamps to record when the price for a session changed. I wouldn't use a queue at all for this: a queue is for passing messages in a distributed system, whereas your problem is simply tracking which prices changed on which sessions and when. That data should be modeled in a database.
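A minimal sketch of that model (the class and method names are illustrative, and it assumes each session's original price is stored as its first row):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative model: one row per price change, stamped with the time
// the change was requested. The effective price is derived by a query,
// not stored, so nothing has to "fire" at exactly the 30-day mark.
public class SessionPriceChange
{
    public int SessionId { get; set; }
    public decimal NewPrice { get; set; }
    public DateTime RequestedUtc { get; set; }
}

public static class Pricing
{
    // The price currently in force: the most recent change requested at
    // least 30 days ago. Newer changes are still pending, giving members
    // their window to leave the session before the new price applies.
    public static decimal EffectivePrice(
        IEnumerable<SessionPriceChange> history, DateTime nowUtc)
    {
        return history
            .Where(c => c.RequestedUtc <= nowUtc.AddDays(-30))
            .OrderByDescending(c => c.RequestedUtc)
            .First() // assumes the original price is the session's first row
            .NewPrice;
    }
}
```

This pushes the 30-day rule into a query instead of a delayed message, so nothing needs to run at exactly the right moment.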
I think this scenario is better suited to Azure Scheduler. Programmatically create a job with a one-time recurrence, with the date set 30 days out, so it runs once. When the scheduler triggers the job, have its action call back to one of your APIs/services to apply the price change and any other required updates, and also remove the job from the scheduler as part of that action so the jobs list stays clean. In any case, the premium plan of an Azure Scheduler job collection gives you an unlimited number of jobs to run.
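The job's action is then just an HTTP call back into your own service. A minimal sketch of such a callback endpoint in ASP.NET Web API (the route, parameter, and steps are assumptions on my part, not Azure Scheduler specifics):

```csharp
using System.Web.Http;

// Illustrative callback endpoint for the scheduler job's action.
public class PriceChangeCallbackController : ApiController
{
    // Azure Scheduler POSTs here when the one-time job fires,
    // 30 days after the price change was requested.
    [HttpPost]
    [Route("api/pricechanges/{changeId}/apply")]
    public IHttpActionResult Apply(int changeId)
    {
        // 1. Load the pending price change (data access omitted).
        // 2. Apply the new price to the members still signed up.
        // 3. Delete the scheduler job so the jobs list stays clean.
        return Ok();
    }
}
```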
Hope this is exactly what you were looking for...
I would consider using Azure WebJobs. A WebJob basically gives you the ability to run a .NET console application within the context of an Azure Web App. It can be run on demand, continuously, or on a recurring schedule. If your processing requirements are low and allow for it, a WebJob can also run in the same process your Web App runs in, which saves you $$$, as WebJobs are free that way.
You could schedule the WebJob to run once or twice per day, examine the situation, and react as appropriate. Since it's really just a .NET console application, you have ultimate flexibility.
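A minimal sketch of such a WebJob, reusing the hypothetical SessionPriceChange model from the earlier answer (the data-access methods are placeholders for whatever data layer you use):

```csharp
using System;
using System.Collections.Generic;

// A scheduled WebJob is just a console app: run once or twice a day,
// find price changes whose 30-day window has elapsed, apply them, exit.
public class Program
{
    public static void Main()
    {
        DateTime cutoffUtc = DateTime.UtcNow.AddDays(-30);

        foreach (SessionPriceChange change in LoadPendingChanges(cutoffUtc))
        {
            ApplyToSession(change); // update pricing for remaining members
            MarkApplied(change);    // so the next run skips this change
        }
    }

    // Placeholders for your data layer (EF, ADO.NET, ...).
    static IEnumerable<SessionPriceChange> LoadPendingChanges(DateTime cutoffUtc)
    {
        // Query changes requested before the cutoff and not yet applied.
        return new List<SessionPriceChange>();
    }

    static void ApplyToSession(SessionPriceChange change) { }
    static void MarkApplied(SessionPriceChange change) { }
}
```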
Where can I find a good online reference on Lotus Notes agents? I am currently having problems with simultaneous agents and with understanding how agents work, best practices, etc. Thanks in advance!
"I currently having problems with having simultaneous agents"
Based on this comment, I take it you are running a scheduled agent?
The way scheduled agents work, only one agent from a particular database can run at a time, even if you have multiple Agent Manager (AMGR) threads. Also, agents cannot run more often than every 5 minutes; the UI will let you enter a lower number, but it will change it.
The other factor to take into account is how long your agent runs. If it runs longer than the interval you set up, you will end up backlogging the schedule. Also, the server can be configured to kill agents that run over a certain time, so you need to make sure the agent completes within that timeframe.
Now, to bypass all this, you can execute an agent from the Domino console as follows:
tell amgr run "database.nsf" 'agentName'
This runs in its own thread, outside the scheduler. Because of this, you can create a Program document to execute an agent at intervals of less than 5 minutes, and to run multiple agents within the same database.
Doing this is dangerous, however, as you have to be aware of a number of issues:
As the agent is outside the control of the scheduler, you can't kill it the way you would in the scheduler.
Running multiple threads can tie up more processes. While the scheduler will simply backlog everything if an agent runs longer than its schedule, a Program document in this situation can crash the server.
You need to be aware of what the agent is doing in the database, so that it won't interfere with other agents in the same database and can cope if it is run twice in parallel.
For more reading material on this:
Improving Agent Manager performance:
http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/topic/com.ibm.help.domino.admin.doc/DOC/H_AGENT_MANAGER_NOTES_INI_VARIABLES.html
Agent Manager troubleshooting:
http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/topic/com.ibm.help.domino.admin.doc/DOC/H_ABOUT_TROUBLESHOOTING_AGENTS.html
Troubleshooting agents (old material, but still relevant):
http://www.ibm.com/developerworks/lotus/library/ls-Troubleshooting_agents/index.html
... and related tech notes:
Title: How to run two agents concurrently in the same database using a wrapper agent
http://www.ibm.com/support/docview.wss?uid=swg21279847
Title: How to run multiple agents in the same database using a Program document
http://www.ibm.com/support/docview.wss?uid=swg21279832
I've hit an issue with creating a timer job on demand from within an event handler. It works fine on my dev machine, where the user is also the farm administrator. On the staging server (and in production too), this user will be different. Apparently it has to be a farm admin who creates or updates timer jobs, as they have access to the configuration database.
I used a timer job to cope with the possibility that many items could be updated at once using the datasheet view; if that happened, I wanted a rollup update to take place at a defined period after the edits.
I'm now thinking I may have to set up a recurring timer job instead of a "once" job and, within the timer job, check for certain conditions being true before doing any work.
Any suggestions on how I could achieve my desired result of having a rollup function run after any updates, but not after every one?
The previous answer is not correct, or at least not correct for SharePoint 2010. You cannot create job definitions in 2010 this way, even with elevated privileges, as they must be created from Central Administration. I had a similar problem, and this was my finding. I wrote a blog post about it.
I would suggest that you create an event receiver that delays for, say, 10 minutes (via a timer or a thread sleep) and registers itself in, say, the web property bag so that another instance won't run at the same time. This could solve the problem.
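A rough sketch of that idea using the server object model (the property-bag key and the 10-minute delay are illustrative; a production version would need more care around errors and app-pool recycles):

```csharp
using System;
using System.Threading;
using Microsoft.SharePoint;

// ItemUpdated is an asynchronous "-ed" event, so sleeping here
// does not block the user's request.
public class RollupEventReceiver : SPItemEventReceiver
{
    private const string LockKey = "RollupPending"; // illustrative key

    public override void ItemUpdated(SPItemEventProperties properties)
    {
        SPWeb web = properties.Web;

        // If a rollup is already scheduled, let that instance handle it.
        if (web.AllProperties.ContainsKey(LockKey))
            return;

        web.AllProperties[LockKey] = DateTime.UtcNow.ToString("o");
        web.Update();

        try
        {
            // Wait out the burst of datasheet edits, then roll up once.
            Thread.Sleep(TimeSpan.FromMinutes(10));
            // ... perform the rollup here ...
        }
        finally
        {
            web.AllProperties.Remove(LockKey);
            web.Update();
        }
    }
}
```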
See http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spsecurity.runwithelevatedprivileges.aspx to fix the permissions problem.