AnyLogic: Release a specific resource from a resource pool

I've got another small issue with AnyLogic resources.
I want to be able to release a specific resource from a resource pool - not just any resource from the pool. The reason is that I occasionally seize multiple resources from a ResourcePool (one at a time) and then wish to release them again one at a time. But I don't want to release "any" resource from the pool; I want to be able to specify which specific unit of the pool to release.
Is this possible or is this one of the limitations of the resources implementation?

One way to do this that has worked for us before is to use separate agents to grab resources. For example:
Suppose there is a main WorkItem agent. Then:
When a resource is needed, a Split block spawns a new agent called ResourceHolder.
The new ResourceHolder then grabs the resource using a normal Seize.
Afterwards, the ResourceHolder carrying the unit is joined back to the WorkItem using Combine.
The ResourceHolder has to be stored somewhere in the WorkItem, and it should be built so it can tell which resource unit it is carrying (i.e. original resource pool, type of resource, when it was grabbed, etc.). Then, when a specific resource unit needs to be released, the model finds the right ResourceHolder in the WorkItem and runs it through a Release block. It is a little cumbersome, but it gives very fine control over release logic.

I can think of many ways to do this depending on the situation. The first is to use a SelectOutput before the Release in order to release or not; the SelectOutput condition checks whether it's the right resource to release.
The other option, if you want to release everything with the same Release block but in a given order, is to put a Wait block before the Release block and wait for the right moment to release the resource.
Another one is to use wrap-up actions: put a Wait block in the wrap-up to wait for the other resources to arrive there before releasing, so they are released in order.

The only way to release specific resources with the standard Seize blocks is to specify that you want to release resources that were seized at a specific Seize block.
This implies that you need as many Seize and Release blocks as you want points of control over the release process, i.e. if you seize 5 units of a resource type and want to release them one by one over the course of the flowchart, you will need 5 Seize and 5 Release blocks.

Terraform locked state file (S3) solution

I am trying to fix the well-known issue where multiple pipelines and colleagues run terraform plan and get the following error:
│ Error: Error acquiring the state lock
I would like to know if there is any source on possible ways to get rid of this issue, so CI/CD pipelines and engineers can run plan without having to wait a long time before they are able to work.
Even HashiCorp says to be careful with force-unlock, as there is a risk of multiple writers:
Be very careful with this command. If you unlock the state when
someone else is holding the lock it could cause multiple writers.
Force unlock should only be used to unlock your own lock in the
situation where automatic unlocking failed.
Is there a way that we can write the file to disk before performing the plan?
The locking is there to protect you. You may run a plan (or apply) with --lock=false:
terraform plan --lock=false
But I wouldn't encourage that, as you lose the benefits of state locking; it's there to protect you from conflicting modifications made to your infrastructure.
You would want to run terraform plan against the most recent state, which is usually written by the very last apply operation run on your main/master branch.
If the plan takes too long to run or apply while your engineers are working on different sub-parts of the infrastructure, you could consider a refactoring where you break your infrastructure into multiple folders and run a separate terraform plan/apply for each of them (src). Of course, this may come with the cost of refactoring and moving resources from one state to another.
One other approach is to disable the state refresh on PR pipelines by setting --refresh=false, which likewise keeps you from taking full advantage of Terraform's state management with diffs and state locking.
And of course, as a last resort for the few exceptions where you end up with a locked state (for example when a plan gets cancelled, or the runner drops its connection and doesn't release the state), you may consider manually running terraform force-unlock [options] LOCK_ID.
Resources:
https://developer.hashicorp.com/terraform/language/state/locking
https://cloud.google.com/docs/terraform/best-practices-for-terraform
https://github.com/hashicorp/terraform/issues/28130#issuecomment-801422713

Azure Batch integration with Bitbucket

Is there any way to build and deploy Azure Batch application packages when changes are pushed to a Bitbucket repository?
I'm looking for the same deployment approach as for Azure Functions, or something like it.
To start with, this is what I think off the top of my head.
Cool, I will share some information and thoughts around this; I am sure you can make use of the information to develop your idea.
There are 2 levels of application package:
Pool level; and
Task level
Detailed information here: https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
A pool-level package is set on the pool and available to any task joining the pool, whereas a task-level package gets unpacked when the task is created.
Please be aware of the max limits on packages etc.:
* https://learn.microsoft.com/en-us/azure/batch/batch-quota-limit#other-limits
Key point
AFAIK, there is no flag that can tell the VM that the current package has been updated, hence in your scenario two things can happen:
Pool-level scenario (if you are joining the pool every time): if you can afford to recreate the pool, you can keep the package name, and every time the code is updated you recreate the pool, which ends up creating the whole thing again, i.e. the new package gets picked up.
Task level: if you don't want to recreate the pool all the time, you can instead create a new task every time your code changes; the caveat there is the max limit described at the link above.
Both ways you can do it via user code, but deciding which scenario suits you depends on the grand architecture of the case.
Information flow possibility at the user end:
Some resource (your code) lives in Bitbucket.
On any change to that resource, the user packs it into *.zip format and then carries on with the Batch side of things.
The user creates the pool or specifies task-level packages (depending on the detail above); versions can also be added for the same package (beware of the max limits here).
The package is then available on the VM.
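
To make that flow concrete, here is a minimal sketch of the Batch side using the Batch .NET SDK (Microsoft.Azure.Batch); the account values, job/task IDs, the "myapp" package name, and run.bat are all placeholders of mine, not anything from the question:

using System.Collections.Generic;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

// Placeholder account values - substitute your own Batch account details.
var credentials = new BatchSharedKeyCredentials(
    "https://<account>.<region>.batch.azure.com", "<account>", "<key>");
using (BatchClient batchClient = BatchClient.Open(credentials))
{
    // Task-level package: downloaded and unpacked when the task is created.
    var task = new CloudTask("build-task-001", "cmd /c run.bat");
    task.ApplicationPackageReferences = new List<ApplicationPackageReference>
    {
        new ApplicationPackageReference { ApplicationId = "myapp", Version = "2.0" }
    };
    batchClient.JobOperations.AddTask("build-job", task);

    // Pool-level alternative: set CloudPool.ApplicationPackageReferences before
    // pool.Commit(), and recreate the pool when the package changes (scenario above).
}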
Alternate approach:
There is another, non-package way this can be done:
Mount a drive to the node in the start task, and have your user code make sure that the drive always gets updated with the latest version of the files.
I hope this helps your scenario/design :), thanks!

Multiple instances of continuous WebJob on single VM in Azure

I have a continuous WebJob running on my Azure Website. It is responsible for doing some work after retrieving items from a QueueTrigger. I am attempting to increase the rate at which the items are processed off the queue. As I scale out my App Service Plan, the processing rate increases as expected.
My concern is that it seems wasteful to pay for additional VMs just to run additional instances of my WebJob. I am looking for options/best practices for running multiple instances of the same WebJob on a single server.
I've tried starting multiple JobHosts in individual threads within Main(), but either that doesn't work or I was doing something wrong; the WebJob would fail to run due to what looks like each thread trying to access 'WebJobSdk.marker'. My current solution is to publish my WebJob multiple times, each time modifying 'webJobName' slightly in 'webjob-publish-settings.json' so that the same project is considered a different WebJob at publish time. This works great so far, except that it creates a lot of additional work each time I need to make any update.
Ultimately, I'm looking for some advice on what the recommended way of accomplishing this would be. Ideally, I would like to get the multiple instances running via code, and only have to publish once when I need to update the code.
Any thoughts out there?
You can use the JobHostConfiguration.QueuesConfiguration.BatchSize and NewBatchThreshold settings to control the concurrency level of your queue processing. The latter, NewBatchThreshold, is new in the current in-progress beta1 release; by enabling "prerelease" packages in your NuGet package manager, you'll see the new release if you'd like to try it. Raising the NewBatchThreshold setting increases the concurrency level - e.g. setting it to 100 means that once the number of currently running queue functions drops below 100, a new batch of messages will be fetched for concurrent processing.
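
As a rough sketch (assuming the 1.x SDK, where these settings are exposed as config.Queues), the host setup would look something like this:

using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();
        config.Queues.BatchSize = 32;           // messages fetched per batch (max 32)
        config.Queues.NewBatchThreshold = 100;  // fetch a new batch once running functions drop below 100
        new JobHost(config).RunAndBlock();
    }
}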
The marker file bug was fixed in this commit a while back, and that fix is likewise part of the current in-progress v1.1.0 release.

Scheduled Tasks with Sql Azure?

I wonder if there's a way to use scheduled tasks with SQL Azure?
Any help is appreciated.
The point is that I want to run a simple, single-line statement every day and would like to avoid setting up a worker role.
There's no SQL Agent equivalent for SQL Azure today. You'd have to call your single-line statement from a background task. However, if you have a Web Role already, you can easily spawn a thread to handle this in your web role without having to create a Worker Role. I blogged about the concept here. To spawn a thread, you can either do it in the OnStart() event handler (where the Role instance is not yet added to the load balancer), or in the Run() method (where the Role instance has been added to the load balancer). Usually it's a good idea to do setup in the OnStart().
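
A minimal sketch of that idea (the WebRole class name and ExecuteDailyStatement are placeholders of mine; real scheduling logic would check whether the statement is actually due):

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Spawn a background thread; the instance is not yet load-balanced here.
        new Thread(DailyLoop) { IsBackground = true }.Start();
        return base.OnStart();
    }

    private static void DailyLoop()
    {
        while (true)
        {
            // ExecuteDailyStatement();  // placeholder for your single-line SQL call
            Thread.Sleep(TimeSpan.FromHours(24));
        }
    }
}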
One caveat that might not be obvious, whether you execute this call in its own worker role or in a background thread of an existing web role: if you scale your role to, say, two instances, you need to ensure that the daily call only occurs from one of the instances (otherwise you could end up with duplicates, or a possibly costly operation being performed multiple times). There are a few techniques you can use to avoid this, such as a table row-lock or an Azure Storage blob lease. With the former, you can use that row to store the timestamp of the last time the operation was executed. If you acquire the lock, you can check whether the operation occurred within a set time window (maybe an hour?) to decide whether one of the other instances already executed it. If you fail to acquire the lock, you can assume another instance has the lock and is executing the command. There are other techniques - this is just one idea.
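
For the blob-lease variant, here is a hedged sketch with the classic storage SDK (Microsoft.WindowsAzure.Storage); the connection string, container, and blob names are assumptions of mine:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

var account = CloudStorageAccount.Parse(connectionString);  // your storage connection string
CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("locks");
container.CreateIfNotExists();

CloudBlockBlob lockBlob = container.GetBlockBlobReference("daily-task-lock");
if (!lockBlob.Exists())
    lockBlob.UploadText("");  // a blob must exist before it can be leased

try
{
    // Only one instance can hold the lease at a time.
    string leaseId = lockBlob.AcquireLease(TimeSpan.FromSeconds(60), null);
    // ...check the stored timestamp, run the daily statement if it's due...
    lockBlob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
}
catch (StorageException)
{
    // Another instance holds the lease - skip this round.
}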
In addition to David's answer, if you have a lot of scheduled tasks to do then it might be worth looking at:
lokad.cloud - which has good handling of periodic tasks - http://lokadcloud.codeplex.com/
quartz.net - which is a good all-round scheduling solution - http://quartznet.sourceforge.net/
(You could use quartz.net within the thread that David mentioned, but lokad.cloud would require a slightly bigger architectural change)
I hope it is acceptable to talk about my own company: we have a web-based service that allows you to do this, with details on how to schedule execution of SQL Azure queries.
To overcome the issue of multiple role instances executing the same task, you can check the role instance ID and make sure that only the first instance executes the task:
using Microsoft.WindowsAzure.ServiceRuntime;

// Role instance IDs typically end in the instance number, so only the
// instance whose ID ends with "0" runs the task.
// (Caveat: at larger scale this also matches instances 10, 20, ...)
string id = RoleEnvironment.CurrentRoleInstance.Id;
if (!id.EndsWith("0"))
{
    return; // not the first instance - skip the task
}

How do I auto-start an Azure queue?

I want to build an Azure application that has two worker roles and NO web roles. When the worker roles first start up I want ONLY ONE of the roles to do the following a single time:
Download and parse a master file, then enqueue multiple "child" tasks based on the contents of the master file
Enqueue a single master-file-download "child" task to run the next day
Each of the "child" tasks would then be done by both of the workers until the task queue was exhausted. Think of the whole thing as "priming the pump".
This sort of thing is really easy if I add the first "master" task manually to a queue by calling a web role, but it seems to be really hard to do in an auto-start mode.
Any help in this regard would be greatly appreciated!
Thanks.....
One possibility: instead of calling a web role, just load the queue directly. (It sounds like this is the sort of application you'll want to automatically spin up to do some work and then shut down again... if you're automating that, it should be trivial to also automate loading the queue.)
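
For illustration, loading the queue directly is just a few lines with the classic storage SDK (the queue name and message text here are made up):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse(connectionString);  // your storage connection string
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("master-tasks");
queue.CreateIfNotExists();
queue.AddMessage(new CloudQueueMessage("download-and-parse-master-file"));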
A (perhaps) better option: Use some sort of locking mechanism to make sure only one worker instance does the initialization work. One way to do this is to try to create the queue (or a blob, or an entity in a table). If it already exists, then the other instance is handling initialization. If the create succeeds, then it's this instance's job.
Note that it's always better to use a lease than a lock, in case the instance that's doing the initialization fails. Consider using a timeout (e.g. storing a timestamp in table storage, in the metadata of the blob, or in the name of the queue...).
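
A sketch of the create-to-win idea (the queue name is a placeholder of mine):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse(connectionString);  // assumption: your storage connection string
CloudQueue initMarker = account.CreateCloudQueueClient().GetQueueReference("init-marker");
// CreateIfNotExists() returns true only for the caller that actually created
// the queue, so exactly one instance wins the initialization job.
if (initMarker.CreateIfNotExists())
{
    // This instance does the one-time work: download/parse the master file,
    // enqueue the child tasks, and enqueue tomorrow's master-download task.
}
// Otherwise another instance is (or was) initializing; pair this with a
// lease/timestamp as described above in case that instance dies mid-way.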
We ended up with the exact same sort of problem; that's why we introduced an O/C mapper (object to cloud). Basically, you want to introduce two types of cloud services:
QueueService that consumes messages whenever available.
ScheduledService that triggers operations on a scheduled basis.
Then, as others suggested, in the cloud you really should prefer leases over locks, to avoid your cloud app ending up frozen forever due to a temporary hardware (or infrastructure) issue.
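
In pattern terms, the split looks something like this (a hypothetical sketch of the two service types, not the actual Lokad.Cloud API):

using System;

// Consumes messages whenever they are available.
abstract class QueueService<TMessage>
{
    public abstract void Process(TMessage message);
}

// Triggers an operation on a scheduled basis.
abstract class ScheduledService
{
    public abstract TimeSpan TriggerInterval { get; }
    public abstract void Execute();
}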
