Locking multiple external resources in Jenkins

Is it possible to lock multiple external resources to a build in Jenkins? We have tried the External Resource Dispatcher Plugin but did not succeed.

It's not clear whether your issue can be solved only through the External Resource Dispatcher plugin (which doesn't seem to be under active development), but if you can afford to use the Lockable Resources Plugin, as pointed out by chown, there's a simplified syntax to lock multiple named resources in Jenkins pipelines, as described in this support request:
pipeline {
    agent any
    options {
        // Pipeline-scoped multiple resource lock
        lock(extra: [[resource: 'resa'], [resource: 'resb']])
    }
    stages {
        stage('Build') {
            steps {
                // Stage-scoped multiple resource lock
                lock(extra: [[resource: 'resc'], [resource: 'resd']]) {
                    // ...
                }
            }
        }
    }
}

You should also check out the Lockable Resources Plugin:
This plugin allows defining lockable resources (such as printers, phones, computers, etc.) that can be used by builds. If a build requires a resource which is already locked, it will wait for the resource to be free. One can define a lock priority globally and on a per-job basis.
https://github.com/jenkinsci/lockable-resources-plugin

The lock step also accepts an option called extra for locking resources in addition to the main resource specified:
lock(extra: [[resource: 'a']], resource: 'b') {
    // code
}
Now any other lock for either 'a' or 'b' will wait for the above lock.
You can find more details here: https://www.jenkins.io/doc/pipeline/steps/lockable-resources/

Related

How can I set the command override on an ECS scheduled task via Terraform?

I'm using this module: https://registry.terraform.io/modules/cn-terraform/ecs-fargate-scheduled-task/aws/latest
I've managed to build the scheduled task with everything except the command override on the container.
I cannot set the command override at the task definition level, because multiple scheduled tasks share the same task definition; the command override needs to happen at the scheduled task level, as it's unique per scheduled task.
I don't see anything that helps in the module's documentation, so I'm wondering whether there is another way to do this, perhaps by querying for the scheduled task once it's created and using a different module to set the command override.
If you look at the Terraform documentation for aws_cloudwatch_event_target, there is an example in there for an ECS scheduled task with command override. Notice how they are passing the override via the input parameter to the event target.
Now if you look at the source code for the module you are using, you will see they are passing anything you set in the event_target_input variable to the input parameter of the aws_cloudwatch_event_target resource.
So you need to pass the override as a JSON string as event_target_input in your module declaration (I would copy the example JSON string from the Terraform docs and modify it to your needs; a sketch follows below).
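As a rough, untested sketch (the module's other arguments are abbreviated, and the container name "app" is a placeholder for whatever your task definition actually uses), this could look like:

module "scheduled_task" {
  source = "cn-terraform/ecs-fargate-scheduled-task/aws"
  # ... the module's other required arguments ...

  # Passed through to the input parameter of aws_cloudwatch_event_target.
  event_target_input = jsonencode({
    containerOverrides = [
      {
        name    = "app" # placeholder: must match the container name in your task definition
        command = ["my-command", "--my-flag"]
      }
    ]
  })
}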

How to initialize Chrome extension `storage.local` under Manifest V3 service workers?

I have a Chrome extension where I want to be able to initialize settings if not already set (e.g., upon installation).
I want to do this in a way that the data can be accessed from content scripts and options pages under an assumption that the data has been initialized. Under Manifest V2, I used a background page and initialized the data synchronously with localStorage, which is no longer available under Manifest V3.
A candidate approach would be to run the following code from a service worker:
chrome.storage.local.get(['settings'], function(storage) {
  if (!storage.settings) {
    chrome.storage.local.set({settings: defaultSettings()});
  }
});
However, that seems not guaranteed to work, similar to the example from the migration documentation, since the service worker could be terminated before the asynchronous handling completes. Additionally, even if the service worker is not terminated, it seems not guaranteed that the data would be initialized by the time the content script executes.
I would like to initialize the data in a way that guarantees it is set and available by the time a content script executes. A workaround could be to check in the content script whether the data has been properly initialized, and use a fallback default value otherwise. Is there some alternative approach that could avoid this extra handling?
since the service worker could be terminated prior to the completion of the asynchronous handling
The service worker doesn't terminate at just any time; there are certain rules. The simplest rule of thumb is that it lives for 30 seconds; that time is extended by another 30 seconds whenever a subscribed chrome API event is triggered, and by 5 minutes when you have an open chrome.runtime messaging channel/port (currently such ports are auto-closed after 5 minutes in MV3).
So, assuming your example code runs right away rather than after a timeout of 29.999 seconds, the service worker won't terminate during the API call: the storage API takes just a few milliseconds to complete, not 30 seconds. The ManifestV3 documentation probably tried too hard to sell the non-existent benefits of switching extensions to service workers, so fact-checking was secondary when the article was written.
even if the service worker is not terminated, it seems to not be guaranteed that the data would be initialized by time the content script executes.
Yes.
Some solutions:
Include the defaults in your content scripts and other places. It's the simplest solution, and it also works in case the user clears the storage via devtools (the Storage Area Explorer extension or a console command).
Use chrome.scripting.registerContentScripts (instead of declaring content_scripts in manifest.json) after initializing the storage inside chrome.runtime.onInstalled; see the sketch after the messaging example below.
Use messaging in content/options scripts instead of chrome.storage, so that the background script is the sole source of truth. When the service worker is already running, messaging will actually be faster, since chrome.storage is very slow in Chrome (both the local and sync variants); to make caching truly effective you can use chrome.runtime ports to prolong the lifetime of the service worker to 5 minutes or longer, as shown here.
background script:
let settings;
let busy = chrome.storage.local.get('settings').then(r => {
  busy = null;
  settings = r.settings;
  if (!settings) {
    settings = defaultSettings();
    chrome.storage.local.set({settings});
  }
  return settings;
});
chrome.runtime.onMessage.addListener((msg, sender, sendResponse) => {
  if (msg === 'getSettings') {
    if (busy) {
      // Initial read still in progress: respond when it settles.
      busy.then(sendResponse);
      return true; // keep the message channel open for the async response
    } else {
      sendResponse(settings);
    }
  }
});
content/options script:
chrome.runtime.sendMessage('getSettings', settings => {
  // use settings inside this callback
});
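For the second solution above, a minimal sketch (assuming the scripting permission in manifest.json; the script file content.js and the match pattern are placeholders):
chrome.runtime.onInstalled.addListener(async () => {
  // Initialize the storage first...
  const {settings} = await chrome.storage.local.get('settings');
  if (!settings) {
    await chrome.storage.local.set({settings: defaultSettings()});
  }
  // ...then register the content script, so it can never run before the data exists.
  await chrome.scripting.registerContentScripts([{
    id: 'main',
    js: ['content.js'],
    matches: ['https://example.com/*'],
    runAt: 'document_idle',
  }]);
});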

Concurrency in Azure Function with .Net 5 (isolated)

With .NET 5, Azure Functions require the host to be instantiated via Program.cs:
class Program
{
    static Task Main(string[] args)
    {
        var host = new HostBuilder()
            .ConfigureAppConfiguration(configurationBuilder =>
            {
                configurationBuilder.AddCommandLine(args);
            })
            .ConfigureFunctionsWorkerDefaults()
            .ConfigureServices(services =>
            {
                services.AddLogging();
            })
            .Build();
        return host.RunAsync();
    }
}
If I were to add some global variables in Program.cs (say, static) so that they can be accessed by any of the endpoints in the Azure Function project, and the value of such a variable was changed during the execution of one of these endpoints, is there a chance (even a small one) that this update propagates into the execution of another endpoint executing just after? I struggle to understand to what extent the host is concurrent.
These were useful readings, but I did not find what I was looking for:
https://mikhail.io/2019/03/concurrency-and-isolation-in-serverless-functions/
https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-how-to#singleton-attribute
Azure Functions - Limiting parallel execution
http://azurefunda.blogspot.com/2018/06/handling-concurrency-in-azure-functions.html
See Azure Functions as stateless workers. If you want them to have state, either use Durable Entities or external storage. Implementations can change, even those of the underlying Azure Functions runtime; design accordingly is my advice.
Static global variables are often a bad idea. Especially in this case, you can't reliably predict what will happen: the process can be killed, and new instances can be brought up or taken down, possibly spanning multiple machines, due to dynamic scaling. So different executions can see different values of the static variable.
Again, you should design your functions in such a way that you do not have to worry about the underlying mechanisms; otherwise you will have a very tight coupling between the code and the hosting environment.
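As one hedged illustration of the "external storage" advice in the isolated model (ICounterStore and the queue name counter-items are hypothetical; a real implementation would sit on top of Blob/Table storage, Redis, or similar), shared state is read and written through an injected service instead of a static field:

using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;

// Hypothetical abstraction: implementations talk to storage outside the
// process, so all instances (and machines) observe the same state.
public interface ICounterStore
{
    Task<int> IncrementAsync(string key);
}

public class CounterFunction
{
    private readonly ICounterStore _store;

    // Constructor injection works once the service is registered in
    // ConfigureServices in Program.cs, e.g.:
    // services.AddSingleton<ICounterStore, MyBlobCounterStore>();
    public CounterFunction(ICounterStore store) => _store = store;

    [Function("Counter")]
    public async Task Run([QueueTrigger("counter-items")] string item)
    {
        // Each execution goes through external storage rather than a static,
        // so there is no in-process state to leak between executions.
        var value = await _store.IncrementAsync(item);
        // ... use value ...
    }
}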

Does Hazelcast honor a default cache configuration?

In the Hazelcast documentation there are a few brief references to a cache named "default"; for instance, here:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#jcache-declarative-configuration
Later, there is another mention of cache default configuration here:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#icache-configuration
What I would like is to be able to configure "default" settings that are inherited when caches are created. For instance, given the following configuration snippet:
<cache name="default">
    <statistics-enabled>true</statistics-enabled>
    <management-enabled>true</management-enabled>
    <expiry-policy-factory>
        <timed-expiry-policy-factory expiry-policy-type="ACCESSED" time-unit="MINUTES" duration-amount="2"/>
    </expiry-policy-factory>
</cache>
I'd like for the following test to pass:
@Test
public void defaultCacheSettingsTest() throws Exception {
    CacheManager cacheManager = underTest.get();
    Cache cache = cacheManager.createCache("foo", new MutableConfiguration<>());
    CompleteConfiguration cacheConfig = (CompleteConfiguration) cache.getConfiguration(CompleteConfiguration.class);
    assertThat(cacheConfig.isManagementEnabled(), is(true));
    assertThat(cacheConfig.isStatisticsEnabled(), is(true));
    assertThat(cacheConfig.getExpiryPolicyFactory(),
        is(AccessedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 2L)))
    );
}
Ehcache has a "templating" mechanism and I am hoping that I can get a similar behavior.
Hazelcast supports configuration with wildcards. You can use <cache name="*"> for all Caches to share the same configuration, or apply other patterns to group Cache configurations as you wish.
Note that since you already use Hazelcast declarative configuration to configure your caches, you should use CacheManager.getCache instead of createCache to obtain the Cache instance: caches created with CacheManager.createCache(..., Configuration) disregard the declarative configuration, since they are configured explicitly with the Configuration passed as an argument.
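For example, reusing the snippet from the question with a wildcard name (a sketch; the element contents are unchanged):

<cache name="*">
    <statistics-enabled>true</statistics-enabled>
    <management-enabled>true</management-enabled>
    <expiry-policy-factory>
        <timed-expiry-policy-factory expiry-policy-type="ACCESSED" time-unit="MINUTES" duration-amount="2"/>
    </expiry-policy-factory>
</cache>

With this in place, obtaining the cache via cacheManager.getCache("foo") should yield a configuration taken from the matching declarative entry.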

Setup webjob ServiceBusTriggers or queue names at runtime (without hard-coded attributes)?

Is there any way to configure triggers without attributes? I cannot know the queue names ahead of time.
Let me explain my scenario. I have one Service Bus queue, and for various reasons (complicated duplicate-suppression business logic) the queue messages have to be processed one at a time, so I have ServiceBusConfiguration.OnMessageOptions.MaxConcurrentCalls set to 1. Processing a message therefore holds up the whole queue until it is finished. Needless to say, this is suboptimal.
This 'one at a time' policy isn't quite that simple, though. The messages could be processed in parallel; they just have to be divided into groups (based on a field in the message), say A and B. Group A can process its messages one at a time, group B can process its own one at a time, and so on. A and B run in parallel, and all is good.
So I can create a queue for each group, A, B, C, ... etc. There are about 50 groups, so 50 queues.
I can create a queue for each, but how do I make this work with the Azure WebJobs SDK? I don't want to copy-paste a method with a different ServiceBusTrigger for each queue for the SDK to discover, just to enforce one-at-a-time per queue/group, and then update the code with another copy-paste whenever another group is needed. Fetching a list of queues at startup and tying them to the function would be preferable.
I have looked around and I don't see any way to do what I want. The ITypeLocator interface is pretty much hard-wired to look for attributes. I could probably abuse the INameResolver, but it seems like I'd still have to keep a bunch of near-duplicate methods around. Could I somehow create what the SDK is looking for at startup/runtime?
(To be clear, I know how to use INameResolver to get a queue name, as in How to set Azure WebJob queue name at runtime?, but though similar, that isn't my problem. I want to set up triggers for multiple queues at startup for the same function, to get the one-at-a-time-per-queue processing, without repeating the trigger attribute 50 times. I figured I'd ask again since the SDK repo is fairly active and it's been a year.)
Or am I going about this all wrong? Being dumb? Missing something? Any advice on this dilemma would be welcome.
The Azure WebJobs host discovers and indexes the functions with the ServiceBusTrigger attribute when it starts, so there is no way to set up queue triggers at runtime.
The simpler solution for you is to create a long-running job and implement the message receivers manually:
public class Program
{
    private static void Main()
    {
        var host = new JobHost();
        host.CallAsync(typeof(Program).GetMethod("Process"));
        host.RunAndBlock();
    }

    [NoAutomaticTrigger]
    public static async Task Process(TextWriter log, CancellationToken token)
    {
        var connectionString = "myconnectionstring";
        // You could also get the queue names from app settings or an Azure table.
        var queueNames = new[] { "queueA", "queueB" };
        var messagingFactory = MessagingFactory.CreateFromConnectionString(connectionString);
        foreach (var queueName in queueNames)
        {
            var receiver = messagingFactory.CreateMessageReceiver(queueName);
            receiver.OnMessage(message =>
            {
                try
                {
                    // do something
                    // ...

                    // Complete the message
                    message.Complete();
                }
                catch (Exception ex)
                {
                    // Log the error
                    log.WriteLine(ex.ToString());
                    // Abandon the message so that it can be retried.
                    message.Abandon();
                }
            }, new OnMessageOptions { MaxConcurrentCalls = 1 });
        }
        // Wait until the job stops or restarts.
        await Task.Delay(Timeout.InfiniteTimeSpan, token);
    }
}
Otherwise, if you don't want to deal with multiple queues, you can have a look at Azure Service Bus topics/subscriptions and create a SqlFilter per group to send your messages to the right subscription.
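A rough sketch of that idea, assuming the same classic WindowsAzure.ServiceBus SDK as above, a hypothetical topic named "orders", and a custom message property named "Group":

var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
foreach (var group in new[] { "A", "B", "C" })
{
    if (!namespaceManager.SubscriptionExists("orders", group))
    {
        // Each subscription only receives messages whose Group property matches.
        namespaceManager.CreateSubscription("orders", group, new SqlFilter("Group = '" + group + "'"));
    }
}

Senders would set message.Properties["Group"] accordingly, and each subscription can then be consumed with MaxConcurrentCalls = 1 just like the queues above.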
Another option could be to create your own trigger: the Azure WebJobs SDK provides extensibility points to create your own trigger binding:
Binding Extensions Overview
Good luck!
Based on my understanding, your need seems to be building a system that processes message batches in parallel. The solution from @Thomas is good, but I think the Azure Batch service with Table storage may be better, and could be used instead of the complex combination of Service Bus queues + WebJobs with a trigger.
Using Azure Batch with Table storage, you can control task creation and execute the tasks in parallel and at scale, and even monitor them; please refer to the tutorial to learn how.
