I am using scatter-gather in a Spring Integration flow to call 3 services in parallel:
.scatterGather(rlr -> rlr.applySequence(true)
        .recipientFlow(f1 -> f1
                .channel(c -> c.executor(executorMLCalls))
                .route(ifService1NeedsToBeCalled(), routeToService1OrBypassCall()))
        .recipientFlow(f2 -> f2
                .channel(c -> c.executor(executorMLCalls))
                .enrich(this::service2RequestEnricher)
                .enrich(this::service2Enricher))
        .recipientFlow(f3 -> f3
                .channel(c -> c.executor(executorMLCalls))
                .enrich(this::service3RequestEnricher)
                .enrich(this::service3Enricher)),
    agg -> agg.correlationStrategy(msg -> msg.getHeaders().get("corrMLCalls"))
        .releaseExpression("size() == 3"),
    sgs -> sgs.gatherTimeout(gatherTimeout)
        .requiresReply(true)
)
Service2 and service3 always have to be called, but whether service1 is called depends on conditions.
Because of this I cannot use the size() == 3 release expression, since sometimes it should be size() == 2.
How can I use a dynamic value in the expression? Can I call a local method in the release expression?
Thank you!
Regards,
V.
Yes, such an expression is BeanFactory-aware, so you can use the @ syntax to reach any bean in your application context.
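For example, a sketch only (the releaseDecider bean and its shouldRelease() method are assumed names, not something from your flow; the MessageGroup is the root object of that expression):
// Delegate the release decision to a bean registered in the application context.
// "releaseDecider" and shouldRelease(MessageGroup) are hypothetical names.
agg -> agg.correlationStrategy(msg -> msg.getHeaders().get("corrMLCalls"))
        .releaseExpression("@releaseDecider.shouldRelease(#root)")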
On the other hand, there is a:
/**
 * @param releaseStrategy the release strategy.
 * @return the handler spec.
 * @see AbstractCorrelatingMessageHandler#setReleaseStrategy(ReleaseStrategy)
 */
public S releaseStrategy(ReleaseStrategy releaseStrategy) {
which can make any decision based on your environment and the MessageGroup as the releasing candidate.
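For example, a sketch that keys the decision off a per-request header (the "expectedMLCalls" header name is an assumption, something you would set to 2 or 3 before the scatter-gather):
// ReleaseStrategy is a functional interface: boolean canRelease(MessageGroup group),
// so a lambda can compare the group size against the expected number of replies.
agg -> agg.correlationStrategy(msg -> msg.getHeaders().get("corrMLCalls"))
        .releaseStrategy(group -> {
            Integer expected = (Integer) group.getOne().getHeaders().get("expectedMLCalls");
            return expected != null && group.size() >= expected;
        })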
I have encountered the following issue in Alloy. Consider this toy code, which tries to capture even-labeled entities (V1 is for State and V2 is for ProductStateSet):
enum State {s1, s2, s3, s4, s5, s6}
enum DummySet {b, c}

let ProductStateSet = DummySet->State

pred evenV1 (state: State) {
  (state = s2) or (state = s4) or (state = s6)
}

pred evensetV1 (stateset: State) {
  all state: stateset | evenV1[state]
}

assert a2V1 {
  evensetV1[(s2 + s4)]
}

pred evenV2 (state: ProductStateSet) {
  (state = b->s2) or (state = b->s4) or (state = b->s6)
}

assert a1V2 {
  evenV2[b->s2]
}

pred evensetV2 (stateset: ProductStateSet) {
  all state: stateset | evenV2[state]
}

assert a2V2 {
  evensetV2[(b->s2) + (b->s4)]
}
The assertion a2V1 is true, but a2V2 is false, whereas I would have expected them to behave the same. Why is this so, and what is the proper way to use quantifiers when dealing with set products?
If I change "evenset" to use "some" rather than "all", there are no issues with evensetV1, but for evensetV2 I get:
pred evensetV2 (stateset: ProductStateSet) {
  some state: stateset | evenV2[state]
}

assert a2V2 {
  evensetV2[(b->s2) + (b->s4)]
}
Executing "Check a2V2"
Solver=sat4j Bitwidth=4 MaxSeq=4 SkolemDepth=1 Symmetry=20
Generating CNF...
.
Analysis cannot be performed since it requires higher-order
quantification that could not be skolemized.
Another question for this example regarding set comprehension: I can write an assertion like:
assert a3V1 {
  #{state: State | evenV1[state]} > 2
}
Is there a way to print out the set elements, that is, can I print out the below set?
{state: State | evenV1[state]}
Thanks!
Regarding your first question:
EDITED (first answer was wrong)
Running the assertion gives 2 counterexamples.
One is expected, in the sense that none -> none is not considered by the evenV2 predicate. But the other (see below) doesn't make sense to me.
My only explanation is that the quantification variable "state" of evensetV2 is interpreted incorrectly when passed as a parameter to evenV2, even though that seems somewhat far-fetched ...
I think whenever possible, one should avoid quantification and prefer more straightforward set operations. Implementing the predicate and assertion as follows solves the problem (you don't even need to differentiate between the singleton and the set approach anymore):
pred evenV2 ( state: ProductStateSet){
DummySet.state in (s2+s4+s6)
}
assert a2V2 {
evenV2[ (b->s4)+ (b->s6)]
}
END EDIT
For the second question, the Alloy evaluator is your friend: run a command, open the generated instance, click Evaluator in the instance window, and type the comprehension {state: State | evenV1[state]} to see its tuples.
I'm using Quantum to handle cron jobs. The setup is the following:
application.ex
def start(_type, _args) do
  ...
  children = [
    ...
    worker(MyApp.Scheduler, [])
  ]

  opts = [strategy: :one_for_one, name: MyApp.Supervisor]
  Supervisor.start_link(children, opts)
end
config.exs
config :My_app, MyApp.Scheduler,
  jobs: [
    {"*/5 * * * *", fn -> Mix.Task.run "first_mix_task" end},
    {"*/5 * * * *", fn -> Mix.Task.run "second_mix_task" end},
    {"*/5 * * * *", fn -> Mix.Task.run "third_mix_task" end},
    {"*/5 * * * *", fn -> Mix.Task.run "fourth_mix_task" end}
  ]
The problem is that, for some reason, the Mix tasks run only the first time after the cron jobs are added. Later, although I can see in the logs that the crons are started and ended (according to Quantum), the Mix tasks are never triggered.
I'm not including the Mix tasks here because they work fine on the first run and also when called from the console, so I think the issue has to be in the configuration I'm showing here. But if you have a good reason to look there, just let me know.
Mix.Task.run/1 only executes a task the first time it's called, unless it is re-enabled.
Runs a task with the given args.
If the task was not yet invoked, it runs the task and returns the result.
If there is an alias with the same name, the alias will be invoked instead of the original task.
If the task or alias were already invoked, it does not run them again and simply aborts with :noop.
https://hexdocs.pm/mix/Mix.Task.html#run/2
You can use Mix.Task.rerun/1 instead of Mix.Task.run/1 to re-enable and invoke the task again:
...
{"*/5 * * * *", fn -> Mix.Task.rerun "first_mix_task" end},
...
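Mix.Task.rerun/1 is essentially Mix.Task.reenable/1 followed by Mix.Task.run/1, so this more explicit form works too:
...
{"*/5 * * * *", fn ->
  Mix.Task.reenable("first_mix_task")
  Mix.Task.run("first_mix_task")
end},
...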
I'm porting a C library to Go. A C function (with varargs) is defined like this:
curl_easy_setopt(CURL *curl, CURLoption option, ...);
So I created wrapper C functions:
curl_wrapper_easy_setopt_str(CURL *curl, CURLoption option, char* param);
curl_wrapper_easy_setopt_long(CURL *curl, CURLoption option, long param);
If I define the functions in Go like this:
func (e *Easy) SetOption(option Option, param string) {
	e.code = Code(C.curl_wrapper_easy_setopt_str(e.curl, C.CURLoption(option), C.CString(param)))
}

func (e *Easy) SetOption(option Option, param long) {
	e.code = Code(C.curl_wrapper_easy_setopt_long(e.curl, C.CURLoption(option), C.long(param)))
}
The Go compiler complains:
*Easy·SetOption redeclared in this block
So does Go support function (method) overloading, or does this error mean something else?
No, it does not.
See the Go Language FAQ, and specifically the section on overloading.
Method dispatch is simplified if it doesn't need to do type matching as well. Experience with other languages told us that having a variety of methods with the same name but different signatures was occasionally useful but that it could also be confusing and fragile in practice. Matching only by name and requiring consistency in the types was a major simplifying decision in Go's type system.
Update: 2016-04-07
While Go still does not have overloaded functions (and probably never will), the most useful feature of overloading, that of calling a function with optional arguments and inferring defaults for those omitted, can be simulated using a variadic function, which has since been added. But this comes at the cost of losing type checking.
For example: http://changelog.ca/log/2015/01/30/golang
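A minimal sketch of that pattern (the function and parameter names here are illustrative, not taken from the linked post):

package main

import "fmt"

// The timeout is optional: callers may omit it and get a default, but the
// compiler no longer checks how many extra values are passed.
func Connect(addr string, timeoutSecs ...int) string {
	timeout := 30 // default used when the caller omits the argument
	if len(timeoutSecs) > 0 {
		timeout = timeoutSecs[0]
	}
	return fmt.Sprintf("connecting to %s with a %ds timeout", addr, timeout)
}

func main() {
	fmt.Println(Connect("example.com:80"))    // uses the default
	fmt.Println(Connect("example.com:80", 5)) // overrides it
}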
According to this, it doesn't: http://golang.org/doc/go_for_cpp_programmers.html
In the Conceptual Differences section, it says:
Go does not support function overloading and does not support user defined operators.
Even though this question is really old, I still want to point out that there is a way to achieve something close to function overloading, although it may not make the code as easy to read.
Say you want to overload the function Test():
func Test(a int) {
	println(a)
}

func Test(a int, b string) {
	println(a)
	println(b)
}
The code above will cause a compile error. However, if you rename the first Test() to Test1() and the second to Test2(), and define a new function Test() using Go's variadic ..., you can call Test() as if it were overloaded.
code:
package main

func Test1(a int) {
	println(a)
}

func Test2(a int, b string) {
	println(a)
	println(b)
}

func Test(a int, bs ...string) {
	if len(bs) == 0 {
		Test1(a)
	} else {
		Test2(a, bs[0])
	}
}

func main() {
	Test(1)
	Test(1, "aaa")
}
output:
1
1
aaa
See more at: https://golangbyexample.com/function-method-overloading-golang/ (I'm not the author of the linked article, but I personally consider it useful).
No, Go doesn't have overloading.
Overloading adds compiler complexity and will likely never be added.
As Lawrence Dol mentioned, you could use a variadic function at the cost of no type checking.
Your best bet is to use generics and type constraints, which were added in Go 1.18.
To answer VityaSchel's question in the comments of Lawrence's answer about how to write a generic sum function, I've written one below.
https://go.dev/play/p/hRhInhsAJFT
package main

import "fmt"

type Number interface {
	int | int8 | int16 | int32 | int64 | uint | uint8 | uint16 | uint32 | uint64 | float32 | float64
}

func Sum[number Number](a number, b number) number {
	return a + b
}

func main() {
	var a float64 = 5.1
	var b float64 = 3.2
	fmt.Println(Sum(a, b))

	var a2 int = 5
	var b2 int = 3
	fmt.Println(Sum(a2, b2))
}
In Linux 2.6.11.12, before the schedule() function selects the "next" task to run, it locks the runqueue:
spin_lock_irq(&rq->lock);
and then, before calling context_switch() to perform the context switch, it calls prepare_arch_switch(), which is a no-op by default:
/*
* Default context-switch locking:
*/
#ifndef prepare_arch_switch
# define prepare_arch_switch(rq, next) do { } while (0)
# define finish_arch_switch(rq, next) spin_unlock_irq(&(rq)->lock)
# define task_running(rq, p) ((rq)->curr == (p))
#endif
That is, it holds rq->lock until switch_to() returns, and then the macro finish_arch_switch() actually releases the lock.
Suppose there are tasks A, B, and C, and that A now calls schedule() and switches to B (at this point rq->lock is held). Sooner or later, B calls schedule(). At that point, how can B acquire rq->lock, since it was locked by A?
There are also some arch-dependent implementations, such as:
/*
* On IA-64, we don't want to hold the runqueue's lock during the low-level context-switch,
* because that could cause a deadlock. Here is an example by Erich Focht:
*
* Example:
* CPU#0:
* schedule()
* -> spin_lock_irq(&rq->lock)
* -> context_switch()
* -> wrap_mmu_context()
* -> read_lock(&tasklist_lock)
*
* CPU#1:
* sys_wait4() or release_task() or forget_original_parent()
* -> write_lock(&tasklist_lock)
* -> do_notify_parent()
* -> wake_up_parent()
* -> try_to_wake_up()
* -> spin_lock_irq(&parent_rq->lock)
*
* If the parent's rq happens to be on CPU#0, we'll wait for the rq->lock
* of that CPU which will not be released, because there we wait for the
* tasklist_lock to become available.
*/
#define prepare_arch_switch(rq, next) \
do { \
spin_lock(&(next)->switch_lock); \
spin_unlock(&(rq)->lock); \
} while (0)
#define finish_arch_switch(rq, prev) spin_unlock_irq(&(prev)->switch_lock)
In this case, I'm quite sure this version does the right thing, since it unlocks rq->lock before calling context_switch().
But what about the default implementation? How can it do things right?
I found a comment in context_switch() in Linux 2.6.32.68 that tells the story behind the code:
/*
* Since the runqueue lock will be released by the next
* task (which is an invalid locking op but in the case
* of the scheduler it's an obvious special-case), so we
* do an early lockdep release here:
*/
So even though we switch to another task with the lock still held, there is no problem: the next task will unlock it, and if the next task is newly created, the function ret_from_fork() will eventually call finish_task_switch() to unlock rq->lock.
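A schematic of the hand-off in the default (lock held across the switch) case; this is not kernel code, just the sequence of events:

/*
 * Task A, on some CPU:
 *     schedule()
 *         spin_lock_irq(&rq->lock);          // A acquires the runqueue lock
 *         ...picks B as next...
 *         context_switch() -> switch_to();   // CPU now runs B, lock still held
 *
 * Task B resumes inside its own earlier schedule() call
 * (or in ret_from_fork() if it was just created):
 *         finish_arch_switch(rq, prev);      // B releases the lock that A took
 *         return from schedule();
 *
 * Later, when B calls schedule() again, rq->lock is free,
 * so its spin_lock_irq(&rq->lock) succeeds as usual.
 */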
I'm following an example in Daniel Jackson's excellent book (Software Abstractions), specifically the example in which he has a token-ring setup in order to elect a leader.
I'm attempting to extend this example (Ring election) to ensure that the token, instead of being limited to one, is being passed around to all members within the provided time (and each member only being elected once, not multiple times). However (mostly due to my inexperience in Alloy), I'm having issues figuring out the best way. Initially I'd thought that I could play with some of the operators (changing -'s to +'s), but I don't seem to be quite hitting the nail on the head.
Below is the code from the example. I've marked up a few areas with questions...any and all help is appreciated. I'm using Alloy 4.2.
module chapter6/ringElection1 --- the version up to the top of page 181
open util/ordering[Time] as TO
open util/ordering[Process] as PO
sig Time {}
sig Process {
  succ: Process,
  toSend: Process -> Time,
  elected: set Time
}

// ensure processes are in a ring
fact ring {
  all p: Process | Process in p.^succ
}

pred init [t: Time] {
  all p: Process | p.toSend.t = p
}

// QUESTION: I'd thought that within this predicate and the following fact I could
// change the logic from only having one election at a time to all being elected eventually.
// However, I can't seem to get the logic down for this portion.
pred step [t, t': Time, p: Process] {
  let from = p.toSend, to = p.succ.toSend |
    some id: from.t {
      from.t' = from.t - id
      to.t' = to.t + (id - p.succ.prevs)
    }
}

fact defineElected {
  no elected.first
  all t: Time-first | elected.t = {p: Process | p in p.toSend.t - p.toSend.(t.prev)}
}

fact traces {
  init [first]
  all t: Time-last |
    let t' = t.next |
      all p: Process |
        step [t, t', p] or step [t, t', succ.p] or skip [t, t', p]
}

pred skip [t, t': Time, p: Process] {
  p.toSend.t = p.toSend.t'
}
pred show { some elected }
run show for 3 Process, 4 Time
// This generates an instance similar to Fig 6.4
//QUESTION: here I'm attempting to assert that ALL Processes have an election,
// however the 'all' keyword has been deprecated. Is there an appropriate command in
// Alloy 4.2 to take the place of this?
assert OnlyOneElected { all elected.Time }
check OnlyOneElected for 10 Process, 20 Time
This network protocol is precisely about electing a single process as the leader, so I don't really understand what you mean by having "all processes elected eventually".
Instead of all elected.Time, you can equivalently write elected.Time = Process (since the type of elected is Process -> Time). This just says that elected.Time (the set of processes elected at any time step) is exactly the set of all processes, which, obviously, doesn't mean that "only one process is elected", as suggested by the name of your assertion.
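Written out as a check, using the same bound style as your other commands (the assertion name is just illustrative):

// every process is elected at some time step
assert EveryProcessElected { elected.Time = Process }
check EveryProcessElected for 10 Process, 20 Time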