How to execute a second plan when the first fails its query, in Jason AgentSpeak agents

I have a little problem with my program. In one section, agent X sends a tell to agent Y, and agent Y checks its belief base to verify the fact. The problem is when the plan fails: I tried adding a -plan but it did not work. I would appreciate any help. I have changed the context of the problem so that it is better understood, but the main idea is the same.
/* Initial beliefs and rules */
wedding_guests(john).
wedding_guests(anna).
wedding_guests(bob).
wedding_guests(ed).
/* Initial goals */
/* Plans */
+I_can_enter(R)[source(Ag)] <-
   ?wedding_guests(R);
   .print(R, "You are on the guest list");
   riesgo2.

-I_can_enter(R) <-
   .print("You can not pass").

Failure plans can be used only for goals; in your case, the plan +i_can_enter... reacts to new beliefs, not new goals, and the plan -i_can_enter... reacts to the deletion of such a belief. (Note the lower case in i_can_enter; otherwise it is a variable.)
There are many ways to solve your problem:
use the performative achieve instead of tell, so that the receiver gets a new goal instead of a new belief. The plan then could be +!i_can_enter(R) ..., with the failure plan being -!i_can_enter(R) .... (a sketch appears after these options)
place the test in the context of the plan:
+i_can_enter(R)[source(Ag)] : wedding_guests(R) <- .print(R,"You are on the guest list").
+i_can_enter(R) <- .print("You can not pass").
use the askOne performative. In the receiver:
+?can_enter(R,ok) : wedding_guests(R). // R can enter if it is a wedding guest
+?can_enter(R,nok). // R cannot otherwise
in the sender:
.send(receiver_name, askOne, can_enter(john,_), can_enter(_,ok)); // only continue if answer is "ok"
or
.send(receiver_name, askOne, can_enter(john,_), can_enter(_,A));
.print("it is ",A," for john to enter");
The fourth term in the .send unifies with the answer.
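For the first option (achieve instead of tell), here is a minimal sketch; the receiver name doorman and the guest john are assumptions for illustration:
// in the sender:
.send(doorman, achieve, i_can_enter(john));
// in the receiver:
+!i_can_enter(R)[source(Ag)] <-
   ?wedding_guests(R);
   .print(R, " you are on the guest list").
// goal-failure plan, runs when the test goal above fails:
-!i_can_enter(R) <-
   .print("You can not pass").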

Determining whether a value is known at plan-time

Terraform allows values to be marked as "unknown" during the plan step, since many values may only be known after certain resources have been applied.
Is there any way, during the plan step, to check if a value is known or unknown?
Specifically, I'd like to be able to do something like this:
locals {
  foo = "hello world"
  bar = uuid()
}

output "foo_known" {
  value = knownatplan(local.foo)
}

output "bar_known" {
  value = knownatplan(local.bar)
}
Outputs:
foo_known = true
bar_known = false
Where knownatplan would be a function, or some sort of other mechanism, to determine if the value is known at plan time.
There is no mechanism to do this because to do so would break an assumption that Terraform relies on to do its work: that replacing unknown values with known values during the final apply step can only add information, never change information. The above guarantee is fundamental to the concept of planning a change before taking any side-effects.
Other systems whose language doesn't have this mechanism can only provide a hypothetical "dry run" of changes that may not be complete or accurate, whereas Terraform aims to make the additional promise that it will either make the changes as shown in the plan or return an error explaining why it cannot. Any situation where applying the plan succeeds but generates a result other than what the plan reported is always considered to be a bug, either in Terraform Core itself or in the relevant provider. Unknown values are a big part of how Terraform keeps that promise.
I wrote about this in more detail in a blog article Unknown Values: The Secret to Terraform Plan.
I had a little bit of time now, and could implement a cool "feature" that the inventors of Terraform would probably not like much. ^-^
I am the author of terraform-provider-value (github) (terraform registry), and there I have two resources called value_is_fully_known (link) and value_is_known (link). Sounds good? Just look in the documentation or look here for some examples.
Fundamentally they allow you to have a true or false for a value that might be "(known after apply)" before you actually apply! Here is an example:
terraform {
  required_providers {
    value = {
      source  = "pseudo-dynamic/value"
      version = "0.5.1"
    }
  }
}

locals {
  foo = "hello world"
  bar = uuid()
}

resource "value_unknown_proposer" "default" {}

resource "value_is_known" "foo" {
  value            = local.foo
  guid_seed        = "foo"
  proposed_unknown = value_unknown_proposer.default.value
}

resource "value_is_known" "bar" {
  value            = local.bar
  guid_seed        = "bar"
  proposed_unknown = value_unknown_proposer.default.value
}

output "foo_known" {
  value = value_is_known.foo.result
}

output "bar_known" {
  value = value_is_known.bar.result
}
which results in:
Outputs:
bar_known = false
foo_known = true
Before actual apply?
Concretely, you run through two plan-phases: one is what I call the plan-phase, and the other is inside what I call the apply-phase, which includes another, independent and implicit plan-phase.
The plan-phase is the one where you see the plan, the potential results (most of the time with a lot of "known after apply" values) and the message "Terraform will perform the following actions:".
The apply-phase, on the other hand, is when all values are calculated, from the point of view of the provider author, Terraform, or you when you see "Apply complete!".
It was very challenging to implement an acceptable solution, because during the plan-phase the provider has no chance to save anything, as nothing is persistent. Only changes in the implicit plan-phase are stored, and only when the apply-phase was successful.
My thoughts
Knowing before you apply whether a value is known or unknown can enable some cool workflows in Terraform, but they should be very limited. I can only encourage you to use this mechanism as rarely as possible. But if you have some cool usages for it, would you mind sharing them? Feel free to open a pull request to add an example or to begin a discussion.
I appreciate any form of feedback (constructive criticism, bugs and ideas) a lot. =)

What code instrumentation should be added to register each HTTP event in MeterRegistry with a specific tag and minute value? Event requests are in the millions

I need to analyse one HTTP event value, which should not be greater than 30 minutes, and 95% of events should fall into this bucket. If that fails, send an alert.
My first concern is to get the right metrics in /actuator/prometheus.
Steps I took:
In every HTTP request event, I get one integer value called eventMinute.
Using Micrometer's MeterRegistry, I tried the code below:
// MeterRegistry meterRegistry ...
meterRegistry.summary("MINUTES_ANALYSIS", tags);
where the tag is EVENT_MINUTE, which receives some integer value in each HTTP event.
But this way it floods the metrics, due to the millions of events.
Please guide me, I am a beginner at this. Thanks!!
The simplest solution (which I would recommend you start with) would be to just create 2 counters:
int theThing = getTheThing(); // obtain the value being checked (e.g. eventMinute)
if (theThing > 30) {
    meterRegistry.counter("my.request.counter.abovethreshold").increment();
}
meterRegistry.counter("my.request.counter.total").increment();
You would increment the counter that matches your threshold and another that tracks all requests (or reuse another meter that does that for you).
Then it is simple to set up a chart or alarm; alert when the share of requests within the threshold drops below 95%:
(my_request_counter_total - my_request_counter_abovethreshold) / my_request_counter_total < .95
(I didn't test the code. It might need a tiny bit of tweaking)
You'll be able to do a similar thing with DistributionSummary by setting various SLOs (I'm not familiar enough with them to offer one), but start with something simple first; if it is sufficient, you won't need the extra complexity.
There are certain ways to solve this problem.
1. Here is a function which receives tags, the metric name and a value:
public void createOrUpdateHistogram(String metricName, Map<String, String> stringTags, double numericValue)
{
    // convert the Map of tag names/values into Micrometer Tags
    Tags tags = Tags.empty();
    for (Map.Entry<String, String> entry : stringTags.entrySet()) {
        tags = tags.and(entry.getKey(), entry.getValue());
    }
    DistributionSummary.builder(metricName)
        .tags(tags)
        // can enforce SLOs here if required
        .publishPercentileHistogram()
        .minimumExpectedValue(1.0D) // choose based on how you want your distribution bucketed
        .maximumExpectedValue(30.0D)
        .register(this.meterRegistry)
        .record(numericValue);
}
It then produces metrics like:
delta_bucket{mode="CURRENT",le="30.0",} 11.0
delta_bucket{mode="CURRENT",le="+Inf",} 11.0
Since the +Inf bucket also holds the values below the threshold, subtract the le="30.0" bucket from the le="+Inf" bucket to get the count of events above 30 minutes.
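For example, assuming the metric and tag names from the sample output above, a Prometheus expression for the count above the 30-minute SLO would be:
delta_bucket{mode="CURRENT",le="+Inf"} - delta_bucket{mode="CURRENT",le="30.0"}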
Another way could be:
public void createOrUpdateHistogram(String metricName, Map<String, String> stringTags, double numericValue)
{
    // same Map-to-Tags conversion as above
    Tags tags = Tags.empty();
    for (Map.Entry<String, String> entry : stringTags.entrySet()) {
        tags = tags.and(entry.getKey(), entry.getValue());
    }
    Timer.builder(metricName)
        .tags(tags)
        .publishPercentiles(0.5D, 0.95D)
        .publishPercentileHistogram()
        .serviceLevelObjectives(Duration.ofMinutes(30L))
        .minimumExpectedValue(Duration.ofMinutes(30L))
        .maximumExpectedValue(Duration.ofMinutes(30L))
        .register(this.meterRegistry)
        .record((long) numericValue, TimeUnit.MINUTES);
}
It will only have two le buckets, the given time and +Inf.
It can be changed based on our requirements, and it also gives us quantiles.
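For illustration, a hypothetical call site for either helper; the metric name and the mode tag are taken from the examples above, and eventMinute comes from the question:
Map<String, String> tags = Map.of("mode", "CURRENT");
createOrUpdateHistogram("MINUTES_ANALYSIS", tags, eventMinute);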

How to prevent loops in jointjs / rappid

I'm building an application which uses jointjs / rappid and I want to be able to prevent loops from occurring across multiple cells.
JointJS already has some examples of how to avoid this within a single cell (connecting an "out" port to an "in" port of the same cell) but has nothing on how to detect and prevent loops occurring further up the chain.
To help understand, imagine each cell in the paper is a step to be completed. Each step should only ever be run once. If the last step has an "out" port that connects to the "in" port of the first cell, it will just loop forever. This is what I want to avoid.
Any help is greatly appreciated.
I actually found a really easy way to do this, for anyone else who wishes to achieve the same thing. Simply include the graphlib dependency and use the following:
paper.on("link:connect", function(linkView) {
    if (graphlib.alg.findCycles(graph.toGraphLib()).length > 0) {
        linkView.model.remove();
        // show some error message here
    }
});
This line:
graphlib.alg.findCycles(graph.toGraphLib())
returns an array containing any cycles, so by checking its length we can determine whether or not the paper contains any loops and, if so, remove the link the user is trying to create.
Note: This isn't completely foolproof, because if the paper already contains a loop (before the user adds a link), simply removing the link the user is creating won't remove the loop that already exists. For me this is fine because all of my papers will be created from scratch, so as long as this logic is always in place, no loops can ever be created.
Solution through graphlib
Based on Adam's graphlib solution: instead of findCycles, to test for loops the graphlib docs suggest using the isAcyclic function, which:
returns true if the graph has no cycles and returns false if it does. This algorithm returns as soon as it detects the first cycle.
Therefore this condition:
if(graphlib.alg.findCycles(graph.toGraphLib()).length > 0)
Can be shortened to:
if (!graphlib.alg.isAcyclic(graph.toGraphLib()))
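For completeness, a sketch of the same link:connect handler from the first answer, using the shortened check:
paper.on("link:connect", function(linkView) {
    if (!graphlib.alg.isAcyclic(graph.toGraphLib())) {
        linkView.model.remove();
        // show some error message here
    }
});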
JointJS functions solution
Look up the arrays of ancestors and successors of a newly connected element and intersect them:
// invoke inside an event which tests if a specific `connectedElement` is part of a loop
function isElementPartOfLoop(graph, connectedElement) {
    var elemSuccessors = graph.getSuccessors(connectedElement, { deep: true });
    var elemAncestors = connectedElement.getAncestors();
    // *** OR *** graph.getPredecessors(connectedElement, { deep: true });
    var commonElements = _.intersection(elemSuccessors, elemAncestors);
    // if an element is repeated (non-empty intersection), then it's part of a loop
    return !_.isEmpty(commonElements);
}
I haven't tested this, but the theory behind the test you are trying to accomplish should be similar.
This solution is not as efficient as using the graphlib functions directly.
Prevention
One way you could prevent the link from being added to the graph is by dealing with it in an event:
graph.on('add', _.bind(addCellOps, graph));

function addCellOps(cell, collection, opt) {
    if (cell.isLink()) {
        // test the link's target element: if it is part of a loop, remove the link
        var linkTarget = cell.getTargetElement();
        // `this` is the graph
        if (linkTarget && isElementPartOfLoop(this, linkTarget)) {
            cell.remove();
        }
    }
    // other operations ....
}

CRM Plugin: Pass a Variable Flag to a New Execution Pipeline

I have records that have an index attribute to maintain their position in relation to each other.
I have a plugin that performs a renumbering operation on these records when the index is changed or new one created. There are specific rules that apply to items that are at the first and last position in the list.
If a new (or existing changed) item is inserted into the middle (not technically the middle...just somewhere between start and end) of the list a renumbering kicks off to make room for the record.
This renumbering process fires in a new execution pipeline... We are updating record D. When I tell record E to change (to make room for D), that of course fires the plugin on the update message.
This renumbering is fine until we reach the end of the list where the plugin then gets into a loop with the first business rule that maintains the first and last record differently.
So I am trying to think of ways to pass a flag to the execution context spawned by the renumbering process so the recursion skips the boundary edge business rules if IsRenumbering == true.
My thoughts / ideas:
I have thought of using the Depth check > 1, but that isn't a reliable value as I can't explicitly turn it on or off... it may happen to work, but that is not engineering a solid solution; it is hoping nothing goes bump. Further, a colleague far more knowledgeable than I said that when a workflow calls a plugin the depth value is off and can't be trusted.
All my variables are scoped at the Execute level so as to avoid variable pollution at the class level... However, if I had a dictionary, tuple, or something at the class level where one value was the thread id and the other the flag value, then perhaps my subsequent execution context could check whether the same owning thread id had any values entered.
Any thoughts or other ideas on how to pass context information to a new pipeline would be greatly appreciated.
Per Nicknow's suggestion I tried SharedVariables, but they seem to be going out of scope:
First time firing post op:
if (base.Stage == EXrmPluginStepStage.PostOperation)
{
    ...snip...
    foreach (var item in RenumberSet)
    {
        Context.ParentContext.SharedVariables[recordrenumbering] = "googly";
        Entity renumrec = new Entity("abcd") { Id = item.Id };
        #region We either add or subtract indexes based upon sortdir
        ...snip...
        renumrec["abc_indexfield"] = TmpIdx + 1;
        break;
        .....snip.....
        #endregion
        OrganizationService.Update(renumrec);
    }
}
Now we come into the Pre-Op of the recursion process kicked off by the above post-op OrganizationService.Update(renumrec), and based upon this check it seems the shared variable didn't carry over...???
if (!Context.SharedVariables.Contains(recordrenumbering))
{
    //Trace.Trace("Null Set");
    //Context.SharedVariables[recordrenumbering] = IsRenumbering;
    Context.SharedVariables[recordrenumbering] = "Null Set";
}
Throwing an InvalidPluginExecutionException reveals:
Sanity Checks:
Depth : 2
Entity: ...
Message: Update
Stage: PreOperation [20]
User: 065507fe-86df-e311-95fe-00155d050605
Initiating User: 065507fe-86df-e311-95fe-00155d050605
ContextEntityName: ....
ContextParentEntityName: ....
....
IsRenumbering: Null Set
What you are looking for is IExecutionContext.SharedVariables. Whatever you add here is available throughout the entire transaction. Since you'll have child pipelines, you'll want to look at the ParentContext for the value. This can all get a little tricky, so be sure to do a lot of testing - I've run into many issues with SharedVariables and looping operations in Dynamics CRM.
Here is some sample (very untested) code to get you started.
public static bool GetIsRenumbering(IPluginExecutionContext pluginContext)
{
    var keyName = "IsRenumbering";
    var ctx = pluginContext;
    while (ctx != null)
    {
        if (ctx.SharedVariables.Contains(keyName))
        {
            return (bool)ctx.SharedVariables[keyName];
        }
        else ctx = ctx.ParentContext;
    }
    return false;
}

public static void SetIsRenumbering(IPluginExecutionContext pluginContext)
{
    var keyName = "IsRenumbering";
    var ctx = pluginContext;
    ctx.SharedVariables.Add(keyName, true);
}
A very simple solution: add a bit field to the entity called "DisableIndexRecalculation." When your first plugin runs, make sure to set that field to true for all of your updates. In the same plugin, check to see if "DisableIndexRecalculation" is set to true: if so, set it to null (by removing it from the TargetEntity entirely) and stop executing the plugin. If it is null, do your index recalculation.
Because you are immediately removing the field from the TargetEntity if it is true the value will never be persisted to the database so there will be no performance penalty.
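A rough sketch of that idea; the attribute name abc_disableindexrecalculation is an assumption for illustration:
// inside Execute(), before the renumbering logic
var target = (Entity)Context.InputParameters["Target"];

if (target.GetAttributeValue<bool>("abc_disableindexrecalculation"))
{
    // remove the flag so it is never persisted, then skip the recalculation
    target.Attributes.Remove("abc_disableindexrecalculation");
    return;
}

// ...index recalculation; set the flag on every update it issues:
renumrec["abc_disableindexrecalculation"] = true;
OrganizationService.Update(renumrec);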

How to check nothing has changed in Cucumber?

The business scenario I'm trying to test with Cucumber/Gherkin (SpecFlow, actually) is that given a set of inputs on a web form, I make a request, and need to ensure that (under certain conditions) when the result is returned, a particular field hasn't changed (under other conditions, it does). E.g.
Given I am on the data entry screen
When I select "do not update frobnicator"
And I submit the form
And the result is displayed
Then the frobnicator is not updated
How would I write the step "the frobnicator is not updated"?
One option is to have a step that runs before "I submit the form" that reads something like "I remember the value of the frobnicator", but that's a bit rubbish - it's a horrible leak of an implementation detail. It distracts from the test, and is not how the business would describe this. In fact, I have to explain such a line any time anyone sees it.
Does anyone have any ideas on how this could be implemented a bit nicer, ideally as written?
I disagree with the previous answer.
The gherkin text you felt like you wanted to write is probably right.
I'm going to modify it just a little to make it so that the When step is the specific action that is being tested.
Given I am on the data entry screen
And I have selected "do not update frobnicator"
When I submit the form
Then the frobnicator is not updated
How exactly you assert the result will depend on how your program updates the frobnicator, and what options that gives you... but to show it is possible, I'll assume you have decoupled your data access layer from your UI and are able to mock it - and therefore monitor updates.
The mock syntax I am using is from Moq.
...
private DataEntryScreen _testee;

[Given(@"I am on the data entry screen")]
public void SetUpDataEntryScreen()
{
    var dataService = new Mock<IDataAccessLayer>();
    var frobby = new Mock<IFrobnicator>();
    dataService.Setup(x => x.SaveRecord(It.IsAny<IFrobnicator>())).Verifiable();
    ScenarioContext.Current.Set(dataService, "mockDataService");
    _testee = new DataEntryScreen(dataService.Object, frobby.Object);
}
The important thing to note here is that the Given step sets up the object we are testing with ALL the things it needs... We didn't need a separate clunky step to say "and I have a frobnicator that I'm going to memorise" - that would be bad for the stakeholders and bad for your code's flexibility.
[Given(@"I have selected ""do not update frobnicator""")]
public void FrobnicatorUpdateIsSwitchedOff()
{
    _testee.Settings.FrobnicatorUpdate = false;
}

[When(@"I submit the form")]
public void Submit()
{
    _testee.Submit();
}

[Then(@"the frobnicator is not updated")]
public void CheckFrobnicatorUpdates()
{
    var dataService = ScenarioContext.Current.Get<Mock<IDataAccessLayer>>("mockDataService");
    dataService.Verify(x => x.SaveRecord(It.IsAny<IFrobnicator>()), Times.Never);
}
Adapt the principle of Arrange, Act, Assert depending on your circumstances.
Think about how you would test it manually:
Given I am on the data entry screen
And the blah is set to "foo"
When I set the blah to "bar"
And I select "do not update frobnicator"
And I submit the form
Then the blah should be "foo"
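A hypothetical SpecFlow binding for that final step, assuming a page object exposing a Blah property and an MSTest/NUnit-style Assert:
[Then(@"the blah should be ""(.*)""")]
public void ThenTheBlahShouldBe(string expected)
{
    Assert.AreEqual(expected, _dataEntryScreen.Blah);
}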
