Dust: How to check if an array which is passed to a dust file contains a certain element or not using dust helpers? - node.js

res.render('home', o);

o = {
    name: "Free Lunch",
    privilege: [3, 4],
    const: {
        RESET: 1,
        ADD: 2,
        DELETE: 3
        MANAGE: 4,
        REPORT: 5
    }
}
I want to check, with a dust-helper if condition, whether the value corresponding to REPORT (which is 5) is present in the privilege array or not.
<#if cond="......">
Any idea how to do it?

You could iterate over the array and test each element, but I can't think how you would use/store the result after the iteration. I don't know a way to do this with the current language facilities, but that is why custom dust helpers are provided. You could build a {@arrayContains key=array value=value} sort of helper. Since you don't say what you want to do once you determine the existence/non-existence, it is unclear what else the helper might do after the check.
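A rough sketch of such a helper, assuming a recent dustjs-linkedin build where context.resolve is available (the helper name and its key/value parameters are invented for illustration):
dust.helpers.arrayContains = function (chunk, context, bodies, params) {
    // "key" should resolve to an array, "value" to the element we are looking for
    var arr = context.resolve(params.key);
    var value = context.resolve(params.value);
    if (!Array.isArray(arr)) { arr = []; }
    // loose, string-based comparison since template parameters often arrive as strings
    var found = arr.some(function (item) { return String(item) === String(value); });
    if (found) {
        return chunk.render(bodies.block, context);
    }
    if (bodies['else']) {
        return chunk.render(bodies['else'], context);
    }
    return chunk;
};
In the template it could then be used along the lines of {@arrayContains key=privilege value=const.REPORT}can see reports{:else}no report access{/arrayContains}, adjusting the paths to your actual context.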

{@if cond="('{con.REPORT}' && '{con.REPORT}'.length)"}I EXIST DO SOMETHING{/if}
Before you test, make sure to add a , after the DELETE attribute

Related

Update a parameter value in Brightway

It seems like a simple question, but I have a hard time finding an answer to it. I already have a project with several parameters (project and database parameters). I would like to obtain the LCA results for several scenarios, with my parameters having different values each time. I was thinking of the following simple procedure:
change the parameters' value,
update the exchanges in my project,
calculate the LCA results.
I know that the answer should be in the documentation somewhere, but I have a hard time understanding how I should apply it to my ProjectParameters, DatabaseParameters and ActivityParameters.
Thanks in advance!
EDIT: Thanks to @Nabla, I was able to come up with this:
For ProjectParameter
import brightway2 as bw
from bw2data.parameters import ProjectParameter, DatabaseParameter, ActivityParameter

for pjparam in ProjectParameter.select():
    if pjparam.name == 'my_param_name':
        break
pjparam.amount = 3
pjparam.save()
bw.parameters.recalculate()
For DatabaseParameter
for dbparam in DatabaseParameter.select():
    if dbparam.name == 'my_param_name':
        break
dbparam.amount = 3
dbparam.save()
bw.parameters.recalculate()
For ActivityParameter
for param in ActivityParameter.select():
    if param.name == 'my_param_name':
        break
param.amount = 3
param.save()
param.recalculate_exchanges(param.group)
You could import DatabaseParameter and ActivityParameter, iterate until you find the parameter you want to change, update the value, save it, and recalculate the exchanges. I think you need to do it in tiers: first you update the project parameters (if any), then the database parameters that may depend on project parameters, and then the activity parameters that depend on them.
A simplified case without project parameters:
from bw2data.parameters import ActivityParameter, DatabaseParameter

# find the database parameter to be updated
for dbparam in DatabaseParameter.select():
    if (dbparam.database == uncertain_db.name) and (dbparam.name == 'foo'):
        break
dbparam.amount = 3
dbparam.save()

# there is also this method, if formulas depend on something else
# dbparam.recalculate(uncertain_db.name)

# here we update the exchanges of a particular activity (act)
for param in ActivityParameter.select():
    if param.group == ":".join(act.key):
        param.recalculate_exchanges(param.group)
You may want to update all the activities in the project instead of a single one as in the example; you just need to change the condition when looping through the activity parameters, as in the sketch below.
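For instance, a small sketch that recalculates exchanges for every parameterized activity group in the project (assuming, as in the code above, that ActivityParameter.recalculate_exchanges accepts the group name):
from bw2data.parameters import ActivityParameter

# collect every distinct activity-parameter group and recalculate its exchanges
for group in {param.group for param in ActivityParameter.select()}:
    ActivityParameter.recalculate_exchanges(group)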

AEM Query builder exclude a folder in search

I need to create a query where the params are like:
queryParams.put("path", "/content/myFolder");
queryParams.put("1_property", "myProperty");
queryParams.put("1_property.operation", "exists");
queryParams.put("p.limit", "-1");
But I need to exclude a certain path inside this blanket folder, say "/content/myFolder/wrongFolder", and search in all other folders (whose number keeps on varying).
Is there a way to do so? I didn't find it exactly online.
I also tried the unequals operation, as the parent path is being saved in a JCR property, but still no luck. I actually need something like unlike to avoid all occurrences of the path, but there is no such thing:
path=/main/path/to/search/in
group.1_property=cq:parentPath
group.1_property.operation=unequals
group.1_property.value=/path/to/be/avoided
group.2_property=myProperty
group.2_property.operation=exists
group.p.or=true
p.limit=-1
This is an old question but the reason you got more results later lies in the way in which you have constructed your query. The correct way to write a query like this would be something like:
path=/main/path/where
property=myProperty
property.operation=exists
property.value=true
group.p.or=true
group.p.not=true
group.1_path=/main/path/where/first/you/donot/want/to/search
group.2_path=/main/path/where/second/you/donot/want/to/search
p.limit=-1
A couple of notes: your group.p.or in your last comment would have applied to all of your groups because they weren't delineated by a group number. If you want an OR to be applied to a specific group (but not all groups), you would use:
path=/main/path/where
group.1_property=myProperty
group.1_property.operation=exists
group.1_property.value=true
2_group.p.or=true
2_group.p.not=true
2_group.3_path=/main/path/where/first/you/donot/want/to/search
2_group.4_path=/main/path/where/second/you/donot/want/to/search
Also, the numbers themselves don't matter - they don't have to be sequential, as long as property predicate numbers aren't reused (reusing one causes an exception when the QueryBuilder tries to parse the query). But for readability and general convention, they're usually presented sequentially.
I presume that your example was just thrown together for this question, but obviously your "do not search" paths would have to be children of the main path you want to search; otherwise including them in the query would be superfluous, since the query would not be searching them anyway.
AEM Query Builder Documentation for 6.3
Hope this helps someone in the future.
Using QueryBuilder you can execute:
map.put("group.p.not",true)
map.put("group.1_path","/first/path/where/you/donot/want/to/search")
map.put("group.2_path","/second/path/where/you/donot/want/to/search")
Also I've checked PredicateGroup's class API and they provide a setNegated method. I've never used it myself, but I think you can negate a group and combine it into a common predicate with the path you are searching on like:
final PredicateGroup doNotSearchGroup = new PredicateGroup();
doNotSearchGroup.setNegated(true);
doNotSearchGroup.add(new Predicate("path").set("path", "/path/where/you/donot/want/to/search"));
final PredicateGroup combinedPredicate = new PredicateGroup();
combinedPredicate.add(new Predicate("path").set("path", "/path/where/you/want/to/search"));
combinedPredicate.add(doNotSearchGroup);
final Query query = queryBuilder.createQuery(combinedPredicate);
Here is a query that specifies an operator on a specific group id:
path=/content/course/
type=cq:Page
p.limit=-1
1_property=jcr:content/event
group.1_group.1_group.daterange.lowerBound=2019-12-26T13:39:19.358Z
group.1_group.1_group.daterange.property=jcr:content/xyz
group.1_group.2_group.daterange.upperBound=2019-12-26T13:39:19.358Z
group.1_group.2_group.daterange.property=jcr:content/abc
group.1_group.3_group.relativedaterange.property=jcr:content/courseStartDate
group.1_group.3_group.relativedaterange.lowerBound=0
group.1_group.2_group.p.not=true
group.1_group.1_group.p.not=true

Validate value in field

Is there any way to check, when you type into a field, whether any document is already saved with that value in that field? E.g., if you type a projectno, I want to check if any other document already has that projectno. Any suggestion on how I can validate that?
Regards
You need a view in the database that is sorted in the first column by the field that you are using. I will assume it is a hidden view, called "(lookupUnique)". Build it and test it to make sure it is showing the field that you want in the first column, and that the values are sorted.
Now you need a way to do a lookup into this view. Ideally, you're wanting the lookup to fail -- because there is no document with the same value, in which case you allow the save to continue. But there's one other case where you might want to allow the save to continue. That's the case where the lookup succeeds because the lookup found the document that you are working on right now, which was previously saved and therefore is found in the view, and a user is now editing it again.
The @DbLookup function with the [RETURNDOCUMENTUNIQUEID] and [FAILSILENT] arguments is the IBM-recommended solution for this. I.e.,
foundId := @DbLookup("Notes":"NoCache"; "":""; "(lookupUnique)"; theUniqueFieldNameGoesHereWithoutQuotes; 1; [RETURNDOCUMENTUNIQUEID] : [FAILSILENT]);
If this formula returns "", then no match was found, therefore your code should return @Success to let the save continue. If it returns anything else, then compare the result with @DocumentUniqueID. If they match, then your code should return @Success to let the save continue. If they do not match, then you have found another document with the same value in the field, so your code should return @Failure with an appropriate error message.
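Put together, the field's input-validation formula might look roughly like this sketch (the field name and error message are placeholders):
foundId := @DbLookup("Notes":"NoCache"; "":""; "(lookupUnique)"; theUniqueFieldName; 1; [RETURNDOCUMENTUNIQUEID] : [FAILSILENT]);
@If((foundId = "") | (foundId = @Text(@DocumentUniqueID)); @Success; @Failure("A document with this value already exists"))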
Now here's the caveat: there have been known problems with [RETURNDOCUMENTUNIQUEID] in some versions of Domino, including a bug that caused Domino 6 servers to crash if an agent called ComputeWithForm on a document based on a form that used this feature. There's also a bug that causes it to return only the unid of the first match out of many matches, and so if you have duplicates this strategy in your code will allow users to re-save old documents that are already non-unique instead of forcing them to change them to make them unique, and that may or may not be what you want.
If either of those known issues might create a problem for you, then you would be better off not using [RETURNDOCUMENTUNIQUEID], and instead just do what Notes and Domino programmers did before IBM added the [RETURNDOCUMENTUNIQUEID] option in the first place: add another column to your (lookupUnique) view, and set the column value to @Text(@DocumentUniqueID). Change the 1 in the above @DbLookup formula to the number of the column that you added, and write your validation code to anticipate the possibility that you might get back an empty string, a single value, or a list of values.
If I type 45678, I get a value back because there is already a document with that value. I don't understand how I should validate it.
var dbname = session.getServerName() + "!!" + "proj\\webno.nsf";
getFieldValue = getComponent("oNo").getValue();
tmp = @DbLookup(dbname, "(webNo)", getFieldValue, "obNo");
if (tmp == getFieldValue)
{
    // Here I will do the validation: the returned value is the same as getFieldValue
    // (or tmp / getFieldValue is empty).
}
else
{
    // Here it is OK
}
Taking your code and modifying it: assuming we're in the database we're creating the document in, just use @DbName() instead of trying to build the name from the session and some hard-coding. When using validation, the value of the control should be accessible simply with value. Then just get all the values in the column and see if your value is in there.
I think the following should work.
<xp:inputText id="projectNumber" value="#{doc.ProjectNumber}">
    <xp:this.validators>
        <xp:validateExpression message="Value already in use">
            <xp:this.expression><![CDATA[#{javascript:
                var usedValues = @DbColumn(@DbName(), "(webNo)", 1);
                if (@IsMember(value, usedValues)) { return false; }
                return true;
            }]]></xp:this.expression>
        </xp:validateExpression>
    </xp:this.validators>
</xp:inputText>
Why don't you just generate a value for them? The simplest would be to use @Unique, but there are plenty of other ways besides making them create one themselves.

Masking answer options in Confirmit (jscript)

I'm trying to mask the answer options that show up in a 3DGrid question item in Confirmit, using the value of a background variable.
E.g. when "background1" ==1, display answer category 1. If "background1" ==0, do not display answer category 1. If "background2" ==1, display category 3, otherwise do not. In any case, display answer category 2.
Hopefully this is easy for someone out there (I'm a psychologist, not a coder...so not so much so for me :/)
Thanks!
In order to access the data inside a question/variable, we can use the f function of Confirmit.
for instance:
f('my_question_id').get();
When masking a question, we need to pass in a Set object so Confirmit knows which Codes to show and which not to show.
Often you will mask using a Set from a previous question. So you pass in the question_id and Confirmit does all the other magic.
Here we have the problem of not having a Set, so we will have to create our own.
For this, there are two approaches (they can be found in the scripting manual under Working with Sets > Methods of the Set Object > add and remove, and Working with Sets > User defined functions...).
I'm going to stick to the first one because it is easier to use ;)
What we will do first is create a script node (it doesn't matter where you create it, just somewhere in the survey; I often have a Functions folder with all my script nodes somewhere at the bottom of my survey).
In that script file we will have our function that creates our Set:
function CreateMyAwesomeSet()
{
    // create an empty Set
    var mySet = new Set();

    // if background1 equals 1, add 1 to our Set
    if (f('background1').get() == '1')
    {
        mySet.add(1);
    }

    // return the Set of allowed Codes
    return mySet;
}
Here we declare a function that we can now use wherever we want to.
So now, if we want to use this Set, we add a Code Mask to the grid:
CreateMyAwesomeSet()
You can of course change the name of the function and add extra if statements, for example as in the sketch below.
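For the scenario described in the question (always show category 2, show category 1 only when background1 equals 1, and category 3 only when background2 equals 1), the function might grow into something like this sketch:
function CreateMyAwesomeSet()
{
    var mySet = new Set();

    // category 2 is always displayed
    mySet.add(2);

    // category 1 only when background1 equals 1
    if (f('background1').get() == '1')
    {
        mySet.add(1);
    }

    // category 3 only when background2 equals 1
    if (f('background2').get() == '1')
    {
        mySet.add(3);
    }

    return mySet;
}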
hope this helps

Referencing external doc in CouchDB view

I am scraping a 90K record database using JSON-RPC and I am trying to put in some basic error checking. I want to start by scraping the database twice using two different settings and adding a prefix to the second scrape. This way I can check to ensure that the two settings are not producing different records (due to dropped updates, etc.). I wanted to implement the comparison using a view which compares each document from the first scrape with its twin produced by the second scrape and then emits the names of records with a difference between them.
However, I cannot quite figure out how to pull in another doc in the view; everything I have read only discusses external docs using the emit() function, which is too late to permit me to compare it. In the example below, the lookup() function would grab the referenced document.
Is this just not possible?
function(doc) {
    if (doc._id.slice(0, 1) !== '$' && doc._id.slice(0, 1) !== "_") {
        var otherDoc = lookup('$test' + doc._id);
        if (otherDoc) {
            var keys = Object.keys(doc);
            var same = true;
            keys.forEach(function(key) {
                if ((key.slice(0, 1) !== '_') && (key.slice(0, 1) !== '$') && (key !== 'expires')) {
                    if (!Object.equal(otherDoc[key], doc[key])) {
                        same = false;
                    }
                }
            });
            if (!same) {
                emit(doc._id, 1);
            }
        }
    }
}
Context
You are correct that this is not possible in CouchDB. The whole point of the map function is that it must be idempotent, otherwise you lose all the other nice benefits of a pre-calculated index.
This is why you cannot access external resources in the map function, whether they be other records or the clock. Any time you run a map you must always get the same result if you put the same record into it. Since there are no relationships between records in CouchDB, you cannot promise that this is possible.
Solution
However, you can still achieve your end goal, just by different means. Some possibilities:
Assuming there is some meaningful numeric value in each doc, you could use a view to take the sum of all those values and group them by which import you did ({key: <batch id>, value: <meaningful number>}). Then compare the two numbers in your client or the browser to see if they match.
A brute force approach would be to use a view to pair the docs that should match. Each doc is on a different row, but they're grouped by a common field. Then iterate through the entire index comparing the pairs. This would certainly be the quickest to code and doesn't depend on your application or data.
Implement a validation function to enforce a schema on your data. Just be warned that this will reduce your write throughput since each written record will be piped out of Erlang and into the JS engine. Also, this is only applicable if you're worried about properly formed records instead of their precise content, which might not be the case.
Instead of your different batch jobs creating different docs, have them place them into the same doc. The structure might look like this: { "_id": "something meaningful", "batch_one": { ..data.. }, "batch_two": { ..data.. } } Then your validation function could compare them or you could create a view that indexes all the docs that don't match. All depends on where in your pipeline you want to do the error checking and correction.
Personally I like the last option better, but only if you don't plan to use the database as-is in production; i.e., you wouldn't want to carry around all that extra data in each record.
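If you go with that last option, the view that indexes mismatching docs could be a map function along these lines (a sketch, assuming each merged doc carries the two scrapes under batch_one and batch_two and that a shallow, JSON-serialized comparison of top-level fields is enough):
function(doc) {
    if (doc.batch_one && doc.batch_two) {
        var same = true;
        Object.keys(doc.batch_one).forEach(function(key) {
            // naive field-by-field comparison; nested structures rely on stable key order
            if (JSON.stringify(doc.batch_one[key]) !== JSON.stringify(doc.batch_two[key])) {
                same = false;
            }
        });
        if (!same) {
            emit(doc._id, 1); // index only the docs whose two scrapes disagree
        }
    }
}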
Hope that helps.
Cheers.
