Nodejs (Infinispan): Does the Infinispan put method return null for a key inserted in the cache for the first time? - node.js

I have been reviewing the Infinispan documentation, and the overloaded put method is documented to return the value being replaced, or null if nothing is being replaced.
I am using the overloaded put method with Node.js and it is not returning the expected data; I get undefined instead.
How can I achieve this with Node.js?
Looked at the documentation, need assistance to understand the behavior with Nodejs
Documentation Link : https://docs.jboss.org/infinispan/9.2/apidocs/org/infinispan/commons/api/BasicCache.html#put-K-V-
V put(K key,
V value,
long lifespan,
TimeUnit unit)
An overloaded form of put(Object, Object), which takes in lifespan parameters.
Parameters:
key - key to use
value - value to store
lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
unit - unit of measurement for the lifespan
Returns:
the value being replaced, or null if nothing is being replaced.

From https://github.com/infinispan/js-client/blob/main/lib/infinispan.js#L327 it looks like put's third argument opts can have a previous property that makes it return the old value. Since the client is promise-based, try:
const oldValue = await client.put('key', 'value', { previous: true })
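A slightly fuller sketch, assuming the infinispan js-client's promise-based API; the host, port, and key names are placeholders, and whether a first-time insert resolves to null or undefined is worth verifying against your client version:
var infinispan = require('infinispan');

infinispan.client({ port: 11222, host: '127.0.0.1' })
  .then(function (client) {
    // With { previous: true }, the promise resolves to the prior value;
    // for a key inserted for the first time there is no prior value.
    return client.put('key', 'value', { previous: true })
      .then(function (oldValue) {
        console.log('previous value:', oldValue);
        return client.disconnect();
      });
  })
  .catch(console.error);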

Related

Creating Test data for ArangoDB

Hi, I would like to insert random test data into an edge collection called Transaction, with the fields _id, Amount and TransferType. I have written the code below, but it is showing a syntax error.
FOR i IN 1..30000
INSERT {
_id: CONCAT('Transaction/', i),
Amount:RAND(),
Time:Rand(DATE_TIMESTAMP),
i > 1000 || u.Type_of_Transfer == "NEFT" ? u.Type_of_Transfer == "IMPS"
} INTO Transaction OPTIONS { ignoreErrors: true }
Your code has multiple issues:
When you create a new document you can either omit the _key attribute and ArangoDB will create one for you, or specify one as a string. An _id attribute in the document will be ignored.
RAND() produces a random number between 0 and 1, so it needs to be multiplied to shift it into the range you want; you might also need to round it if you need integer values.
DATE_TIMESTAMP is a function, and you have passed it as a parameter to RAND(), which takes no parameters. But since it generates a numerical timestamp (milliseconds since 1970-01-01 00:00 UTC), it is not actually needed here. The only thing you need is the random number generation shifted to a range that makes sense (i.e. not in the 1970s).
The i > 1000 ... line is something I could only guess at. The key for the JSON attribute is missing, you reference a u variable that is not defined anywhere, and the ternary expression (cond ? true_value : false_value) is missing its :. My best guess is that you wanted to create a Type_of_Transfer attribute with the value "NEFT" when i > 1000 and "IMPS" when i <= 1000.
So, I rewrote your AQL and tested it
FOR i IN 1..30000
INSERT {
_key: TO_STRING(i),
Amount: RAND()*1000,
Time: ROUND(RAND()*100000000+1603031645000),
Type_of_Transfer: i > 1000 ? "NEFT" : "IMPS"
} INTO Transaction OPTIONS { ignoreErrors: true }
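If you want to run this from Node.js, a minimal sketch using the arangojs driver might look like the following; the connection details, credentials, and database name are assumptions for illustration:
const { Database } = require("arangojs");

const db = new Database({
  url: "http://localhost:8529",            // assumed endpoint
  databaseName: "test",                    // assumed database
  auth: { username: "root", password: "" } // assumed credentials
});

async function seedTransactions() {
  // The corrected AQL from above, passed as a plain query string.
  await db.query(`
    FOR i IN 1..30000
      INSERT {
        _key: TO_STRING(i),
        Amount: RAND() * 1000,
        Time: ROUND(RAND() * 100000000 + 1603031645000),
        Type_of_Transfer: i > 1000 ? "NEFT" : "IMPS"
      } INTO Transaction OPTIONS { ignoreErrors: true }
  `);
}

seedTransactions().catch(console.error);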

Mongoose/Mongo: why does $set auto-sort the data keys on update? [duplicate]

If I create an object like this:
var obj = {};
obj.prop1 = "Foo";
obj.prop2 = "Bar";
Will the resulting object always look like this?
{ prop1 : "Foo", prop2 : "Bar" }
That is, will the properties be in the same order that I added them?
The iteration order for objects has followed a certain set of rules since ES2015, but it does not (always) follow the insertion order. Simply put, the iteration order is a combination of the insertion order for string keys and ascending order for number-like keys:
// key order: 1, foo, bar
const obj = { "foo": "foo", "1": "1", "bar": "bar" }
Using an array or a Map object can be a better way to achieve this. Map shares some similarities with Object and guarantees the keys to be iterated in order of insertion, without exception:
The keys in Map are ordered while keys added to an object are not. Thus, when iterating over it, a Map object returns keys in order of insertion. (Note that in the ECMAScript 2015 spec objects do preserve creation order for string and Symbol keys, so traversal of an object with e.g. only string keys would yield keys in order of insertion.)
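For example, a small sketch showing a Map keeping the same keys as the object above in pure insertion order:
const m = new Map();
m.set("foo", "foo");
m.set("1", "1");
m.set("bar", "bar");

// Map iteration is strictly insertion order, even for number-like keys.
console.log([...m.keys()]); // ["foo", "1", "bar"]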
As a note, property order in objects wasn't guaranteed at all before ES2015. Definition of an Object from ECMAScript Third Edition (pdf):
4.3.3 Object
An object is a member of the type Object. It is an unordered collection of properties each of which contains a primitive value, object, or function. A function stored in a property of an object is called a method.
YES (but not always insertion order).
Most Browsers iterate object properties as:
Positive integer keys in ascending order (and strings like "1" that parse as ints)
String keys, in insertion order (ES2015 guarantees this and all browsers comply)
Symbol names, in insertion order (ES2015 guarantees this and all browsers comply)
Some older browsers combine categories #1 and #2, iterating all keys in insertion order. If your keys might parse as integers, it's best not to rely on any specific iteration order.
Current Language Spec (since ES2015): insertion order is preserved, except in the case of keys that parse as positive integers (e.g. "7" or "99"), where behavior varies between browsers. For example, Chrome/V8 does not respect insertion order when the keys parse as numeric.
Old Language Spec (before ES2015): Iteration order was technically undefined, but all major browsers complied with the ES2015 behavior.
Note that the ES2015 behavior was a good example of the language spec being driven by existing behavior, and not the other way round. To get a deeper sense of that backwards-compatibility mindset, see http://code.google.com/p/v8/issues/detail?id=164, a Chrome bug that covers in detail the design decisions behind Chrome's iteration order behavior.
Per one of the (rather opinionated) comments on that bug report:
Standards always follow implementations, that's where XHR came from, and Google does the same thing by implementing Gears and then embracing equivalent HTML5 functionality. The right fix is to have ECMA formally incorporate the de-facto standard behavior into the next rev of the spec.
Property order in normal Objects is a complex subject in JavaScript.
While ES5 explicitly specified no order, ES2015 defined an order in certain cases, and successive changes to the specification since have increasingly defined the order (even, as of ES2020, the for-in loop's order). Given the following object:
const o = Object.create(null, {
m: {value: function() {}, enumerable: true},
"2": {value: "2", enumerable: true},
"b": {value: "b", enumerable: true},
0: {value: 0, enumerable: true},
[Symbol()]: {value: "sym", enumerable: true},
"1": {value: "1", enumerable: true},
"a": {value: "a", enumerable: true},
});
This results in the following order (in certain cases):
Object {
0: 0,
1: "1",
2: "2",
b: "b",
a: "a",
m: function() {},
Symbol(): "sym"
}
The order for "own" (non-inherited) properties is:
Positive integer-like keys in ascending order
String keys in insertion order
Symbols in insertion order
Thus, there are three segments, which may alter the insertion order (as happened in the example). And positive integer-like keys don't stick to the insertion order at all.
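A quick way to see the three segments in practice (output reflects the current spec and modern engines):
// Integer-like keys first (ascending), then string keys in insertion
// order, then symbols.
const sym = Symbol("s");
const obj = { b: 1, "2": 2, a: 3, "1": 4, [sym]: 5 };

console.log(Object.keys(obj));     // ["1", "2", "b", "a"]
console.log(Reflect.ownKeys(obj)); // ["1", "2", "b", "a", Symbol(s)]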
In ES2015, only certain methods followed the order:
Object.assign
Object.defineProperties
Object.getOwnPropertyNames
Object.getOwnPropertySymbols
Reflect.ownKeys
JSON.parse
JSON.stringify
As of ES2020, all others do (some in specs between ES2015 and ES2020, others in ES2020), which includes:
Object.keys, Object.entries, Object.values, ...
for..in
The most difficult to nail down was for-in because, uniquely, it includes inherited properties. That was done (in all but edge cases) in ES2020. The following list from the linked (now completed) proposal provides the edge cases where the order is not specified:
Neither the object being iterated nor anything in its prototype chain is a proxy, typed array, module namespace object, or host exotic object.
Neither the object nor anything in its prototype chain has its prototype change during iteration.
Neither the object nor anything in its prototype chain has a property deleted during iteration.
Nothing in the object's prototype chain has a property added during iteration.
No property of the object or anything in its prototype chain has its enumerability change during iteration.
No non-enumerable property shadows an enumerable one.
Conclusion: Even in ES2015 you shouldn't rely on the property order of normal objects in JavaScript. It is prone to errors. If you need ordered named pairs, use Map instead, which purely uses insertion order. If you just need order, use an array or Set (which also uses purely insertion order).
At the time of writing, most browsers did return properties in the same order as they were inserted, but it was explicitly not guaranteed behaviour so shouldn't have been relied upon.
The ECMAScript specification used to say:
The mechanics and order of enumerating the properties ... is not specified.
However, in ES2015 and later, non-integer keys will be returned in insertion order.
This whole answer is in the context of spec compliance, not what any engine does at a particular moment or historically.
Generally, no
The actual question is very vague.
will the properties be in the same order that I added them
In what context?
The answer is: it depends on a number of factors. In general, no.
Sometimes, yes
Here is where you can count on property key order for plain Objects:
ES2015 compliant engine
Own properties
Object.getOwnPropertyNames(), Reflect.ownKeys(), Object.getOwnPropertySymbols(O)
In all cases these methods include non-enumerable property keys and order keys as specified by [[OwnPropertyKeys]] (see below). They differ in the type of key values they include (String and / or Symbol). In this context String includes integer values.
Object.getOwnPropertyNames(O)
Returns O's own String-keyed properties (property names).
Reflect.ownKeys(O)
Returns O's own String- and Symbol-keyed properties.
Object.getOwnPropertySymbols(O)
Returns O's own Symbol-keyed properties.
[[OwnPropertyKeys]]
The order is essentially: integer-like Strings in ascending order, non-integer-like Strings in creation order, Symbols in creation order. Depending which function invokes this, some of these types may not be included.
The specific language is that keys are returned in the following order:
... each own property key P of O [the object being iterated] that is an integer index, in ascending numeric index order
... each own property key P of O that is a String but is not an integer index, in property creation order
... each own property key P of O that is a Symbol, in property creation order
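A small sketch of which key types each of these methods returns, using a simple object in a modern engine:
const sym = Symbol("s");
const o = { b: "b", "1": "1", [sym]: "sym" };

console.log(Object.getOwnPropertyNames(o));   // ["1", "b"] (String keys only)
console.log(Object.getOwnPropertySymbols(o)); // [Symbol(s)] (Symbol keys only)
console.log(Reflect.ownKeys(o));              // ["1", "b", Symbol(s)] (both)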
Map
If you're interested in ordered maps you should consider using the Map type introduced in ES2015 instead of plain Objects.
As of ES2015, property order is guaranteed for certain methods that iterate over properties, but not others. Unfortunately, the methods which are not guaranteed to have an order are generally the most often used:
Object.keys, Object.values, Object.entries
for..in loops
JSON.stringify
But, as of ES2020, property order for these previously untrustworthy methods is guaranteed by the specification to be iterated over in the same deterministic manner as the others, due to the finished proposal: for-in mechanics.
Just like with the methods which have a guaranteed iteration order (like Reflect.ownKeys and Object.getOwnPropertyNames), the previously-unspecified methods will also iterate in the following order:
Numeric (array-index-like) keys, in ascending numeric order
All other non-Symbol keys, in insertion order
Symbol keys, in insertion order
This is what pretty much every implementation does already (and has done for many years), but the new proposal has made it official.
Although the current specification leaves for..in iteration order "almost totally unspecified", real engines tend to be more consistent:
The lack of specificity in ECMA-262 does not reflect reality. In discussion going back years, implementors have observed that there are some constraints on the behavior of for-in which anyone who wants to run code on the web needs to follow.
Because every implementation already iterates over properties predictably, it can be put into the specification without breaking backwards compatibility.
There are a few weird cases which implementations currently do not agree on, and in such cases, the resulting order will continue to be unspecified. For property order to be guaranteed:
Neither the object being iterated nor anything in its prototype chain is a proxy, typed array, module namespace object, or host exotic object.
Neither the object nor anything in its prototype chain has its prototype change during iteration.
Neither the object nor anything in its prototype chain has a property deleted during iteration.
Nothing in the object's prototype chain has a property added during iteration.
No property of the object or anything in its prototype chain has its enumerability change during iteration.
No non-enumerable property shadows an enumerable one.
In modern browsers you can use the Map data structure instead of an object.
MDN Web Docs > Map
A Map object can iterate its elements in insertion order...
In ES2015 it does, but not in the way you might think
The order of keys in an object wasn't guaranteed until ES2015. It was implementation-defined.
However, in ES2015 it was specified. Like many things in JavaScript, this was done for compatibility purposes and generally reflected an existing unofficial standard among most JS engines (with you-know-who being an exception).
The order is defined in the spec, under the abstract operation OrdinaryOwnPropertyKeys, which underpins all methods of iterating over an object's own keys. Paraphrased, the order is as follows:
All integer index keys (stuff like "1123", "55", etc) in ascending numeric order.
All string keys which are not integer indices, in order of creation (oldest-first).
All symbol keys, in order of creation (oldest-first).
It's silly to say that the order is unreliable - it is reliable, it's just probably not what you want, and modern browsers implement this order correctly.
Some exceptions include methods of enumerating inherited keys, such as the for .. in loop. The for .. in loop doesn't guarantee order according to the specification.
As others have stated, you have no guarantee as to the order when you iterate over the properties of an object. If you need an ordered list of multiple fields I suggest creating an array of objects.
var myarr = [{somefield1: 'x', somefield2: 'y'},
{somefield1: 'a', somefield2: 'b'},
{somefield1: 'i', somefield2: 'j'}];
This way you can use a regular for loop and have the insert order. You could then use the Array sort method to sort this into a new array if needed.
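For instance, a short sketch of both approaches, iterating in insertion order and producing a sorted copy (field names follow the array above):
// Iterate in insertion order with a regular for loop.
for (let i = 0; i < myarr.length; i++) {
  console.log(myarr[i].somefield1); // x, a, i
}

// Or build a sorted copy without touching the original order.
const sorted = [...myarr].sort((a, b) => a.somefield1.localeCompare(b.somefield1));
console.log(sorted.map(o => o.somefield1)); // ["a", "i", "x"]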
Major difference between Object and Map, with an example:
It is the order of iteration in a loop: a Map follows the order in which entries were set at creation, whereas an Object does not.
SEE:
OBJECT
const obj = {};
obj.prop1 = "Foo";
obj.prop2 = "Bar";
obj['1'] = "day";
console.log(obj)
OUTPUT: {1: "day", prop1: "Foo", prop2: "Bar"}
MAP
const myMap = new Map()
// setting the values (insertion order: "foo", "Bar", "1")
myMap.set("foo", "value associated with 'a string'")
myMap.set("Bar", "value associated with keyObj")
myMap.set("1", "value associated with keyFunc")
console.log([...myMap])
OUTPUT (entries iterate in insertion order):
[["foo", "value associated with 'a string'"],
 ["Bar", "value associated with keyObj"],
 ["1", "value associated with keyFunc"]]
Just found this out the hard way.
Using React with Redux, the state container whose keys I want to traverse in order to generate children is refreshed every time the store is changed (as per Redux's immutability concepts).
Thus, in order to iterate Object.keys(valueFromStore) I used Object.keys(valueFromStore).sort(), so that I at least now have an alphabetical order for the keys.
For a 100% fail-safe solution you could use nested objects and do something like this:
const obj = {};
obj.prop1 = {content: "Foo", index: 0};
obj.prop2 = {content: "Bar", index: 1};
for (let i = 0; i < Object.keys(obj).length; i++) {
  for (const prop in obj) {
    if (obj[prop].index == i) {
      console.log(obj[prop].content);
      break;
    }
  }
}
From the JSON standard:
An object is an unordered collection of zero or more name/value pairs, where a name is a string and a value is a string, number, boolean, null, object, or array.
(emphasis mine).
So, no, you can't guarantee the order.
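A tiny illustration of why JSON text order carries no meaning: after parsing, JavaScript applies its own key ordering (integer-like keys first), so the order you wrote is not necessarily the order you get back:
const parsed = JSON.parse('{"b": 1, "2": 2, "a": 3}');
// Integer-like key "2" moves to the front; string keys keep insertion order.
console.log(JSON.stringify(parsed)); // {"2":2,"b":1,"a":3}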

How can I ensure CassandraOperations.selectOneById() initializes all fields in the POJO?

I'm using Spring Data Cassandra 1.3.4.RELEASE to persist instances of a class that I have. The class is written in Groovy, but I don't think that really matters. I have implemented a CrudRepository, and I'm injecting an instance of CassandraOperations into the repo implementation class. I can insert, delete, and do most of the other operations successfully. However, there's a scenario I'm running into which breaks my test case. My entity class looks something like this:
#Table("foo")
class FooData {
#PrimaryKey("id")
long id
#Column("created")
long updated
#Column("name")
String name
#Column("user_data")
String userData
#Column("numbers")
List numberList = []
}
In my test case, I happened to only set a few fields like 'id' and 'updated' before calling CassandraOperations.insert(entity), so most of them were null in the entity instance at the time of insertion. But the numberList field was not null, it was an empty List. Directly after the insert(), I'm calling CassandraOperations.selectOneById(FooData.class, id). I get a FooData instance back, and the fields that were initialized when I saved it are populated with data. However, I was checking content equality in my test, and it failed because the empty list was not returned as an empty list in the POJO coming back from CassandraOperations.selectOneById(). It's actually null. I assume this may be some sort of Cassandra optimization. It seems to happen in the CGLIB code that instantiates the POJO/entity. Is this a known "feature"? Is there some annotation I can mark the 'numberList' field with to indicate that it cannot be null? Any leads are appreciated. Thanks.
In short
Cassandra stores empty collections as null and Spring Data Cassandra overwrites initialized fields.
Explanation
Cassandra list/set typed columns represent an empty collection as null. It does not matter whether the list/set (as viewed from Java/Groovy) was empty or null; storing an empty list therefore yields null. From there one can't tell whether the state was null or empty at the time the value was saved.
Spring Data Cassandra overwrites all fields with values retrieved from the result set, and so your pre-initialized fields are set to null.
I created a ticket DATACASS-266 to track the state of this issue.
Workaround
Spring Data uses setters if possible so you have a chance to intervene. A very simple null guard could be:
public void setMyList(List<Long> myList) {
if(myList == null){
this.myList = new ArrayList<>();
return;
}
this.myList = myList;
}
As an important addition to mp911de's answer, you have to set @org.springframework.data.annotation.AccessType(AccessType.Type.PROPERTY) to make this solution work.

PubNub message format in callbacks

When I get a callback, I get an object passed in. The content of the object seems to have two levels of 'encoding'. It always seems to consist of 3 basic elements:
My data
Timestamp
Channel
in that order, so [0] = data, [1] = timestamp and [2] = channel, where timestamp and channel are PubNub-supplied strings. My data comes in as a JSON object (string, numeric, or object etc.) in the first item returned.
But nowhere in the documentation can I find that this structure (i.e. 3 incoming 'objects') is actually defined. If it is defined, then I should be able to map a type or class to it to handle it better, i.e. cast it to a 'PubNubMessage' class [object data; string timestamp; string channel;]?
Can someone please point me at a document where this message format is actually defined?

getSubmittedValue() vs. getValue()

Is this correct:
When I query a value before validation (or if validation failed) I have to use getSubmittedValue(). Once the value is validated, even if I query it in another validation later in the page/control, I have to use getValue(), since getSubmittedValue() returns null after successful validation?
This xsnippet makes it easier to handle this. It allows you to just call getComponentValue("inputText1") to get either value or submittedValue.
Here's the function for reference:
function getComponentValue(id){
  var field = getComponent(id);
  var value = field.getSubmittedValue();
  if (null == value) {
    // Nothing submitted yet, or the value has already been validated:
    // fall back to the converted/validated value.
    value = field.getValue();
  }
  return value;
}
There's a slightly easier way: if you're just expecting a simple single-value String, just call:
var compare = firstField.getValueAsString();
Otherwise, call:
var compare = com.ibm.xsp.util.FacesUtil.convertValue(facesContext, firstField);
The former calls the latter anyway, but is obviously a terser syntax. This does what you're looking for and more:
If the value hasn't yet been validated, returns the submitted value
If validation has already passed, returns the value after it's already been processed by any converters and / or content filters, so particularly in cases where you're trying to compare two field values, this should ensure that both values have been properly trimmed, etc., and is therefore less likely to return a false positive than just comparing the raw submitted values.
Found the answer here. So when you want to ensure that two text fields have the same value (use case: "please repeat your email") and the first box already has a validation that might fail, you need to use the submitted value unless it is null, in which case you use the value. The code in the validation expression for the second field looks like this:
var firstField = getComponent("inputText1");
var compare = firstField.getSubmittedValue() || firstField.getValue();
compare == value;
You have to love it.
