deep compare arrays in yui - yui

Is there a recommended way to deep compare 2 arrays in YUI 3 tests (similar to QUnit's deepEqual)? I poked around the source, and the best I could come up with was to steal this function from matrix/matrix.js (shown slightly rewritten). It could easily be modified into a recursive, arbitrary-depth comparison (a sketch of that follows the snippet), but I'm digressing now...
function compare(list1, list2)
{
    var i = 0,
        len = list1.length,
        len2 = list2.length,
        isEqual = len === len2;
    if (isEqual) {
        for (; i < len; ++i) {
            if (list1[i] != list2[i]) {
                isEqual = false;
                break;
            }
        }
    }
    return isEqual;
}
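For reference, here is a minimal sketch of the recursive, arbitrary-depth variant hinted at above. It is my own addition rather than anything taken from YUI's source, and it only handles nested arrays, falling back to a strict comparison for leaf values:

function deepCompare(list1, list2) {
    // non-array leaves: compare directly
    if (!Array.isArray(list1) || !Array.isArray(list2)) {
        return list1 === list2;
    }
    if (list1.length !== list2.length) {
        return false;
    }
    for (var i = 0; i < list1.length; ++i) {
        if (!deepCompare(list1[i], list2[i])) {
            return false;
        }
    }
    return true;
}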

Yes, YUI Test has a Test.ArrayAssert namespace with many assertions for arrays. In particular, you have itemsAreEqual, which tests with ==, and itemsAreEquivalent, which uses ===. You get this for free by including the test module.
YUI().use('test', function (Y) {
    var ArrayAssert = Y.Test.ArrayAssert;
    var testCase = new Y.Test.Case({
        name: "TestCase Name",
        // traditional test names
        testSomething: function () {
            ArrayAssert.itemsAreEqual([1, 2, 3], foo, 'all items should be 1, 2, 3');
        }
    });
});


Why is the first function call executed two times faster than all other sequential calls?

I have a custom JS iterator implementation and code for measuring the performance of that implementation:
const ITERATION_END = Symbol('ITERATION_END');

const arrayIterator = (array) => {
    let index = 0;
    return {
        hasValue: true,
        next() {
            if (index >= array.length) {
                this.hasValue = false;
                return ITERATION_END;
            }
            return array[index++];
        },
    };
};

const customIterator = (valueGetter) => {
    return {
        hasValue: true,
        next() {
            const nextValue = valueGetter();
            if (nextValue === ITERATION_END) {
                this.hasValue = false;
                return ITERATION_END;
            }
            return nextValue;
        },
    };
};

const map = (iterator, selector) => customIterator(() => {
    const value = iterator.next();
    return value === ITERATION_END ? value : selector(value);
});

const filter = (iterator, predicate) => customIterator(() => {
    if (!iterator.hasValue) {
        return ITERATION_END;
    }
    let currentValue = iterator.next();
    while (iterator.hasValue && currentValue !== ITERATION_END && !predicate(currentValue)) {
        currentValue = iterator.next();
    }
    return currentValue;
});

const toArray = (iterator) => {
    const array = [];
    while (iterator.hasValue) {
        const value = iterator.next();
        if (value !== ITERATION_END) {
            array.push(value);
        }
    }
    return array;
};

const test = (fn, iterations) => {
    const times = [];
    for (let i = 0; i < iterations; i++) {
        const start = performance.now();
        fn();
        times.push(performance.now() - start);
    }
    console.log(times);
    console.log(times.reduce((sum, x) => sum + x, 0) / times.length);
};

const createData = () => Array.from({ length: 9000000 }, (_, i) => i + 1);
const testIterator = (data) => () => toArray(map(filter(arrayIterator(data), x => x % 2 === 0), x => x * 2));

test(testIterator(createData()), 10);
The output of the test function is very weird and unexpected: the first test run is consistently executed two times faster than all the other runs. One of the results, where the array contains all execution times and the number is the mean (I ran it on Node):
[
147.9088459983468,
396.3472499996424,
374.82447600364685,
367.74555300176144,
363.6300039961934,
362.44370299577713,
363.8418449983001,
390.86111199855804,
360.23125199973583,
358.4788999930024
]
348.6312940984964
Similar results can be observed using the Deno runtime; however, I could not reproduce this behaviour on other JS engines. What could be the reason behind it in V8?
Environment:
Node v13.8.0, V8 v7.9.317.25-node.28,
Deno v1.3.3, V8 v8.6.334
(V8 developer here.) In short: it's inlining, or lack thereof, as decided by engine heuristics.
For an optimizing compiler, inlining a called function can have significant benefits (e.g.: it avoids the call overhead, sometimes makes constant folding possible, or elimination of duplicate computations, sometimes even creates new opportunities for additional inlining), but it comes at a cost: it makes the compilation itself slower, and it increases the risk of having to throw away the optimized code ("deoptimize") later due to some assumption that turns out not to hold. Inlining nothing would waste performance; inlining everything would waste performance; inlining exactly the right functions would require being able to predict the future behavior of the program, which is obviously impossible. So compilers use heuristics.
V8's optimizing compiler currently has a heuristic to inline functions only if it was always the same function that was called at a particular place. Here, that is the case for the first iteration. Subsequent iterations then create new closures as callbacks, which from V8's point of view are new functions, so they don't get inlined. (V8 actually knows some advanced tricks that allow it to de-duplicate function instances coming from the same source in some cases and inline them anyway; but in this case those are not applicable [I'm not sure why].)
So in the first iteration, everything (including x => x % 2 === 0 and x => x * 2) gets inlined into toArray. From the second iteration onwards, that's no longer the case, and instead the generated code performs actual function calls.
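As a tiny illustration of that point (my own example, not from the original post): every evaluation of an arrow function produces a brand-new function object, even when the source text is identical, so a call site that previously always saw one target suddenly sees many.

const makeCallback = () => (x) => x % 2 === 0;

const cb1 = makeCallback();
const cb2 = makeCallback();

// false: two distinct closures, even though they come from the same source.
// An inlining heuristic keyed on "always the same function at this call site"
// treats cb1 and cb2 as different targets.
console.log(cb1 === cb2);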
That's probably fine; I would guess that in most real applications, the difference is barely measurable. (Reduced test cases tend to make such differences stand out more; but changing the design of a larger app based on observations made on a small test is often not the most impactful way to spend your time, and at worst can make things worse.)
Also, hand-optimizing code for engines/compilers is a difficult balance. I would generally recommend not to do that (because engines improve over time, and it really is their job to make your code fast); on the other hand, there clearly is more efficient code and less efficient code, and for maximum overall efficiency, everyone involved needs to do their part, i.e. you might as well make the engine's job simpler when you can.
If you do want to fine-tune performance of this, you can do so by separating code and data, thereby making sure that always the same functions get called. For example like this modified version of your code:
const ITERATION_END = Symbol('ITERATION_END');

class ArrayIterator {
    constructor(array) {
        this.array = array;
        this.index = 0;
    }
    next() {
        if (this.index >= this.array.length) return ITERATION_END;
        return this.array[this.index++];
    }
}
function arrayIterator(array) {
    return new ArrayIterator(array);
}

class MapIterator {
    constructor(source, modifier) {
        this.source = source;
        this.modifier = modifier;
    }
    next() {
        const value = this.source.next();
        return value === ITERATION_END ? value : this.modifier(value);
    }
}
function map(iterator, selector) {
    return new MapIterator(iterator, selector);
}

class FilterIterator {
    constructor(source, predicate) {
        this.source = source;
        this.predicate = predicate;
    }
    next() {
        let value = this.source.next();
        while (value !== ITERATION_END && !this.predicate(value)) {
            value = this.source.next();
        }
        return value;
    }
}
function filter(iterator, predicate) {
    return new FilterIterator(iterator, predicate);
}

function toArray(iterator) {
    const array = [];
    let value;
    while ((value = iterator.next()) !== ITERATION_END) {
        array.push(value);
    }
    return array;
}

function test(fn, iterations) {
    for (let i = 0; i < iterations; i++) {
        const start = performance.now();
        fn();
        console.log(performance.now() - start);
    }
}

function createData() {
    return Array.from({ length: 9000000 }, (_, i) => i + 1);
}
function even(x) { return x % 2 === 0; }
function double(x) { return x * 2; }
function testIterator(data) {
    return function main() {
        return toArray(map(filter(arrayIterator(data), even), double));
    };
}

test(testIterator(createData()), 10);
Observe how there are no more dynamically created functions on the hot path, and the "public interface" (i.e. the way arrayIterator, map, filter, and toArray compose) is exactly the same as before; only under-the-hood details have changed. A benefit of giving all functions names is that you get more useful profiling output ;-)
Astute readers will notice that this modification only shifts the issue away: if you have several places in your code that call map and filter with different modifiers/predicates, then the inlineability issue will come up again. As I said above: microbenchmarks tend to be misleading, as real apps typically have different behavior...
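To make that caveat concrete (hypothetical call sites, not code from either post): once different selectors flow into map from different places, the this.modifier(value) call inside MapIterator.next no longer has a single known target, so the heuristic stops inlining it.

// Two separate call sites feeding different closures into the same machinery.
const doubled = toArray(map(arrayIterator(createData()), x => x * 2));
const squared = toArray(map(arrayIterator(createData()), x => x * x));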
(FWIW, this is pretty much the same effect as in Why is the execution time of this function call changing?)
Just to add to this investigation, I compared the OP's original code, with the predicate and selector functions declared as separate named functions as suggested by jmrk, to two other implementations. So, this code has three implementations:
1. The OP's code with the predicate and selector functions declared separately as named functions (not inline).
2. Using standard array.map() and .filter() (which you would think would be slower because of the extra creation of intermediate arrays).
3. Using a custom iteration that does both filtering and mapping in one iteration.
The OP's attempt at saving time and making things faster is actually the slowest (on average). The custom iteration is the fastest.
I guess the lesson here is that it's not necessarily intuitive how to make things faster with the optimizing compiler, so if you're tuning performance, you have to measure against the "typical" way of doing things (which may benefit from the most optimizations).
Also, note that in method #3, the first two iterations are the slowest and then it gets faster, the opposite effect from the original code. Go figure.
The results are here, in the same order as the three implementations above:
[
99.90320014953613,
253.79690098762512,
271.3091011047363,
247.94990015029907,
247.457200050354,
261.9487009048462,
252.95090007781982,
250.8520998954773,
270.42809987068176,
249.340900182724
]
240.59370033740998
[
222.14270091056824,
220.48679995536804,
224.24630093574524,
237.07260012626648,
218.47070002555847,
218.1493010520935,
221.50559997558594,
223.3587999343872,
231.1618001461029,
243.55419993400574
]
226.01488029956818
[
147.81360006332397,
144.57479882240295,
73.13350009918213,
79.41700005531311,
77.38950109481812,
78.40880012512207,
112.31539988517761,
80.87990117073059,
76.7899010181427,
79.79679894447327
]
95.05192012786866
The code is here:
const { performance } = require('perf_hooks');

const ITERATION_END = Symbol('ITERATION_END');

const arrayIterator = (array) => {
    let index = 0;
    return {
        hasValue: true,
        next() {
            if (index >= array.length) {
                this.hasValue = false;
                return ITERATION_END;
            }
            return array[index++];
        },
    };
};

const customIterator = (valueGetter) => {
    return {
        hasValue: true,
        next() {
            const nextValue = valueGetter();
            if (nextValue === ITERATION_END) {
                this.hasValue = false;
                return ITERATION_END;
            }
            return nextValue;
        },
    };
};

const map = (iterator, selector) => customIterator(() => {
    const value = iterator.next();
    return value === ITERATION_END ? value : selector(value);
});

const filter = (iterator, predicate) => customIterator(() => {
    if (!iterator.hasValue) {
        return ITERATION_END;
    }
    let currentValue = iterator.next();
    while (iterator.hasValue && currentValue !== ITERATION_END && !predicate(currentValue)) {
        currentValue = iterator.next();
    }
    return currentValue;
});

const toArray = (iterator) => {
    const array = [];
    while (iterator.hasValue) {
        const value = iterator.next();
        if (value !== ITERATION_END) {
            array.push(value);
        }
    }
    return array;
};

const test = (fn, iterations) => {
    const times = [];
    let result;
    for (let i = 0; i < iterations; i++) {
        const start = performance.now();
        result = fn();
        times.push(performance.now() - start);
    }
    console.log(times);
    console.log(times.reduce((sum, x) => sum + x, 0) / times.length);
    return result;
};

const createData = () => Array.from({ length: 9000000 }, (_, i) => i + 1);
const cache = createData();

const comp1 = x => x % 2 === 0;
const comp2 = x => x * 2;

const testIterator = (data) => () => toArray(map(filter(arrayIterator(data), comp1), comp2));

// regular array filter and map
const testIterator2 = (data) => () => data.filter(comp1).map(comp2);

// combine filter and map in the same operation
const testIterator3 = (data) => () => {
    let result = [];
    for (let value of data) {
        if (comp1(value)) {
            result.push(comp2(value));
        }
    }
    return result;
};

const a = test(testIterator(cache), 10);
const b = test(testIterator2(cache), 10);
const c = test(testIterator3(cache), 10);

function compareArrays(a1, a2) {
    if (a1.length !== a2.length) return false;
    for (let [i, val] of a1.entries()) {
        if (a2[i] !== val) return false;
    }
    return true;
}

console.log(a.length);
console.log(compareArrays(a, b));
console.log(compareArrays(a, c));

How do I create test cases for given sample data in node.js?

I was doing a HackerRank test. My code produces the required output for the given input, but the test says it is a wrong answer. The link for the test is https://www.hackerrank.com/contests/fullstack/challenges/testrun
Input Format
1 2 3
Output Format
2 3 7
Sample Input
1 9 9
Sample Output
? ? ?
Explanation
function processData(input) {
    // Enter your code here
    var number;
    var main = "";
    const aray = input.split(' ').map(Number);
    for (var i = 0; i < aray.length; i++) {
        if (i === aray.length - 1 && aray.length > 1) {
            if (aray[i] * 2 + 1 >= 9) {
                main += '?';
            } else {
                main += aray[i] * 2 + 1;
            }
        } else {
            if (aray[i + 1] >= 9) {
                main += '?';
                main += ' ';
            } else {
                main += aray[i] + 1;
                main += ' ';
            }
        }
    }
    console.log(main);
}

process.stdin.resume();
process.stdin.setEncoding("ascii");
_input = '';
process.stdin.on("data", function (input) {
    _input += input;
});
process.stdin.on("end", function () {
    processData(_input);
});
How do I create test cases? If you know, please point out the mistake. Thanks.
If you just want to test a bunch of inputs, then write a bunch of function calls with different types of input.
Example:
// processData expects a single space-separated string (it calls input.split(' '))
processData("1 2 3");
processData("4 5 6");
processData("2 2 2");
processData("1 9 9");
// More tests here
If you want to automate providing input into your application, then something like robotjs might be useful.
If you're looking for a full-on testing framework, then you can use mocha and chai. Here's an article on how to use them with node.
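If you go that route, here is a minimal sketch of what such a test file might look like (my own assumption, not from the HackerRank harness). It presumes processData has been refactored to return its result string instead of printing it, so the output can be asserted directly; the expected values are the sample pairs given in the question.

const { expect } = require('chai');

describe('processData', function () {
    it('returns the documented output for the format example', function () {
        expect(processData('1 2 3')).to.equal('2 3 7');
    });

    it('returns the sample output for the sample input', function () {
        expect(processData('1 9 9')).to.equal('? ? ?');
    });
});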

Why is the following a memory leak? [duplicate]

I've got code that looks like this:
for (std::list<item*>::iterator i = items.begin(); i != items.end(); i++)
{
    bool isActive = (*i)->update();
    //if (!isActive)
    //    items.remove(*i);
    //else
    other_code_involving(*i);
}
items.remove_if(CheckItemNotActive);
I'd like to remove inactive items immediately after updating them, in order to avoid walking the list again. But if I add the commented-out lines, I get an error when I get to i++: "List iterator not incrementable". I tried some alternatives which didn't increment in the for statement, but I couldn't get anything to work.
What's the best way to remove items as you are walking a std::list?
You have to increment the iterator first (with i++) and then remove the previous element (e.g., by using the returned value from i++). You can change the code to a while loop like so:
std::list<item*>::iterator i = items.begin();
while (i != items.end())
{
    bool isActive = (*i)->update();
    if (!isActive)
    {
        items.erase(i++); // alternatively, i = items.erase(i);
    }
    else
    {
        other_code_involving(*i);
        ++i;
    }
}
You want to do:
i = items.erase(i);
That will correctly update the iterator to point to the location after the iterator you removed.
You need to do the combination of Kristo's answer and MSN's:
// Note: Using the pre-increment operator is preferred for iterators because
// there can be a performance gain.
//
// Note: As long as you are iterating from beginning to end, without inserting
// along the way, you can safely save end once; otherwise get it at the
// top of each loop.
std::list<item*>::iterator iter = items.begin();
std::list<item*>::iterator end = items.end();

while (iter != end)
{
    item* pItem = *iter;

    if (pItem->update() == true)
    {
        other_code_involving(pItem);
        ++iter;
    }
    else
    {
        // BTW, who is deleting pItem, a.k.a. (*iter)?
        iter = items.erase(iter);
    }
}
Of course, the most efficient and SuperCool® STL-savvy thing would be something like this:
// This implementation of update executes other_code_involving(Item *) if
// this instance needs updating.
//
// This method returns true if this still needs future updates.
//
bool Item::update(void)
{
    if (m_needsUpdates == true)
    {
        m_needsUpdates = other_code_involving(this);
    }
    return (m_needsUpdates);
}

// This call does everything the previous loop did!!! (Including the fact
// that it isn't deleting the items that are erased!)
items.remove_if(std::not1(std::mem_fun(&Item::update)));
To sum it up, here are the methods with examples:
1. Using a while loop:
list<int> lst{4, 1, 2, 3, 5};
auto it = lst.begin();
while (it != lst.end()) {
    if ((*it % 2) == 1) {
        it = lst.erase(it); // erase and go to next
    } else {
        ++it; // go to next
    }
}
for (auto it : lst) cout << it << " ";
cout << endl; // 4 2
2. Using the remove_if member function of list:
list<int> lst{4, 1, 2, 3, 5};
lst.remove_if([](int a) { return a % 2 == 1; });
for (auto it : lst) cout << it << " ";
cout << endl; // 4 2
3. Using the std::remove_if function combined with the erase member function:
list<int> lst{4, 1, 2, 3, 5};
lst.erase(std::remove_if(lst.begin(), lst.end(), [](int a) {
    return a % 2 == 1;
}), lst.end());
for (auto it : lst) cout << it << " ";
cout << endl; // 4 2
4. Using a for loop; note that you must update the iterator:
list<int> lst{4, 1, 2, 3, 5};
for (auto it = lst.begin(); it != lst.end(); ++it) {
    if ((*it % 2) == 1) {
        it = lst.erase(it); // erase and go to next (erase returns the next iterator)
        --it;               // step back one, since the for loop will ++it again
                            // (caveat: this is invalid if the erased element was the first one)
    }
}
for (auto it : lst) cout << it << " ";
cout << endl; // 4 2
Use the std::remove_if algorithm.
Edit:
Working with collections should go like this:
1. Prepare the collection.
2. Process the collection.
Life will be easier if you don't mix these steps.
std::remove_if, or list::remove_if (if you know that you are working with a list and not with a generic TCollection)
std::for_each
An alternative for-loop version of Kristo's answer.
You lose some efficiency (you go backwards and then forwards again when deleting), but in exchange for the extra iterator increment you can have the iterator declared in the loop scope, and the code looks a bit cleaner. What to choose depends on priorities of the moment.
The answer was totally out of time, I know...
typedef std::list<item*>::iterator item_iterator;

for (item_iterator i = items.begin(); i != items.end(); ++i)
{
    bool isActive = (*i)->update();
    if (!isActive)
    {
        // erase the current element; i-- keeps the loop's ++i valid
        // (note: this is undefined if the erased element is the first one,
        // since the iterator cannot be decremented past begin())
        items.erase(i--);
    }
    else
    {
        other_code_involving(*i);
    }
}
Here's an example using a for loop that iterates the list and increments or revalidates the iterator in the event of an item being removed during traversal of the list.
for (auto i = items.begin(); i != items.end();)
{
    if (bool isActive = (*i)->update())
    {
        other_code_involving(*i);
        ++i;
    }
    else
    {
        i = items.erase(i);
    }
}
Removal invalidates only the iterators that point to the elements that are removed.
So in this case, after removing *i, i is invalidated and you cannot increment it.
What you can do is first save the iterator of the element that is to be removed, then increment the iterator, and then erase the saved one.
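A minimal sketch of that save-then-increment pattern, reusing the item/update/other_code_involving names from the question (illustrative rather than drop-in code):

for (std::list<item*>::iterator i = items.begin(); i != items.end(); )
{
    std::list<item*>::iterator current = i; // save the candidate
    ++i;                                    // advance before any erase happens
    if ((*current)->update())
    {
        other_code_involving(*current);
    }
    else
    {
        items.erase(current); // only the saved iterator is invalidated
    }
}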
If you think of the std::list as a queue, then you can dequeue and enqueue all the items that you want to keep, but only dequeue (and not enqueue) the item you want to remove. Here's an example where I want to remove 5 from a list containing the numbers 1-10...
std::list<int> myList;
for (int i = 1; i <= 10; ++i)
{
    myList.push_back(i); // fill the list with 1-10 so the example is complete
}

int size = myList.size(); // the size needs to be saved to iterate through the whole thing
for (int i = 0; i < size; ++i)
{
    int val = myList.back();
    myList.pop_back(); // dequeue
    if (val != 5)
    {
        myList.push_front(val); // enqueue if not 5
    }
}
myList will now only have numbers 1-4 and 6-10.
Iterating backwards avoids the effect of erasing an element on the remaining elements to be traversed:
typedef list<item*> list_t;
for (list_t::iterator it = items.end(); it != items.begin(); ) {
    --it;
    bool remove = <determine whether to remove>
    if (remove) {
        it = items.erase(it); // keep a valid iterator for the next --it
    }
}
PS: see this, e.g., regarding backward iteration.
PS2: I have not thoroughly tested whether it handles erasing elements at the ends well.
You can write
std::list<item*>::iterator i = items.begin();
while (i != items.end())
{
    bool isActive = (*i)->update();
    if (!isActive) {
        i = items.erase(i);
    } else {
        other_code_involving(*i);
        i++;
    }
}
You can write equivalent code with std::list::remove_if, which is less verbose and more explicit
items.remove_if([](item* i) {
    bool isActive = i->update();
    if (!isActive)
        return true;
    other_code_involving(i);
    return false;
});
The std::vector::erase plus std::remove_if idiom should be used when items is a vector instead of a list to keep complexity at O(n), or in case you write generic code and items might be a container with no efficient way to erase single items (like a vector):
items.erase(std::remove_if(begin(items), end(items), [](item* i) {
    bool isActive = i->update();
    if (!isActive)
        return true;
    other_code_involving(i);
    return false;
}), end(items));
A while loop: it's flexible, fast, and easy to read and write.
auto textRegion = m_pdfTextRegions.begin();
while (textRegion != m_pdfTextRegions.end())
{
    if ((*textRegion)->glyphs.empty())
    {
        m_pdfTextRegions.erase(textRegion);
        textRegion = m_pdfTextRegions.begin();
    }
    else
        textRegion++;
}
I'd like to share my method. This method also allows inserting elements at the back of the list during iteration:
#include <iostream>
#include <list>

int main(int argc, char **argv) {
    std::list<int> d;
    for (int i = 0; i < 12; ++i) {
        d.push_back(i);
    }
    auto it = d.begin();
    int nelem = d.size(); // number of current elements
    for (int ielem = 0; ielem < nelem; ++ielem) {
        auto &i = *it;
        if (i % 2 == 0) {
            it = d.erase(it);
        } else {
            if (i % 3 == 0) {
                d.push_back(3 * i);
            }
            ++it;
        }
    }
    for (auto i : d) {
        std::cout << i << ", ";
    }
    std::cout << std::endl;
    // result should be: 1, 3, 5, 7, 9, 11, 9, 27,
    return 0;
}
I think you have a bug there. I code it this way:
for (std::list<CAudioChannel*>::iterator itAudioChannel = audioChannels.begin();
     itAudioChannel != audioChannels.end(); )
{
    CAudioChannel *audioChannel = *itAudioChannel;
    std::list<CAudioChannel*>::iterator itCurrentAudioChannel = itAudioChannel;
    itAudioChannel++;

    if (audioChannel->destroyMe)
    {
        audioChannels.erase(itCurrentAudioChannel);
        delete audioChannel;
        continue;
    }
    audioChannel->Mix(outBuffer, numSamples);
}

node.js - why can this local function modify global variables?

Here is my code:
var handleCondition = function (condition, params) {
    var dup_condition;
    dup_condition = condition;
    var isArray = function (obj) {
        return Object.prototype.toString.call(obj) === '[object Array]';
    };
    var __replace = function (str) {
        var reg_slot = /^#(.+)/;
        if (reg_slot.test(str) == true) {
            var ss = reg_slot.exec(str)[1];
            return params[ss];
        } else {
            return str;
        }
    };
    var compare = function (a) {
        var arr = a;
        if (params != undefined) {
            for (var j = 1; j < arr.length; j++) {
                arr[j] = __replace(arr[j]);
            }
        }
        switch (arr[0]) {
            case "$eq":
            case "==":
                return (arr[1] == arr[2]);
            default:
                return (arr[1] == arr[2]);
        }
    };
    if (isArray(dup_condition)) {
        var im = function (arr) {
            for (var i = 0; i < 3; i++) {
                if (isArray(arr[i])) {
                    arr[i] = im(arr[i]);
                }
            }
            return compare(arr);
        };
        var res = im(dup_condition);
        return res;
    }
};
/*Here are test data*/
var c = {
    "beforeDNS": ["$eq", "#host", ["$eq", 10, 10]],
    "afterDNS": ["$match", /^10\.+/, "#ip"]
};
var params = {
    host: "dd"
};
console.log(c["beforeDNS"]); // ==> ["$eq","#host",["$eq",10,10]]
handleCondition(c["beforeDNS"], params);
console.log(c["beforeDNS"]); // ==> ["$eq","dd",true]
handleCondition(c["beforeDNS"], params);
The first time I ran the code, I got the expected result.
However, when I tried to run the function a second time, to my surprise, the value of c["beforeDNS"] had changed unexpectedly!
In fact, I haven't written any code in my function to modify the value of this global variable, but it just changed.
So please help me find the reason for this mysterious result, or just fix it. Thanks!
Your dup_condition variable isn't duping anything. It's just a reference to the argument you pass in.
Thus when you pass it to the im function, which modifies its argument in place, it is just referencing and modifying condition (which is itself a reference to the c["beforeDNS"] defined outside the function).
To fix this you might use slice or some more sophisticated method to actually dupe the arguments. slice, for example, would return a new array. Note though that this is only a shallow copy. References within that array would still refer to the same objects.
For example:
if (isArray(condition)) {
    var dup_condition = condition.slice();
    // ...
}
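Note that slice alone would not be enough for this particular condition, because it contains a nested array (["$eq", 10, 10]) that the shallow copy would still share with the original. Here is a minimal recursive sketch (my own addition, not part of the answer above) that copies nested arrays while leaving other values, such as the regex, untouched:

function deepCopyArray(value) {
    if (!Array.isArray(value)) {
        return value; // leave non-array leaves (strings, numbers, regexes) as-is
    }
    return value.map(deepCopyArray); // copy each element recursively
}

var dup_condition = deepCopyArray(condition);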
In JavaScript, objects are passed by reference (more precisely, the reference itself is copied, but it still points to the same object). In other words, inside handleCondition, dup_condition still points to the same array. So, if you change it there, you are actually changing the passed object. Here is a short example which illustrates the same thing:
var globalData = {
    arr: [10, 20]
};
var handleData = function (data) {
    var privateData = data;
    privateData.arr.shift();
    privateData.arr.push(30);
};
console.log(globalData.arr);
handleData(globalData);
console.log(globalData.arr);
The result of the script is:
[10, 20]
[20, 30]
http://jsfiddle.net/3BK4b/

Generators in Node.js - Fiber or pure JavaScript?

I am trying to implement generators in Node.js. I came across node-fiber and node-lazy. Node-lazy deals with arrays and streams, but does not generate lazy things inherently (except numbers).
While using fiber looks cleaner, it has its cons, and as such, I prefer pure JavaScript with closures as it's more explicit. My question is: are there memory or performance problems with using closures to generate an iterator?
As an example, I'm trying to iterate through a tree depth-first, for as long as the caller asks for it. I want to find 'waldo' and stop at the first instance.
Fiber:
var depthFirst = Fiber(function iterate(tree) {
    tree.children.forEach(iterate);
    Fiber.yield(tree.value);
});

var tree = ...;
depthFirst.run(tree);
while (true) {
    if (depthFirst.run() === 'waldo') {
        console.log('Found waldo');
        break; // stop at the first instance
    }
}
Pure JavaScript with closures:
function iterate(tree) {
    var childIndex = 0;
    var childIter = null;
    var returned = false;
    return function () {
        var result;
        // exhaust each child iterator in turn before yielding this node's value
        while (childIter || childIndex < tree.children.length) {
            if (!childIter)
                childIter = iterate(tree.children[childIndex++]);
            if ((result = childIter()) !== undefined)
                return result;
            childIter = null; // current child exhausted, move to the next one
        }
        if (!returned) {
            returned = true;
            return tree.value;
        }
    };
}
var tree = ...;
var iter = iterate(tree);
while (true) {
    if (iter() === 'waldo') {
        console.log('found waldo');
        break; // stop at the first instance
    }
}
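For comparison, on Node versions with native ES6 generator support, the same lazy depth-first traversal can be written with function* and consumed with for...of. This is only a sketch under the same assumptions as above (each node has value and children, and tree is defined elsewhere):

function* iterate(tree) {
    for (const child of tree.children) {
        yield* iterate(child); // lazily walk each subtree first
    }
    yield tree.value;          // then yield this node's own value
}

// Stops at the first 'waldo' without visiting the rest of the tree.
for (const value of iterate(tree)) {
    if (value === 'waldo') {
        console.log('found waldo');
        break;
    }
}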
