I need to understand one behavior of AMD in Dojo.
In the example below, will statement 1 always be executed before statement 2 if ready or domReady! is not used?
function test() {
    var abc;
    require(["dijit/registry"], function(registry){
        // some modification of the abc variable.
        console.log("statement 1"); // <-- statement 1
    });
    return abc; // <-- statement 2
}
Thanks in advance.
Nope... statement 1 will be fired once dijit/registry has been loaded. There is no guarantee that this will be the case when you reach statement 2.
Only the statements inside your require callback are ensured to fire in order.
The above is valid even if you use ready or domReady!
You could try the following to expose your function globally:
require(["dojo/_base/kernel", "dijit/registry"], function(kernel, registry){
kernel.global.test = function(){
var abc;
//some modification of abc variable.
console.log("statement 1");----> statement 1
return abc;----> statement 2
}
});
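Alternatively, if the calling code actually needs the value of abc, you can make test itself asynchronous and deliver the result through a callback. A minimal sketch (the done parameter is a name introduced here for illustration, not part of any Dojo API):

function test(done) {
    require(["dijit/registry"], function (registry) {
        var abc;
        // ... some modification of abc using registry ...
        console.log("statement 1"); // statement 1
        done(abc); // statement 2 now runs after statement 1, by construction
    });
}

// Usage:
test(function (abc) {
    console.log("got abc:", abc);
});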
Can anyone make sense of the difference in behaviour between the two snippets described below?
Snippet #1
{
function f() {return 1}
f = function() {return 2}
function f() {return 3}
}
console.log(f()); // yields 2
Snippet #2
{
function f() {return 1}
f = function() {return 2}
}
console.log(f()); // yields 1
The first snippet yields the result I would expect, based on my understanding of what the interpreter does:
1. During the declaration phase, the interpreter enters the block.
2. It identifies the first function declaration of f, hoists the declaration to the module scope, and hoists the initialization to the top of the block scope.
3. It then sees an assignment f = function() {return 2}, which it ignores, since it is still in the declaration phase.
4. It moves on to the second declaration, function f() {return 3}, repeating the process in step 2, which effectively replaces the initialization of f.
5. During the evaluation phase, declarations are ignored. The interpreter enters the block and sees the assignment f = function() {return 2}, which sets the value of f across the whole module scope.
6. It exits the block and prints f(), which correctly yields 2.
The second snippet yields a bizarre result. I expected to still get 2, yet I get 1. The interpreter should do the same as before: during evaluation, the last value assigned to f should be function() {return 2}. Any ideas?
Thanks in advance for any insights into this.
According to https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/function, the behaviour of function declarations in blocks is unspecified, or at least inconsistently implemented, across browsers.
The block seems to affect where the function definition gets hoisted to. For example:
f = function() {return 2}
{
function f() {return 1}
}
console.log(f());
Outputs 1 because the later definition only seems to get hoisted to the top of its block. Whereas:
f = function() {return 2}
function f() {return 1}
console.log(f());
Outputs 2 because the second definition gets hoisted to the top, and therefore is the one that gets overwritten.
However, in strict mode the functions created by the statements are locally scoped, like you'd probably expect. So:
"use strict";
{
function f() {return 1}
}
console.log(f());
Throws an "f is not defined" error. While:
"use strict"
let f = function() {return 2} // The let is needed due to the strict mode
{
function f() {return 1}
}
console.log(f());
Now outputs 2 instead of 1, as the second function is a separate variable which now shadows the first function during the block, as opposed to changing it.
And lastly, I'm still not entirely sure what's happening with that last one. While:
{
function f() {return 1}
f = function() {return 2}
}
console.log(f());
Outputs 1, like you said; removing the block causes it to output 2, like you'd normally expect. And that's also the case if the block is only around the second definition.
However, I did find that using a function declaration inside a block seems to create two variables with the same value: one globally scoped and one block scoped. This is completely different from how variable assignments normally work, even with the older var keyword. Each of the following creates either a single local variable or a single global one, but never both on its own:
{
// You'd only run one of these at a time by commenting all but 1 out
let a = 1; // Block
var a = 1; // Global
a = 1; // Global
this.a = 1; // Global
debugger;
}
Because there's a block-scoped variable with the same name, it shadows the global inside the block, so the second definition sets the block-scoped variable instead of the global one. That is why your second snippet doesn't work as expected.
The first snippet only seems to work because the third function declaration sets this.f to match the block-scoped f:
{
function f() {return 1}
f = function() {return 2}
debugger;
// ^ this.f is the last function, while the block scoped f is the second function
function f() {return 3}
// ^ This seems to make this.f match the block scoped f.
// Even though it shouldn't do anything here because it was hoisted up
debugger; // The values stay the same
}
debugger; // The values stay the same, but block scope is deleted
console.log(f()); // 2
So the somewhat satisfactory answer is that the second definitions in your snippets are block scoped, not globally scoped. The first definitions, on the other hand, are simultaneously globally and block scoped (they create two variables with the same value). The block-scoped value is the only one that gets directly overwritten by the second definition. The first snippet works because, apparently, not all of the third declaration gets hoisted: after the second definition runs, the third declaration copies the value of the block-scoped f to the globally scoped f, for some reason. That is why the second function ends up being called: it is the one left in the global scope.
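To see the two bindings without the debugger, here is a sketch of the same behaviour in sloppy mode. The readOuter helper is introduced here purely for illustration: it closes over the outer, var-scoped f, while the name f inside the block resolves to the block-scoped binding.

var readOuter = function () { return f; }; // sees the outer, var-scoped f

{
    function f() { return 1 }      // hoisted to the top of the block on entry;
                                   // reaching this statement also copies the
                                   // block-scoped f into the outer binding
    console.log(f());              // 1 (block-scoped f)
    console.log(readOuter()());    // 1 (the outer f was just synchronized)

    f = function () { return 2 };  // reassigns only the block-scoped f
    console.log(f());              // 2
    console.log(readOuter()());    // still 1 - the outer f is untouched
}

console.log(f()); // 1, the same result as snippet #2 in the question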
Well, that was interesting to research. Hope that helps. I'm assuming you want to know out of curiosity, right? Because if you're relying on this behaviour, there's almost certainly a better way.
When I run this code I get "push is not a function". I have gone over the code so many times and can't figure out where I went wrong. I have also read many posts and I still can't figure it out. I am new to programming and could use the help.
const fs = require('fs')

const getNotes = function () {
    return 'This just returns get notes'
};

const addNote = function (title, body) {
    const notes = loadNotes()
    notes.push({
        title: title,
        body: body
    })
    saveNotes(notes)
};

const saveNotes = function (notes) {
    const dataJSON = JSON.stringify(notes)
    fs.writeFileSync('notes.json', dataJSON)
}
// Code below loads the notes. Above, addNote adds the note.
const loadNotes = function () {
    try {
        const dataBuffer = fs.readFileSync('notes.json')
        const dataJSON = dataBuffer.toString()
        return JSON.parse(dataJSON)
    } catch (error) {
        return 'No such file'
    }
}
module.exports = {
    getNotes: getNotes,
    addNote: addNote
}
So, you have this:
const notes = loadNotes()
notes.push({
    title: title,
    body: body
});
If you're getting an error that notes.push is not a function, then that is because loadNotes() is not returning an array. That could be for a couple of reasons:
JSON.parse(dataJSON) successfully parses your JSON, but its top-level value is not an array.
JSON.parse(dataJSON) throws, and you end up returning a string instead of an array.
You can fairly easily diagnose this by adding a console.log() statement like this:
const notes = loadNotes();
console.log(notes); // see what this shows
notes.push({
    title: title,
    body: body
});
FYI, returning a string from loadNotes() as an error really doesn't make much sense unless you're going to check for a string after calling that function. IMO, it would make more sense to either return null on error or just let it throw. Both would be simpler and easier to check after calling loadNotes().
And in either case, you must check for an error return value after calling loadNotes(), unless you go with the option of letting loadNotes() throw on error.
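As a sketch of the null-return option (reusing the names from the question's code; the error-handling policy shown here is one possibility, not the only one):

const fs = require('fs');

// Return null when loading fails, and check for it at the call site
// before using array methods like push().
const loadNotes = function () {
    try {
        return JSON.parse(fs.readFileSync('notes.json').toString());
    } catch (error) {
        return null; // signal failure instead of returning a string
    }
};

const addNote = function (title, body) {
    const notes = loadNotes();
    if (notes === null) {
        console.log('Could not load notes.json');
        return;
    }
    notes.push({ title: title, body: body });
    fs.writeFileSync('notes.json', JSON.stringify(notes));
};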
I'm using Jest and Enzyme to test a React component. Sometimes I find myself typing the same thing over and over using find(), so I attempted to DRY up my code as follows:
Repetitive Code
component.setProps({ something: 'hello' });
expect(component.find('SomeField').prop('disabled')).toBe(false);
component.setProps({ something: 'good' });
expect(component.find('SomeField').prop('disabled')).toBe(true);
component.setProps({ something: 'abc' });
expect(component.find('SomeField').prop('disabled')).toBe(false);
Attempt at DRY
const someField = component.find('SomeField');
component.setProps({ something: 'hello' });
expect(someField.prop('disabled')).toBe(false);
component.setProps({ something: 'good' });
expect(someField.prop('disabled')).toBe(true);
component.setProps({ something: 'abc' });
expect(someField.prop('disabled')).toBe(false);
Given that:
component = shallow(<MyComponent />);
However, that approach doesn't work. Can someone explain why? Furthermore, it would be nice if anyone could suggest a way of DRYing that up, if that is possible.
The question is: what happens on setProps? The same thing that happens to your React component in the browser: every time the props change, the render function is called. With const someField = component.find('SomeField'); you store one of the rendered child components, but calling setProps after this forces the component to render new child components. So the stored one is no longer the one your component changes anything on.
I would suggest just leaving the non-DRY version as it is, to keep the test as simple as possible.
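That said, if you still want to cut the repetition, a small helper that re-runs find() after every setProps call avoids the stale-wrapper problem. A sketch (expectDisabled is a name introduced here for illustration):

// Re-query the rendered tree on every call, so the assertion never runs
// against a stale wrapper from a previous render.
const expectDisabled = (component, value) => {
    expect(component.find('SomeField').prop('disabled')).toBe(value);
};

component.setProps({ something: 'hello' });
expectDisabled(component, false);
component.setProps({ something: 'good' });
expectDisabled(component, true);
component.setProps({ something: 'abc' });
expectDisabled(component, false);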
I am writing a multi-threaded server that handles async reads from many TCP sockets. Here is the section of code that bothers me.
void data_recv (void) {
    socket.async_read_some (
        boost::asio::buffer(rawDataW, size_t(648*2)),
        boost::bind ( &RPC::on_data_recv, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
} // RPC::data_recvW

void on_data_recv (boost::system::error_code ec, std::size_t bytesRx) {
    if (rawDataW[bytesRx-1] == ENDMARKER) { // <-- this code is fine
        process_and_write_rawdata_to_file
    }
    else {
        read_socket_until_endmarker // <-- HELP REQUIRED!!
        process_and_write_rawdata_to_file
    }
}
Nearly always, async_read_some reads in data including the endmarker, so it works fine. Rarely, the endmarker's arrival is delayed in the stream, and that's when my program fails. I think it fails because I have not understood how boost::bind works.
My first question:
I am confused by this Boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
My second question:
In the on_data_recv() method, how do I read data from the same socket that was read in data_recv()? In other words, how do I pass the socket as an argument from the calling method to the handler, when the handler is executed in another thread? Any help in the form of a few lines of code that can fit into my "read_socket_until_endmarker" will be appreciated.
My first question: I am confused by this Boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
In the example (and I'm assuming this holds for your functions as well), start_accept() is a member function. The bind function is conveniently designed so that when its first argument is a pointer to a member function (written with & in front of the qualified name), it treats its second argument as the object on which to invoke that member.
So while code like this:
void foo(int x) { ... }
bind(foo, 3)();
Is equivalent to just calling foo(3)
Code like this:
struct Bar { void foo(int x); };
Bar bar;
bind(&Bar::foo, &bar, 3)(); // <--- notice the &Bar:: before foo
Would be equivalent to calling bar.foo(3).
And thus as per your example
boost::bind ( &RPC::on_data_recv, this, // <--- notice the &RPC:: again
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred)
When this object is invoked inside Asio, it will be equivalent to calling this->on_data_recv(error, size). Check out this link for more info.
For the second part, it is not clear to me how you're working with multiple threads. Do you run io_service.run() from more than one thread (possible, but I think beyond your experience level)? It might be the case that you're confusing async I/O with multithreading. I'm going to assume that is the case, and if you correct me I'll change my answer.
The usual and preferred starting point is to have just one thread running the io_service.run() function. Don't worry, this will allow you to handle many sockets asynchronously.
If that is the case, your two functions could easily be modified as such:
void data_recv (size_t startPos = 0) {
socket.async_read_some (
boost::asio::buffer(rawDataW, size_t(648*2)) + startPos,
boost::bind ( &RPC::on_data_recv, this,
startPos,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
} // RPC::data_recvW
void on_data_recv (size_t startPos,
boost::system::error_code ec,
std::size_t bytesRx) {
// TODO: Check ec
if (rawDataW[startPos + bytesRx-1] == ENDMARKER) {
process_and_write_rawdata_to_file
}
else {
// TODO: Error if startPos + bytesRx == 648*2
data_recv(startPos + bytesRx);
}
}
Notice, though, that the above code still has problems. The main one is that if the other side sent two messages in quick succession, we could receive (in one async_read_some call) the full first message plus part of the second message, and thus miss the ENDMARKER of the first one. It is therefore not enough to test only whether the last received byte is equal to the ENDMARKER.
I could go on and modify this function further (I think you get the idea of how), but you'd be better off using async_read_until, which is meant exactly for this purpose.
Is there a conventional way to attempt a group of asserts to always be evaluated before failing the test?
Let's say my test assesses the presence of some names on a page:
var pageContent = 'dummy page content';
//.include(haystack, needle, [message])
//Asserts that haystack includes needle.
assert.include(pageContent, 'Alice');
assert.include(pageContent, 'Bob');
assert.include(pageContent, 'John');
Now, if Alice is missing, the test would fail with a single error:
>AssertionError: expected 'dummy page content' to contain 'Alice'
However, I want to be notified that all three names are missing, since in this case failing one condition does not prevent evaluating the others.
Rather than writing a wrapper method that aggregates the would-be output of these checks and throws a single error, I was hoping there would be a third-party library that "specializes" in this sort of thing, or perhaps built-in functionality I'm overlooking.
I can offer two approaches.
The first one, mentioned by @Peter Lyons, relies on converting multiple assertions into a single assertion on multiple values. To keep the assertion error message useful, it's best to assert on the list of names:
var expected = ['Alice', 'Bob', 'John'];
var found = expected.filter(function(name) {
    return pageContent.indexOf(name) >= 0;
});
// assuming Node's require('assert')
assert.deepEqual(found, expected);
// Example error message:
// AssertionError: ["Alice"] deepEqual ["Alice","Bob","John"]
The second approach uses a "parameterised test". In my code I'll assume you are using the BDD style for specifying test cases.
describe('some page', function() {
    ['Alice', 'Bob', 'John'].forEach(function(name) {
        itContainsString(name);
    });

    function itContainsString(name) {
        it('contains "' + name + '"', function() {
            var pageContent = 'dummy page content';
            assert.include(pageContent, name);
        });
    }
});
var found = ['Alice', 'Bob', 'John'].map(function (name) {
    return pageContent.indexOf(name) >= 0;
});
assert.include(found, true);
If I may opine: your desire for a wrapper library for fuzzy asserting sounds misguided. Your fuzzy rules and heuristics about what counts as a "soft" vs. "hard" assertion failure seem like a much less sensible alternative than good old programming using the existing assertion paradigm. It's testing. It's supposed to be straightforward and easy to reason about.
Keep in mind you can always take logic such as the above and wrap it in a function called includesOne(pageContent, needles) so it is conveniently reusable across tests.
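A sketch of that helper, wrapping the logic from the snippet above (only the function wrapper is new; it assumes Chai's assert is in scope, as in the question):

// Passes as long as at least one of the needles appears in pageContent.
function includesOne(pageContent, needles) {
    var found = needles.map(function (name) {
        return pageContent.indexOf(name) >= 0;
    });
    assert.include(found, true);
}

// Usage:
includesOne('dummy page content', ['Alice', 'Bob', 'John']);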
Another approach to validating multiple assertions, and getting feedback from all of them regardless of which one fails first, is to use a Node.js module I wrote and published called multi-assert. Before we continue, I want to point out that in your specific use case, if you're really only asserting three names, Miroslav's answer is quite sufficient. As long as it serves your needs for quickly determining what broke and how to fix it, you should continue with that solution; there's no need for additional dependencies in your project.
But where I've run into major difficulties, or needed to spend more time debugging test failures (or helping others do the same), is when we've asserted properties of a really large object or array. For instance, when using deepEqual, I've personally run into cases where the error message can be quite complex, confusing, and unreadable, in both HTML reports and the logs.
With the multi-assert module, the error messages are transparently displayed and specific to what we want to measure. For example, we can do something like this:
// import { multiAssert } from 'multi-assert';
const { assert } = require('chai');
const { multiAssert } = require('multi-assert');

describe('Page Content Tests', () => {
    it('should contain Alice, Bob, and John somewhere in the content', () => {
        var pageContent = 'dummy page content';
        //.include(haystack, needle, [message])
        //Asserts that haystack includes needle.
        multiAssert([
            () => assert.include(pageContent, 'Alice'),
            () => assert.include(pageContent, 'Bob'),
            () => assert.include(pageContent, 'John')
        ]);
    });
});
And by running these tests, with "dummy page content", we're going to see the following error messages reported back to us with full transparency:
Page Content Tests
1) should contain Alice, Bob, and John somewhere in the content
0 passing (7ms)
1 failing
1) Page Content Tests
should contain Alice, Bob, and John somewhere in the content:
AssertionError:
MultipleAssertionError: expected 'dummy page content' to include 'Alice'
at /Users/user123/Dev/page-content-example/test/page-content.spec.js:13:20
at /Users/user123/Dev/page-content-example/node_modules/multi-assert/src/multi-assert.js:10:13
at Array.forEach (<anonymous>)
MultipleAssertionError: expected 'dummy page content' to include 'Bob'
at /Users/user123/Dev/page-content-example/test/page-content.spec.js:14:20
at /Users/user123/Dev/page-content-example/node_modules/multi-assert/src/multi-assert.js:10:13
at Array.forEach (<anonymous>)
MultipleAssertionError: expected 'dummy page content' to include 'John'
at /Users/user123/Dev/page-content-example/test/page-content.spec.js:15:20
at /Users/user123/Dev/page-content-example/node_modules/multi-assert/src/multi-assert.js:10:13
at Array.forEach (<anonymous>)
at multiAssert (node_modules/multi-assert/src/multi-assert.js:19:15)
at Context.<anonymous> (test/page-content.spec.js:12:5)
at processImmediate (node:internal/timers:466:21)
I also want to note that the multi-assert module works with other testing frameworks; it's not limited to Mocha and Chai. It's worth noting, though, that Jasmine has evaluated soft assertions by default for quite some time, owing to how tightly its test runner and assertion library are integrated. If switching to Jasmine is not an easy or desired solution, and if existing assertion methods alone don't provide the desired level of feedback, then you can see the simplicity of wrapping existing assertions in a multiAssert call to achieve this transparency in your test cases. Hoping this helps!
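For comparison, here is a sketch of the Jasmine behaviour mentioned above: a failed expect() is recorded rather than thrown, so all three expectations are evaluated and every missing name is reported for the spec.

// Jasmine spec: a missing 'Alice' does not stop 'Bob' and 'John'
// from being checked as well.
describe('some page', function () {
    it('contains Alice, Bob, and John', function () {
        var pageContent = 'dummy page content';
        expect(pageContent).toContain('Alice');
        expect(pageContent).toContain('Bob');
        expect(pageContent).toContain('John');
    });
});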