What is the difference between these two jQuery references and why does one not work with .live? - jquery-1.4

EmployeeId is the id of a select element with a set of options. This approach will not work:
var tar = document.getElementById("EmployeeId");
$(tar).live("click", function(){
    console.log("Changed");
});
However, this approach does:
$("#EmployeeId").live("click", function(){
console.log("Changed");
});
What is the difference between $("#EmployeeId") and $(tar)? I was under the impression there was no difference between the two. Moreover, when I try
console.log($(tar));
console.log($("#EmployeeId"));
The same exact thing is sent to the console.
What am I missing, what is different, why is one approach not attaching the event handler?

.live needs a selector to work with, since it binds the event handler to document and then tests whether the origin of the event (or any element in its path) matches that selector. $(tar) wraps a plain DOM element, so the resulting jQuery object carries no selector string and .live has nothing to match against; the two objects log identically, but only one remembers how it was created.
If no selector is provided, it won't work. That's why chaining methods like $('foo').children().live(..) does not work either.
Since jQuery 1.7, .live is deprecated for various reasons listed in its documentation: http://api.jquery.com/live/.
Alternatives are .on (1.7) and .delegate (1.4.2).
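A minimal sketch of the delegated alternatives, using the same #EmployeeId element from the question:
// jQuery 1.7+: .on with an explicit selector argument delegates
// from document, so it also matches elements added to the DOM later.
$(document).on("click", "#EmployeeId", function () {
    console.log("Changed");
});

// jQuery 1.4.2+: .delegate, same idea with the older API.
$(document).delegate("#EmployeeId", "click", function () {
    console.log("Changed");
});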

Related

Ensure a Callback is Complete in Mongo Node Driver

I am a bit new to JavaScript web dev, and so am still getting my head around the flow of asynchronous functions, which can be a bit unexpected to the uninitiated. In my particular use case, I want to execute a routine on the list of available databases before moving into the main code. Specifically, in order to ensure that a test environment is always properly initialized, I am dropping a database if it already exists and then building it from configuration files.
The basic flow I have looks like this:
let dbAdmin = client.db("admin").admin();
dbAdmin.listDatabases(function(err, dbs){/*Loop through DBs and drop relevant one if present.*/});
return await buildRelevantDB();
By peppering some console.log() calls throughout, I have determined that the listDatabases() call basically puts the callback into a queue of sorts: I actually enter buildRelevantDB() before entering the callback passed to listDatabases. In this particular example it seems to work anyway, I think because the call that reads the configuration file is also asynchronous and so puts its items into the same queue, only later; but I find this brittle and sloppy. There must be some way to ensure that the listDatabases portion resolves before moving forward.
The closest solution I found is here, but I still don't know how to get the callback I pass to listDatabases to be like a then as in that solution.
Mixing callbacks and promises is a more advanced technique, so if you are new to JavaScript, try to avoid it. In fact, try to avoid it even if you have already learned everything and become a JS ninja.
The documentation for listDatabases says it is async, so you can just await it without messing about with callbacks:
const dbs = await dbAdmin.listDatabases();
/*Loop through DBs and drop relevant one if present.*/
Next, there is no need to await immediately before a return. If you can await within a function, the function is async and returns a promise anyway, so just return the promise from buildRelevantDB:
return buildRelevantDB();
Finally, you can drop the database directly. There is no need to iterate over all databases to pick the one you want to drop:
await client.db(<db name to drop>).dropDatabase();
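Putting the pieces together, a minimal sketch of the whole initialization with async/await (the database name "testdb" is a hypothetical stand-in; buildRelevantDB comes from the question):
async function initTestEnvironment(client) {
    // Dropping directly is fine: dropping a database that does not
    // exist is effectively a no-op, so there is no need to call
    // listDatabases() and loop first.
    await client.db("testdb").dropDatabase();

    // buildRelevantDB is async and already returns a promise,
    // so return it without an extra await.
    return buildRelevantDB();
}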

How to read nodejs documentation regarding callback parameters (node v8.x)

I am trying to understand the node.js documentation specifically for the https.get() method. https://nodejs.org/dist/latest-v8.x/docs/api/https.html#https_https_get_options_callback
What is unclear to me is the callback. The example in the document indicates the callback can take a res (response) object as its parameter but I am unsure if this is the only parameter it can take or more importantly where I can find the definition of the res object so I can know what properties and methods I can access on this object.
Is there a straightforward way to identify this?
I have read this thread (Trying to understand nodejs documentation. How to discover callback parameters), and the answers seem to suggest that if there is a non-error argument a callback can take, it will be documented; but I am assuming that answer is outdated.
I've run into the same issue with many Node/NPM packages. Documentation sometimes does not describe the parameters well.
So, welcome to JavaScript in 2018! It's gotten a lot better, though, to be honest.
My go-to method is to try the methods and dump the information myself.
Try a console.dir(res) in your callback:
https.get('https://encrypted.google.com/', (res) => {
    console.dir(res);
});
Alternatively, you can set a breakpoint in the callback and inspect it yourself. You can then probe the arguments object* to see what else, if anything, was passed as an argument, or do another console dump:
https.get('https://encrypted.google.com/', function (res) {
    // console.log (rather than console.dir) so the labels print
    // alongside the objects.
    console.log("args:", arguments);
    console.log("res:", res);
});
EDIT: Wait, apparently the arguments object is not available inside arrow functions, so the second example uses a regular function instead.
*From MDN:
The arguments object is not an Array. It is similar to an Array, but
does not have any Array properties except length.
From your link https://nodejs.org/dist/latest-v8.x/docs/api/https.html#https_https_get_options_callback, you can see that it works like the http version:
Like http.get() but for HTTPS.
With http.get() clickable.
On that page (https://nodejs.org/dist/latest-v8.x/docs/api/http.html#http_http_get_options_callback), we can see this:
The callback is invoked with a single argument that is an instance of http.IncomingMessage
With http.IncomingMessage clickable, linking to this page:
https://nodejs.org/dist/latest-v8.x/docs/api/http.html#http_class_http_incomingmessage
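So the res parameter is an http.IncomingMessage. A minimal sketch of the properties you would typically probe on it, all documented on that class (statusCode, headers, and the streamed body):
const https = require('https');

https.get('https://encrypted.google.com/', (res) => {
    // res is an http.IncomingMessage: a readable stream with the
    // parsed status line and headers attached.
    console.log(res.statusCode);   // e.g. 200
    console.log(res.headers);      // header names, lowercased

    let body = '';
    res.on('data', chunk => { body += chunk; });
    res.on('end', () => console.log(body.length, 'bytes received'));
});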
I agree the Node documentation is not very clear about callbacks in general, and that is a shame. You can still use IDEs with good IntelliSense (and JSDoc to identify the types of function parameters), like VSCode.
Or you can use a debugger, which always works :)
Edit: If you want to see all the parameters sent to a function, you can use rest parameter syntax like this:
function foo(...params) {
    // Here params is an array containing all the arguments
    // that were passed to the function.
}
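For example, applied to the same callback (and unlike arguments, this also works with arrow functions):
const https = require('https');

https.get('https://encrypted.google.com/', (...params) => {
    // params[0] is the IncomingMessage; params.length tells you
    // whether anything else was passed (here it should be 1).
    console.log(params.length, params);
});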
If you want the absolute truth, you can look at the implementation. Though that's fairly time consuming.
If you find that the documentation is wrong, or in this case could be improved by adding a sentence about the callback parameter to https.get(), please open an issue, or, better yet, a pull request. This is where the change needs to be made:
https://github.com/nodejs/node/blob/67790962daccb5ff19c977119d7231cbe175c206/doc/api/https.md

Would the values inside request be mixed up in callback?

I am new to Node.js, and I have been reading questions and answers related with this issue, but still not very sure if I fully understand the concept in my case.
Suggested Code
router.post('/test123', function(req, res) {
    someAsyncFunction1(parameter1, function(result1) {
        someAsyncFunction2(parameter2, function(result2) {
            someAsyncFunction3(parameter3, function(result3) {
                var theVariable1 = req.body.something1;
                var theVariable2 = req.body.something2;
            });
        });
    });
});
Question
I assume there will be multiple (10+, 100+, or whatever) simultaneous requests to one particular route (for example, AJAX requests to /test123, as shown above), each with its own variables (something1 and something2). According to this, it should be impossible for one user's theVariable1 and theVariable2 to be mixed up with (i.e., overwritten by) another user's req.body.something1 and req.body.something2. I am wondering whether this is true when there are multiple nested callbacks (three as above, or ten, just in case).
I am also considering using res.locals to save some data from the callbacks (instead of using theVariable1 and theVariable2), but is that a good idea, given that the data must not be overwritten by other simultaneous requests from clients?
Each request a Node.js/Express server receives generates a new req object.
So in the line router.post('/test123', function(req, res), the req object that's being passed in as an argument is unique to that HTTP request.
You don't have to worry about multiple functions or callbacks. In a traditional application, if I have two objects cat and dog that I can pass to the same listen function, I get back meow and bark, even though there's only one listen function. That's sort of how you can view an Express app: even though you have all these get and post handlers, every user's request is passed to them as a unique entity.
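A minimal sketch of why the nesting depth doesn't matter, using plain closure semantics (handle and the timers are hypothetical stand-ins for the route handler and the async functions):
// Each call to handle() gets its own req; the nested callbacks
// close over that particular req, not some shared variable.
function handle(req) {
    setTimeout(function () {           // stands in for someAsyncFunction1
        setTimeout(function () {       // stands in for someAsyncFunction2
            console.log(req.body.something1); // always this request's value
        }, 10);
    }, 10);
}

handle({ body: { something1: 'from user A' } });
handle({ body: { something1: 'from user B' } });
// Logs 'from user A' and 'from user B', never mixed up.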

Attaching jQuery on a partially downloaded DOM [duplicate]

Essentially I want to have a script execute when the contents of a DIV change. Since the scripts are separate (content script in the Chrome extension & webpage script), I need a way simply observe changes in DOM state. I could set up polling but that seems sloppy.
For a long time, DOM3 mutation events were the best available solution, but they have been deprecated for performance reasons. DOM4 Mutation Observers are the replacement for deprecated DOM3 mutation events. They are currently implemented in modern browsers as MutationObserver (or as the vendor-prefixed WebKitMutationObserver in old versions of Chrome):
var MutationObserver = window.MutationObserver || window.WebKitMutationObserver;

var observer = new MutationObserver(function(mutations, observer) {
    // fired when a mutation occurs
    console.log(mutations, observer);
    // ...
});

// define what element should be observed by the observer
// and what types of mutations trigger the callback
observer.observe(document, {
    subtree: true,
    attributes: true
    //...
});
This example listens for DOM changes on document and its entire subtree, and it will fire on changes to element attributes as well as structural changes. The draft spec has a full list of valid mutation listener properties:
childList
Set to true if mutations to target's children are to be observed.
attributes
Set to true if mutations to target's attributes are to be observed.
characterData
Set to true if mutations to target's data are to be observed.
subtree
Set to true if mutations to not just target, but also target's descendants are to be observed.
attributeOldValue
Set to true if attributes is set to true and target's attribute value before the mutation needs to be recorded.
characterDataOldValue
Set to true if characterData is set to true and target's data before the mutation needs to be recorded.
attributeFilter
Set to a list of attribute local names (without namespace) if not all attribute mutations need to be observed.
(This list is current as of April 2014; you may check the specification for any changes.)
Edit: This answer is now deprecated. See the answer by apsillers.
Since this is for a Chrome extension, you might as well use the standard DOM event - DOMSubtreeModified. See the support for this event across browsers. It has been supported in Chrome since 1.0.
$("#someDiv").bind("DOMSubtreeModified", function() {
alert("tree changed");
});
See a working example here.
Many sites use AJAX/XHR/fetch to add, show, or modify content dynamically, and the window.history API instead of in-site navigation, so the current URL is changed programmatically. Such sites are called SPAs, short for Single Page Applications.
Usual JS methods of detecting page changes
MutationObserver (docs) to literally detect DOM changes.
Info/examples:
How to change the HTML content as it's loading on the page
Performance of MutationObserver to detect nodes in entire DOM.
Lightweight observer to react to a change only if URL also changed:
let lastUrl = location.href;
new MutationObserver(() => {
    const url = location.href;
    if (url !== lastUrl) {
        lastUrl = url;
        onUrlChange();
    }
}).observe(document, {subtree: true, childList: true});

function onUrlChange() {
    console.log('URL changed!', location.href);
}
Event listener for sites that signal content change by sending a DOM event:
pjax:end on document used by many pjax-based sites e.g. GitHub,
see How to run jQuery before and after a pjax load?
message on window used by e.g. Google search in Chrome browser,
see Chrome extension detect Google search refresh
yt-navigate-finish used by Youtube,
see How to detect page navigation on YouTube and modify its appearance seamlessly?
Periodic checking of the DOM via setInterval:
Obviously this will work only when you are waiting for a specific element, identified by its id/selector, to appear; it won't let you universally detect new dynamically added content unless you invent some kind of fingerprinting of the existing contents. A sketch follows.
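A minimal sketch of the polling approach (the selector is a hypothetical example):
// Poll until an element matching the selector appears, then act on it.
const timer = setInterval(() => {
    const el = document.querySelector('#some-dynamic-element'); // hypothetical
    if (el) {
        clearInterval(timer);
        console.log('Element appeared:', el);
    }
}, 500);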
Cloaking History API:
let _pushState = History.prototype.pushState;
History.prototype.pushState = function (state, title, url) {
    _pushState.call(this, state, title, url);
    console.log('URL changed', url);
};
Listening to hashchange, popstate events:
window.addEventListener('hashchange', e => {
    console.log('URL hash changed', e);
    doSomething();
});
window.addEventListener('popstate', e => {
    console.log('State changed', e);
    doSomething();
});
P.S. All these methods can be used in a WebExtension's content script. That's because in the case we're looking at, the URL was changed via history.pushState or replaceState, so the page itself remained the same, along with the same content script environment.
Another approach depending on how you are changing the div.
If you are using JQuery to change a div's contents with its html() method, you can extend that method and call a registration function each time you put html into a div.
(function( $, oldHtmlMethod ){
    // Override the core html method in the jQuery object.
    $.fn.html = function(){
        // Execute the original HTML method using the
        // augmented arguments collection.
        var results = oldHtmlMethod.apply( this, arguments );
        com.invisibility.elements.findAndRegisterElements(this);
        return results;
    };
})( jQuery, jQuery.fn.html );
We just intercept calls to html(), call a registration function with this (which in that context refers to the target element receiving new content), then pass the call on to the original jQuery html() function. Remember to return the results of the original html() method, because jQuery expects them for method chaining.
For more info on method overriding and extension, check out http://www.bennadel.com/blog/2009-Using-Self-Executing-Function-Arguments-To-Override-Core-jQuery-Methods.htm, which is where I cribbed the closure function. Also check out the plugins tutorial on jQuery's site.
In addition to the "raw" tools provided by MutationObserver API, there exist "convenience" libraries to work with DOM mutations.
Consider: MutationObserver reports each DOM change in terms of subtrees. So if you're, for instance, waiting for a certain element to be inserted, it may be deep inside the children of mutations[i].addedNodes[j].
Another problem is when your own code, in reaction to mutations, changes DOM - you often want to filter it out.
A good convenience library that solves such problems is mutation-summary (disclaimer: I'm not the author, just a satisfied user), which enables you to specify queries of what you're interested in, and get exactly that.
Basic usage example from the docs:
var observer = new MutationSummary({
    callback: updateWidgets,
    queries: [{
        element: '[data-widget]'
    }]
});

function updateWidgets(summaries) {
    var widgetSummary = summaries[0];
    widgetSummary.added.forEach(buildNewWidget);
    widgetSummary.removed.forEach(cleanupExistingWidget);
}

Remove and restore Scope from digest cycles

Is there a way to remove a scope from the digest cycles? In other words, to suspend/resume a scope digest cycle?
In my case, I have all pages already loaded, but not all of them visible. So I'd like to suspend the ones that aren't visible to avoid useless processing. I don't want to use ng-view + $route, I don't want/need deep-linking.
I saw this thread and arrived to this fiddle. It probably does the work, but it's pretty invasive and not very framework-update-friendly.
Is there any other solution like a $scope.suspend() and scope.resume()? Or a less invasive one (from framework perspective)? I'm currently thinking about $destroy and $compile cycles.
I've run into the same problem and found an interesting solution that doesn't interfere (too much) with AngularJS. Add this to the scopes you want to disable:
var watchers;

scope.$on('suspend', function () {
    watchers = scope.$$watchers;
    scope.$$watchers = [];
});

scope.$on('resume', function () {
    scope.$$watchers = watchers;
    watchers = null;
});
Then, you can disable a scope and its children with: scope.$broadcast('suspend') and bring it back with scope.$broadcast('resume').
As the framework stands today there are no methods to suspend/resume digest on a scope. Having said this, there are several techniques one can use to limit the number of watchers executed as part of a digest cycle.
First of all, if parts of a screen are hidden anyway, you could use the ng-switch family of directives, thus removing invisible parts completely from the DOM.
Secondly, if a digest cycle is triggered from your directive via $apply and you want to limit watcher re-evaluation to child scopes, you could call $digest instead of $apply, as sketched below.
Then, yes, one could destroy and re-create scopes as described in the discussion you've linked to. But if you are already hiding parts of the DOM, it sounds like ng-switch might be a better option.
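A minimal sketch of the $digest-instead-of-$apply point (module and directive names are hypothetical):
// Hypothetical module/directive for illustration.
var app = angular.module('demo', []);

app.directive('localUpdate', function () {
    return {
        link: function (scope, element) {
            element.on('click', function () {
                scope.counter = (scope.counter || 0) + 1;
                // scope.$apply() would trigger a digest from $rootScope;
                // scope.$digest() re-evaluates watchers on this scope
                // and its children only.
                scope.$digest();
            });
        }
    };
});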
