Does one-way binding imply there are no watchers using a setTimeout function?

From what I am reading, two-way binding means there are "watchers" that run via setTimeout in JavaScript every so many milliseconds to keep the model and view in line with each other.
In one-way binding, once the view is populated, are the events assigned to an element what trigger syncing between model and view, thus no "watchers"? My reason for asking is that I do not believe using "watchers" is wise, but this will help me verify. The browser's JS event loop seems sufficient for events.
(Please assume Angular, React, and Vue, if it is necessary to answer the question.)
I have read this,
One-Way or One-Time binding in Angular and Watchers

Server-side rendering with ReactDOM.hydrate

hydrate landed in React 16, but its use isn't documented yet.
This article suggests that it is supposed to be used with renderToNodeStream but doesn't give much detail.
What is the expected usage of hydrate?
renderToString is synchronous. It also cannot handle re-rendered components, i.e. when synchronous (same-tick) changes happen in component state during the initial render and are supposed to trigger additional render calls. An example is Helmet, which requires a workaround in order to propagate changes from a nested Helmet back to the top-level component on the server side.
Can hydrate and renderToNodeStream help avoid renderToString's limitations and render asynchronous and/or re-rendered components on the server side?
hydrate's usage is not limited to renderToNodeStream - you can (actually should) also use it with the classic renderToString. renderToNodeStream is essentially the same as renderToString, with the exception that it produces an HTTP stream instead of a string. This means you can send the rendered HTML to the client byte by byte during rendering, whereas with the standard renderToString you have to wait for the whole HTML string to be rendered first, and only then can you send it to the client.
ReactDOM.hydrate is a replacement for the standard ReactDOM.render. The basic (and only?) difference is that, contrary to ReactDOM.render, it doesn't throw away the entire DOM if React's client-side checksum doesn't match the one calculated on the server. Instead, it tries to attach the React client app to the server-rendered DOM even if there are some subtle differences, by patching just the differing parts.
Due to the streaming nature of renderToNodeStream, Helmet's server-side usage is practically impossible in the current state of the library - the head part of the DOM has already been sent to the client by the time React gets to compute the DOM including Helmet's components. The stream can't just rewind and append Helmet's changes to head.
So to sum up, answering your question - renderToNodeStream solves the problem of synchronous rendering to a string by streaming instead, but it introduces a new problem: content already pushed down the stream can't be patched if some later part of the React app requires it. It doesn't add anything in terms of state changes and re-rendering on the server side. hydrate, on the other hand, doesn't introduce anything new in this area - it's just a tuned-up, more forgiving version of the old render.
The official docs explain a lot! https://reactjs.org/docs/react-dom.html
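A minimal sketch of the typical pairing (this assumes React 16 with a JSX build step; `App` is a stand-in for your root component):

```jsx
// server.js - render the app to markup and embed it in the page,
// e.g. inside <div id="root">...</div> in your HTML template
import ReactDOMServer from 'react-dom/server';
const html = ReactDOMServer.renderToString(<App />);

// client.js - attach to the existing server-rendered markup
// instead of throwing it away and re-rendering from scratch
import ReactDOM from 'react-dom';
ReactDOM.hydrate(<App />, document.getElementById('root'));
```

The same client-side hydrate call works whether the server used renderToString or renderToNodeStream, since both produce the same markup.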

Auto complete with AJAX or Socket.io?

I'm building a search app using Node.js and Express, and I want to add an autocomplete feature. I previously used Socket.io to build a chat app, so Socket.io came to mind first.
But I did some research and it looks like many people are using AJAX for autocomplete, so what is the difference between the two implementations?
I don't really have much experience with TCP and HTTP protocols so I would really appreciate clear and simple answers for noobs :)
First things first, your use case is to create an autocomplete feature. What does that mean? It means that when you insert a letter into your input field, you request the server with the term you want to find, in order to receive all the autocomplete values.
As a developer, when you read these feature details you should have the word event in mind - in our case the keypress event. So each time this event is triggered, you want to request the server to get the autocomplete list.
What possibilities do you have to do that?
The first and most commonly used for this type of scenario is a simple AJAX call, which sends a request and, when finished, updates the autocomplete with the corresponding details. As we see, in this case a request can potentially be made for each letter typed (usually you implement a debounce function to reduce the number of calls). The good thing here is that the connection is closed once you have received your details, and there are plenty of jQuery plugins that do exactly this just fine.
The second approach is to use socket.io, which is also a viable option: you open your connection once, and for each keypress event you emit your request for details, which will usually be faster because you reuse the existing connection. The con here is that you will need to build it yourself - I do not know of any plugins that implement autocomplete with socket.io.
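As a sketch of the debounce idea mentioned above (plain JavaScript; the names are illustrative, not from any particular plugin):

```javascript
// Debounce: delay calling `fn` until `delayMs` ms have passed without
// a new call, so fast typing produces one request instead of many.
function debounce(fn, delayMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

// Hypothetical wiring - `fetchSuggestions` would be your AJAX call:
// input.addEventListener('keyup',
//   debounce(e => fetchSuggestions(e.target.value), 250));
```

Only the last value typed within the delay window reaches the server, which is usually what you want for autocomplete.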
Conclusion
Socket.io
faster due to reuse of existing connection
more manual work, very few plugins/extensions
good for the case when you are already using socket.io in your app
overkill just for the autocomplete feature
Ajax
slower in comparison with socket.io
tons of plugins
overall a good solution for this use case.
Socket.io/WebSockets are primarily for real-time interactions between the server and the client(s). Socket.io also requires a constant connection and more setup to have the server respond to a single client. Either way, the speed will primarily depend on server processing. In the case of a search autocomplete, where you're literally sending a request to the server and expecting a single response back to the requesting client, I'd personally go with the AJAX route. This question has a few good answers that go into a bit more detail: What is the disadvantage of using websocket/socket.io where ajax will do?

In an isomorphic Redux app, is it better practice to keep API calls small, or to send over all information in one go?

I am building a sports data visualization application with server-side rendering in React (ES6)/Redux/React-Router-Redux. At the top, there is a class-based App component, and there are two different class-based component routes. (everything under those is a stateless functional component), structured as follows:
App
|__ Index (/)
|__ Match (/match/:id)
When a request is made for a given route, one API call is dispatched, containing all information for the given route. This is hosted on a different server, where we're using Restify and Sequelize ORM. The JSON object returned is roughly 12,000 to 30,000 lines long and takes anywhere from 500ms to 8500ms to return.
Our application, therefore, takes a long time to load, and I'm thinking that this is the main bottleneck. I have a couple of options in mind.
Separate this huge API call into many smaller API calls. Although, since JS is single-threaded, I'd have to measure the speed of the render to find out if this is viable.
Attempt lazy loading by dispatching a new API call when a new tab is clicked (each match has several games, all in new tabs)
Am I on the right track? Or is there a better option? Thanks in advance, and please let me know if you need any more examples!
This depends on many things including who your target client is. Would mobile devices ever use this or strictly desktop?
From what you have said so far, I would opt for "lazy loading".
Either way, you generally never want any app to force a user to wait at all, especially not for over 8 seconds.
You want your page to be sent and to show something that works as quickly as possible. This means you don't want to wait until all data resolves before your UI can be hydrated. (This is what will have to happen if you are truly server-side rendering, because in many situations your client application would be built and delivered at least a few seconds before the data is resolved and sent over the line.)
If you have mobile devices with spotty network connections, they will likely never see this page due to timeouts.
It looks like paginating and lazy loading based on accessing other pages might be a good solution here.
In this situation you may also want to look into persisting the data and caching. This is a pretty big undertaking and might be more complicated than you would want. I know some colleagues who might use libraries to handle most of this stuff for them.
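A minimal sketch of the lazy-loading idea combined with simple in-memory caching; `fetchGame` here is a hypothetical API client, and caching the promise itself means repeated tab clicks share a single request:

```javascript
// Lazy-load a match's game data the first time its tab is opened,
// and reuse the in-flight or completed request afterwards.
const gameCache = new Map();

function loadGame(gameId, fetchGame) {
  if (!gameCache.has(gameId)) {
    // Store the promise, not the value, so concurrent clicks
    // don't trigger a second fetch while the first is pending.
    gameCache.set(gameId, fetchGame(gameId));
  }
  return gameCache.get(gameId);
}

// In a Redux app this logic would typically live in a thunk that
// dispatches loading/loaded actions around the fetch.
```

Each tab click then costs at most one small API call instead of the single 12,000-30,000-line response up front.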

Is there a way to run custom code on Azure Cache expiration? (where last cached value is accessible)

What I mean is a kind of event or callback which is called when some cached value is expiring. Supposedly this callback should be given the currently cached value, for example, to store it somewhere else apart from the cache.
To find such a way, I reviewed the Notifications option, but it appears applicable only to explicit actions on the cache, like adding or removing, whereas expiration is a kind of thing that occurs implicitly. I found out that none of these callbacks is called even many minutes after a cache value has expired and become null, while a callback is invoked normally within the polling interval if I call DataCache.Remove explicitly (wrong, see update below).
I find this behavior strange as ASP.Net has such callback. You can even find an explanation how to utilize it here on SO.
Also, I tried DataCache Events. MSDN literally states:
This API supports the .NET Framework infrastructure and is not intended to be used directly from your code.
Nevertheless, I created a handler for these events to see if I could test their args, like CacheOperationStartedEventArgs.OperationType == CacheOperationType.ClearCache, but it seemed to be in vain.
At the moment, I have started to think about workarounds for the lack of the required callback, so suggestions on how to implement them are welcome too.
UPDATE. After more attentive and patient testing, I found out that a notification with DataCacheOperations.ReplaceItem is sent after expiration. Regrettably, I did not find a way to get the value that was cached before the expiration occurred.

How should one initialize a page in backbone.js, to minimize HTTP requests and latency

I am learning about backbone, and from the examples and tutorials I have gotten this impression:
The initial GET / returns the skeleton page with backbone and the view templates.
Backbone uses the REST API to flesh out the page by GETting the data it needs and adding it into the DOM.
This works, but it seems wasteful in terms of additional HTTP requests and latency from the perspective of the end user (a minimum of two round-trips before the page is visible. Actually three in my case, since the API must first be asked which widgets are available, and then fetch the details of any available widgets...).
Is there an established, standard method for getting around this? My current candidates are:
Ignore the problem.
Embed the initialization data directly into the original page via inline javascript.
Render the page as if backbone didn't exist. When backbone finishes initializing, it will (hopefully) be in sync with the page as the user sees it. It can correct anything it needs to if things changed in the intervening couple of seconds, but at least the user is not left hanging.
Another solution I haven't thought of?
Is there an established way to do this? Is it situation-specific? I am using Node / JS / Express.
Update: I think I found a solution, possibly the "accepted" solution, in Backbone's documentation:
Here's an example using reset to bootstrap a collection during initial page load, in a Rails application.
<script>
var Accounts = new Backbone.Collection;
Accounts.reset(<%= @accounts.to_json %>);
</script>
Calling collection.reset() without passing any models as arguments will empty the entire collection.
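On Node/Express (rather than Rails), the same bootstrapping trick might look like the sketch below; `bootstrapScript` is an illustrative helper, and escaping `<` keeps data containing `</script>` from closing the inline script tag early:

```javascript
// Build an inline <script> that seeds a Backbone collection with
// server-side data, avoiding the extra round-trip on page load.
function bootstrapScript(varName, data) {
  // Escape "<" so a value containing "</script>" can't break out of the tag.
  const json = JSON.stringify(data).replace(/</g, '\\u003c');
  return '<script>var ' + varName + ' = new Backbone.Collection; ' +
         varName + '.reset(' + json + ');</script>';
}

// In an Express route you might interpolate the result into your template:
// res.send(renderPage({ bootstrap: bootstrapScript('Accounts', accounts) }));
```

The collection is then already populated when Backbone initializes, with no extra GET against the REST API.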
