I know that each tab has its own process, but is each tab also running in a single thread?
My assumption would be that the JavaScript has its own thread and that there is another thread for doing things such as HTTP calls.
I'm running a test to try and demonstrate this by making an HTTP call with a lag time. However, I notice that when the HTTP call actually happens, I see no evidence of it in the devtools until I get a reply, which would be consistent with single-threaded behavior, unless the devtools were just designed to show it that way. Does anyone have any insight into this?
Javascript
this.http.get(this.staffUrl)
    .toPromise()
    .then(response => response.json() as Staff[])
    .catch(this.handleError);
PHP
public function index()
{
    sleep(5);
    return response(DemoStaff::all(), 200);
}
Thanks
I want to build an extension that behaves like a timer. It should count down the seconds when activated, but do nothing when inactive.
The chrome.alarms API is interesting, but does not have enough precision or granularity. It fires at most once per minute, and it may fire late. If I want something to execute more often than that, I can't use this API.
The next natural solution is to use a background page and run setTimeout or setInterval in there. However, background pages are persistent, and they take up resources (e.g. memory) even when idle, so they are not ideal.
The best solution seems to be an event page to run the timer. However, the documentation says:
Once it has been loaded, the event page will stay running as long as it is active (for example, calling an extension API or issuing a network request).
[…]
Once the event page has been idle a short time (a few seconds), the runtime.onSuspend event is dispatched. The event page has a few more seconds to handle this event before it is forcibly unloaded.
[…]
If your extension uses window.setTimeout() or window.setInterval(), switch to using the alarms API instead. DOM-based timers won't be honored if the event page shuts down.
Unfortunately, having an active setInterval is not enough to consider an event page active. In fact, from my tests, an interval up to 10 seconds is short enough to keep the event page running, but anything greater than 10 or 15 seconds is too far apart and the event page will get unloaded. I've tested this on my crx-reload-tab project.
I believe what I want is a middle ground:
I want a background page that I can load and unload on demand. (Instead of one that stays loaded all the time.)
I want an event page that stays persistent in memory for as long as I say; but otherwise could be unloaded. (Instead of one that gets unloaded automatically by the browser.)
Is it possible? How can I do it?
Background pages cannot be unloaded on demand, and Chrome decides the event page lifecycle for you (there is nothing you can do in onSuspend to prevent it).
If your concern is timers, you could try my solution from this answer, which basically splits a timer into shorter timers for a "sparse" busy-wait. That's enough to keep the event page loaded and is a viable solution if you don't need to do that frequently.
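For illustration, here is a minimal sketch of that splitting idea (sparseBusyWait is a made-up name, and the 9-second chunk assumes the roughly 10-second idle threshold observed in the question):

// chain sub-10-second timeouts so the event page never idles long
// enough to be unloaded, then fire the callback at the end
function sparseBusyWait(totalMs, callback, chunkMs) {
  chunkMs = chunkMs || 9000;
  if (totalMs <= chunkMs) {
    setTimeout(callback, totalMs); // final short hop
  } else {
    setTimeout(function() {
      sparseBusyWait(totalMs - chunkMs, callback, chunkMs);
    }, chunkMs);
  }
}

// e.g. run something in 60 seconds without the event page suspending:
sparseBusyWait(60 * 1000, function() { console.log('timer fired'); });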
In general, there are some things that will keep an event page loaded:
If you're using message passing, be sure to close unused message ports. The event page will not shut down until all message ports are closed.
This can be exploited if you have any other context to keep an open Port to, for example a content script. See Long-lived connections docs for more details.
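For illustration, a minimal sketch of that exploit from a content script (the port name is arbitrary; this is an assumption-laden sketch, not code from the docs):

// content script: opening a long-lived port keeps the event page loaded
var port = chrome.runtime.connect({ name: 'keep-alive' });

// event page: the page stays up while any port remains open
chrome.runtime.onConnect.addListener(function(port) {
  // keep a reference; call port.disconnect() when the event page
  // no longer needs to stay resident
});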
In practice, if you often or constantly need precise, sub-minute timers, an Event page is a bad solution. Your resource gains from using one might not justify it.
As mentioned in Xan's answer, we can abuse messaging. There's nothing wrong with that either, in case you want to temporarily prevent the event page from unloading: for example, while displaying a progress meter using the chrome.notifications API, or during any other setTimeout/setInterval-based activity that may exceed the default unload timeout of 5-15 seconds.
Demo
It creates an iframe in the background page, and the iframe connects to the background page. In addition to manifest.json and a background script, you'll need to make two additional files, bg-iframe.html and bg-iframe.js, with the code specified below.
manifest.json excerpt:
"background": {
"scripts": ["bg.js"],
"persistent": false
}
bg.js:
// create an iframe that opens a message port, keeping this page loaded
function preventUnload() {
  let iframe = document.querySelector('iframe');
  if (!iframe) {
    iframe = document.createElement('iframe');
    document.body.appendChild(iframe).src = 'bg-iframe.html';
  }
}

// remove the iframe; its port closes, so the page may suspend again
function allowUnload() {
  let iframe = document.querySelector('iframe');
  if (iframe) iframe.remove();
}

// a listener must be registered so the connection from the iframe stays open
chrome.runtime.onConnect.addListener(() => {});
bg-iframe.html:
<script src="bg-iframe.js"></script>
bg-iframe.js:
chrome.runtime.connect();
Usage example in bg.js:
chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  if (message === 'start') doSomething();
});

function doSomething() {
  preventUnload();
  // do something asynchronous that's spread over time
  // like for example consecutive setTimeout or setInterval calls
  let ticks = 20;
  const interval = setInterval(tick, 1000);
  function tick() {
    // do something
    // ................
    if (--ticks <= 0) done();
  }
  function done() {
    clearInterval(interval);
    allowUnload();
  }
}
I use this function:
function _doNotSleep() {
  if (isActive) {
    setTimeout(() => {
      // requesting an extension resource counts as activity
      // and resets the idle timer
      fetch(chrome.runtime.getURL('manifest.json'));
      _doNotSleep();
    }, 2000);
  }
}
But the problem with this approach is that the DevTools network tab gets polluted with these HTTP stubs.
I'm new to Node.js and I wonder whether the below-mentioned snippets of code have a multisession problem.
Say I have a Node.js server (Express) and I listen for a POST request:
app.post('/sync/:method', onPostRequest);

// a function declaration is hoisted, so it is already defined
// when app.post() registers it above
function onPostRequest(req, res) {
  // parse request and fetch email list
  var emails = [....]; // pseudocode
  doJob(emails);
  res.status(200).end('OK');
}
function doJob(_emails) {
  try {
    var emailsFromFile = fs.readFileSync(FILE_PATH, "utf8") || {};
    if (_.isString(emailsFromFile)) {
      emailsFromFile = JSON.parse(emailsFromFile);
    }
    _emails.forEach(function(_email) {
      if (!emailsFromFile[_email]) {
        emailsFromFile[_email] = 0;
      } else {
        emailsFromFile[_email] += 1;
      }
    });
    // write the object back
    fs.writeFileSync(FILE_PATH, JSON.stringify(emailsFromFile));
  } catch (e) {
    console.error(e);
  }
}
So the doJob method receives the _emails list, and I update (counter +1) these emails in the emailsFromFile object loaded from the file.
Say I get 2 requests at the same time and doJob is triggered twice. I'm afraid that while one request has loaded emailsFromFile from the file, the second request might change the file's content.
Can anybody shed some light on this issue?
Because the code in the doJob() function is all synchronous, there is no risk of multiple requests causing a concurrency problem.
If you were using async IO in that function, then there would be possible concurrency issues.
To explain, Javascript in node.js is single threaded. So, there is only one thread of Javascript execution running at a time and that thread of execution runs until it returns back to the event loop. So, any sequence of entirely synchronous code like you have in doJob() will run to completion without interruption.
If, on the other hand, you use any asynchronous operations such as fs.readFile() instead of fs.readFileSync(), then that thread of execution will return back to the event loop at the point you call fs.readFile(), and another request can run while the file is being read. If that were the case, then you could end up with two requests conflicting over the same file. In that case, you would have to implement some form of concurrency protection (some sort of flag or queue). This is the type of thing that databases offer lots of features for.
I have a node.js app running on a Raspberry Pi that uses lots of async file I/O, and I can get conflicts in that code from multiple requests. I solved it by setting a flag anytime I'm writing to a specific file; any other requests that want to write to that file first check that flag, and if it is set, they go into my own queue and are served when the prior request finishes its write operation. There are many other ways to solve this too. If it happens in a lot of places, then it's probably worth just getting a database that offers features for this type of write contention.
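A rough sketch of that flag-plus-queue idea with async I/O (the updateFile and transform names are just for illustration, not my actual code):

const fs = require('fs');

let busy = false;   // set while a write to the file is in progress
const queue = [];   // jobs waiting for their turn

function updateFile(path, transform, done) {
  queue.push({ path: path, transform: transform, done: done });
  drain();
}

function drain() {
  if (busy || queue.length === 0) return;
  busy = true;
  const job = queue.shift();
  fs.readFile(job.path, 'utf8', (err, data) => {
    const obj = job.transform(err ? {} : JSON.parse(data));
    fs.writeFile(job.path, JSON.stringify(obj), (err) => {
      busy = false;
      if (job.done) job.done(err);
      drain(); // serve the next queued request, if any
    });
  });
}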
I don't know how to ask my question correctly, but for example I have a structure like this:
get_data: function() {
  this.unblock();
  request("example.com", Meteor.bindEnvironment(function(error, response, body) {
    if (!error && response.statusCode == 200) {
      $ = Cheerio.load(body); // get HTML of example.com
      $(".someclass").each(function() {
        if (!somedata_doesnt_exist_in_Mongo) {
          request(nexturl, Meteor.bindEnvironment(function(error, response, body) {
            //... logic
          }));
        }
      });
    }
  }));
}
The main idea is that I get data from many sites, like an aggregator, and I have a lot of methods like this, which takes a lot of time. So I have 2 questions:
1 - For Meteor guys: when I use this.unblock(), does this ensure that my method will work without taking up customers' time, as if it were working in another thread?
2 - How can I optimize a code structure like the one above?
Sorry if it's not in Stack Overflow format, but I'd appreciate any help!
this.unblock is relevant only to each client individually. It allows subsequent method calls from client A to run without waiting for the previous method calls from that client A to finish. It is like working in a new thread asynchronously, in the sense that the previous method calls are not blocking for client A in a function using this.unblock. If you have client B, his/her method invocations wouldn't be blocking A's regardless of whether you use this.unblock.
I recommend using this.unblock whenever you are sure subsequent method calls will not rely on the result of the function you use this.unblock in. Sending out emails is the most common example: subsequent method calls do not need the emails to finish sending before doing their job. For your example, I think it should be fine to use this.unblock, but of course it depends on what you plan to do with the results of the code that follows this.unblock.
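For instance, a minimal sketch of the email case (the method name and message fields are just illustrative, assuming Meteor's email package):

Meteor.methods({
  sendWelcomeEmail: function(address) {
    this.unblock(); // let this client's next method calls start right away
    // slow work whose result no subsequent method call depends on
    Email.send({
      to: address,
      from: 'noreply@example.com',
      subject: 'Welcome!',
      text: 'Thanks for signing up.'
    });
  }
});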
I am currently trying to implement the following code with Zombie.js, yet I am unable to make it work:
var Browser = require('zombie');
browser = new Browser();
browser.wait(3000, function() { console.log("ok"); });
So the script should wait 3 seconds before displaying "ok", yet it displays it immediately.
Am I misunderstanding something?
Thanks for your help!
As the documentation states:
Waits for the browser to complete loading resources and processing
JavaScript events.
Since you're not requesting anything, there's nothing to wait for, so Zombie calls the callback immediately. It's more of a maximum timeout kind of thing, not a guaranteed wait.
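If you just want an unconditional 3-second pause rather than waiting on browser activity, a plain timer does it:

// wait 3 seconds, then print "ok"
setTimeout(function() {
  console.log("ok");
}, 3000);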
I'm writing a Node.js application using a global event emitter. In other words, my application is built entirely around events. I find that this kind of architecture works extremely well for me, with the exception of one edge case, which I will describe here.
Note that I do not think knowledge of Node.js is required to answer this question. Therefore I will try to keep it abstract.
Imagine the following situation:
A global event emitter (called mediator) allows individual modules to listen for application-wide events.
An HTTP server is created, accepting incoming requests.
For each incoming request, an event emitter is created to deal with events specific to that request.
An example (purely to illustrate this question) of an incoming request:
mediator.on('http.request', function(request, response, emitter) {
  //deal with the new request here, e.g.:
  response.send("Hello World.");
});
So far, so good. One can now extend this application by identifying the requested URL and emitting appropriate events:
mediator.on('http.request', function(request, response, emitter) {
  //identify the requested URL
  if (request.url === '/') {
    emitter.emit('root');
  }
  else {
    emitter.emit('404');
  }
});
Following this, one can write a module that will deal with a root request:
mediator.on('http.request', function(request, response, emitter) {
  //when root is requested
  emitter.once('root', function() {
    response.send('Welcome to the frontpage.');
  });
});
Seems fine, right? Actually, this code is potentially broken: the line emitter.emit('root') may be executed before the line emitter.once('root', ...) registers its listener. The result is that the listener never gets executed.
One could deal with this specific situation by delaying the emission of the root event to the end of the event loop:
mediator.on('http.request', function(request, response, emitter) {
  //identify the requested URL
  if (request.url === '/') {
    process.nextTick(function() {
      emitter.emit('root');
    });
  }
  else {
    process.nextTick(function() {
      emitter.emit('404');
    });
  }
});
The reason this works is that the emission is now delayed until the current tick of the event loop has finished, and therefore all listeners have been registered.
However, there are many issues with this approach:
one of the advantages of such an event-based architecture is that emitting modules do not need to know who is listening to their events. Therefore it should not be necessary to decide whether the event emission needs to be delayed, because one cannot know what is going to listen for the event, nor whether it needs the emission to be delayed.
it significantly clutters and complicates the code (compare the two examples)
it probably worsens performance
As a consequence, my question is: how does one avoid the need to delay event emission to the next tick of the event loop, such as in the described situation?
Update 19-01-2013
An example illustrating why this behavior is useful: allowing an HTTP request to be handled in parallel.
mediator.on('http.request', function(req, res) {
  req.onceall('json.parsed', 'validated', 'methodoverridden', 'authenticated', function() {
    //the request has now been validated, parsed as JSON, its HTTP method has been overridden when requested, and it has been authenticated
  });
});
If each event like json.parsed were emitted globally with the original request attached, the above would not be possible, because each event might relate to a different request, and you could not listen for a combination of actions executed in parallel for one specific request.
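(onceall is a custom helper in my architecture, not a standard EventEmitter method; a minimal sketch of such a combinator could look like this:)

// invoke `callback` once each named event has fired once on `emitter`
function onceAll(emitter, names, callback) {
  var remaining = names.length;
  names.forEach(function(name) {
    emitter.once(name, function() {
      if (--remaining === 0) callback();
    });
  });
}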
Having both a mediator that listens for events and an emitter that also listens for and triggers events seems overly complicated. I'm sure there is a legitimate reason, but my suggestion is to simplify. We use a global eventBus in our Node.js service that does something similar. For this situation, I would emit a new event.
bus.on('http:request', function(req, res) {
  if (req.url === '/')
    bus.emit('ns:root', req, res);
  else
    bus.emit('404');
});

// note the use of a namespace here to target a specific subsystem
bus.once('ns:root', function(req, res) {
  res.send('Welcome to the frontpage.');
});
It sounds like you're starting to run into some of the disadvantages of the observer pattern (as mentioned in many books and articles that describe this pattern). My solution is not ideal (assuming an ideal one even exists), but:
If you can make a simplifying assumption that the event is emitted only 1 time per emitter (i.e. emitter.emit('root'); is called only once for any emitter instance), then perhaps you can write something that works like jQuery's $.ready() event.
In that case, subscribing to emitter.once('root', function() { ... }) will check whether 'root' was emitted already, and if so, will invoke the handler anyway. And if 'root' was not emitted yet, it'll defer to the normal, existing functionality.
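A minimal sketch of that idea, assuming Node's events module and that each event fires at most once per emitter:

const EventEmitter = require('events');

class StickyEmitter extends EventEmitter {
  constructor() {
    super();
    this.fired = new Map(); // event name -> args it already fired with
  }
  emit(name, ...args) {
    this.fired.set(name, args);
    return super.emit(name, ...args);
  }
  once(name, listener) {
    if (this.fired.has(name)) {
      listener.apply(this, this.fired.get(name)); // latecomers run now
      return this;
    }
    return super.once(name, listener);
  }
}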
That's all I got.
I think this architecture is in trouble: you're doing sequential work (I/O) that requires a definite order of actions, but you still plan to build the app on components that naturally allow a non-deterministic order of execution.
What you can do
Include a context selector in the mediator.on function, e.g. this way:
mediator.on('http.request > root', function( .. ) { } )
Or define it as a submediator:
var submediator = mediator.yield('http.request > root');
submediator.on(function( ... ) {
  emitter.once('root', ... )
});
This would trigger the callback only if root was emitted from http.request handler.
Another, trickier way is to do background ordering, but it's not feasible with your current "one mediator rules them all" interface. Implement the code so that each .emit call does not actually send the event but puts the produced event in a list. Each .once puts a consume-event record in the same list. When all mediator.on callbacks have been executed, walk through the list and sort it by dependency order (e.g. if the list has consume 'root' before produce 'root', swap them). Then execute the consume handlers in order. If you run out of events, stop executing.
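A rough sketch of that buffering idea (the names are illustrative, and the "sort" here is reduced to running all consumers after all produces):

function BufferedEmitter() {
  this.produced = {};   // name -> args, filled by emit()
  this.consumers = [];  // { name, handler }, filled by once()
}

BufferedEmitter.prototype.emit = function(name) {
  this.produced[name] = Array.prototype.slice.call(arguments, 1);
};

BufferedEmitter.prototype.once = function(name, handler) {
  this.consumers.push({ name: name, handler: handler });
};

// call this once all mediator.on callbacks have executed
BufferedEmitter.prototype.flush = function() {
  var produced = this.produced;
  this.consumers.forEach(function(c) {
    if (produced.hasOwnProperty(c.name)) {
      c.handler.apply(null, produced[c.name]);
    }
  });
};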
Oi, this seems like a very broken architecture for a few reasons:
How do you pass around request and response? It looks like you've got global references to them.
If I answer your question, you will turn your server into a purely synchronous function, and you'd lose the power of async Node.js. (Requests would effectively be queued and could only start executing once the previous request is 100% finished.)
To fix this:
Pass request & response to the emit() call as parameters. Now you don't need to force everything to run synchronously anymore, because when the next component handles the event, it will have a reference to the right request & response objects.
Learn about other common solutions that don't need a global mediator. Look at the pattern that Connect was based on many Internet-years ago: http://howtonode.org/connect-it <- describes middleware/onion routing
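A minimal sketch of that first fix, reusing the names from the question:

// the dispatcher carries the context with the event:
if (request.url === '/') {
  emitter.emit('root', request, response);
}

// and the handler receives it as arguments instead of via globals:
emitter.once('root', function(request, response) {
  response.send('Welcome to the frontpage.');
});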