I haven't found anything about my problem, so I'd like to ask whether the following could be solved. I have a Node.js server which displays a website with a button. Is it possible to start another Node server (which should run some SpookyJS tests and print the results to the website) when I click this button?
I found out that with NowJS you have a shared space that the server and the "client" (some HTML page) share. Is this module helpful?
Thanks for your help,
Alex
In short - yes!
But consider keeping both web servers running at all times instead. In fact, it'll put less load on your hardware.
1st Server - Application Server - runs at yoursite.com
2nd Server - SpookyJs/Test Server - runs at tests.yoursite.com
Once the servers are up and running, the next thing I'd do is wrap the SpookyJS application with a simple RESTful interface/API: one endpoint to start tests, and a response carrying the result of a test.
An important thing to note here is that when you start the SpookyJS application, let it stay open, so that every request to the SpookyJS application (through your interface) calls the "open" or the "then" method.
Again, this is to remedy the issue of spawning too many headless browsers.
After the request goes through, respond to it with the result that Spooky gives you.
Maybe that helps?
We are doing similar things with Zombie.js... so maybe it will help you (:
Can someone brief me on what happens if I don't use any web server (NGINX) in front of my application server (uWSGI or Gunicorn)?
My requirement is exposing a simple Python script as a web service. I don't have any static content to render. In that scenario, can I go without NGINX?
What issues will I face if I go with a plain app server? Max requests per second would be some 50 to 80 (this is the upper limit).
Thanks, Vijay
If your script acts like a web server, then it is a web server, and you don't need any layer on top of it.
You have to make sure, though, that it acts like one:
listens for connections
handles them concurrently
wakes up upon server restart, etc…
Also:
handles internal connections correctly (eg. to the database)
doesn't leak memory
doesn't die upon an exception
Having an HTTP server in front of a script has one great benefit: the script executes and simply dies. No problems with memory handling and so on… Imagine your script becomes unresponsive; ask yourself what happens then…
I'm struggling with a technical issue, and because I'm pretty new to the Node.js world, I don't think I have the right practices and tools to solve it.
Using the well-known request module, I'm making a streaming proxy from a remote server to the client. Almost everything works properly until a certain point: if there are too many requests at the same time, the server no longer responds. It actually does receive the client request, but it is unable to go through the stream process and serve the content.
What I'm currently doing:
Creating a server with the http module (http.createServer)
Getting the remote URL from a PHP script using exec
Instantiating the stream
How I did it:
http://pastebin.com/a2ZX5nRr
I tried to investigate the pooling options and did not understand everything; likewise, the pool maxSockets option was added recently, but it did not help me. I was also previously setting http.globalAgent to Infinity, but I read that this has not been limited in Node.js for a while, so it does not help.
See here: https://nodejs.org/api/http.html#http_http_globalagent
I also read this: Nodejs Max Socket Pooling Settings, but I'm wondering what the difference is between a custom agent and the global one.
I believed it could be the server, but I tested it on both a very small one and a bigger one, and it was not coming from there. I think it's definitely coming from my app, which has to be better designed. Indeed, each time I restart the app instance, it works again. Also, if I start a fork of the server on another port while the original one is not serving anything, it will work. So it might not be about resources.
Do you have any clue, tools or something that may help me to understand and debug what is going on?
An npm module that can help handle streams properly:
https://www.npmjs.com/package/pump
I made a few tests, and I think I've found what I was looking for: the unpipe mechanism. More info here:
https://nodejs.org/api/stream.html#stream_readable_unpipe_destination
You can read this too; it led me to understand a few things about pipes remaining open when the target fails:
http://www.bennadel.com/blog/2679-how-error-events-affect-piped-streams-in-node-js.htm
So here is what I've done: I'm currently unpiping pipes when the stream's "end" event is fired. You can do this in different ways, depending on how you want to handle things, but you may also unpipe on "error" from the source/target.
Edit: I still have issues; it seems that the stream now unpipes when it does not have to. I'll have to double-check this.
Consider this scenario:
The Socket.io app went down (or restarted) for some reason and took about 2 seconds before it started again (assuming the use of a process manager, e.g. PM2).
Within that downtime, a client tried to request the client socket.io.js script (localhost:xxxx/socket.io/socket.io.js), and the request failed (error 500, 404, or net::ERR_CONNECTION_REFUSED) before the server started again.
After the downtime, the server file is available again.
So now I have no other way but to inform the user to refresh in order to resume real-time transactions.
I cannot retry to reconnect to the Socket.io server because I do not have the client script.
But if it were served somewhere else, perhaps in the same directory as jQuery, I could just check whether io is available again by writing a simple retry function that fires every few seconds.
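That retry function could be sketched like this (retryUntilAvailable, checkFn, and onReady are hypothetical names; in the browser, checkFn would issue a request for /socket.io/socket.io.js and onReady would inject the script tag and reconnect):

```javascript
// Poll until a check succeeds, then run the ready handler exactly once.
function retryUntilAvailable(checkFn, onReady, intervalMs) {
  const timer = setInterval(() => {
    checkFn((available) => {
      if (available) {
        clearInterval(timer);
        onReady();
      }
    });
  }, intervalMs);
  return timer;
}

// Browser usage sketch: checkFn could fetch('/socket.io/socket.io.js',
// { method: 'HEAD' }) and call back with response.ok; onReady could append
// a <script> tag and then call io.connect() again.
```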
In general, it's a good idea to use the version served by Socket.IO, as you'll have guaranteed compatibility. However, as long as you stay on top of making sure you deploy the right versions, it's perfectly fine to host that file somewhere else. In fact, it's even preferred since you're taking the static load off your application servers and putting it elsewhere.
An easy way to do what you want is to configure Nginx or similar to cache that file and serve a stale copy when the upstream server (your Node.js with Socket.IO server) is down. https://serverfault.com/q/357541/52951
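A sketch of that Nginx setup (all names, paths, and the upstream port are illustrative; see the linked serverfault answer for details):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=static:10m;

server {
    location /socket.io/socket.io.js {
        proxy_pass http://127.0.0.1:3000;
        proxy_cache static;
        proxy_cache_valid 200 10m;
        # Keep serving the cached copy if the Node backend is down.
        proxy_cache_use_stale error timeout http_500 http_502 http_503;
    }
}
```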
I'm developing a Node web application, and while testing, one of the client Chrome browsers went into a hung state. The browser entered an infinite loop where it continuously downloaded all the JavaScript files referenced by the HTML page. I rebooted the web server (Node.js), but once it came back online, it continued receiving tons of requests per second from the same browser.
Obviously, I went ahead and terminated the client browser so that the issue went away.
But I'm concerned about how to handle such problem client connections from the server side once my web application goes live/public, since I will have no access to the clients.
Is there anything (an npm module, or some code?) that can make a best guess to detect and handle such bad client connections from within my web server code, and once detected, ignore any future requests from that particular client? I understand that handling this within the Node server might not be the best approach, but at least I could save CPU/network by not responding to the bad requests.
P.S.
Btw, I'm planning to deploy my Node web application onto Heroku with a small budget. So if you know of any firewall/configuration that could handle the above scenario, please do recommend it.
I think it's important to note that this is a pretty rare case. If your application has a very large user base, or there is some other reason you are concerned with DoS/DDoS attacks, it looks like Heroku provides some DDoS security for you. If you have your own server, I would suggest looking into Nginx or HAProxy as load balancers for your app, combined with fail2ban. See this tutorial.
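If you do want a last line of defense inside the app itself, a naive in-process, per-IP rate limiter can be sketched in a few lines (the window and limit values are arbitrary; a real deployment would use Nginx/HAProxy or an existing module rather than this):

```javascript
const hits = new Map(); // ip -> { count, windowStart }
const WINDOW_MS = 10000; // 10-second window (illustrative)
const MAX_HITS = 100;    // max requests per window (illustrative)

// Returns true if this request should be served, false if it should be dropped.
function allowed(ip, now = Date.now()) {
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_HITS;
}

// In an http.createServer handler:
//   if (!allowed(req.socket.remoteAddress)) { res.writeHead(429); return res.end(); }
```

This only saves the cost of rendering responses; the TCP connections still reach Node, which is why a front-end proxy or fail2ban is the better long-term answer.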
Use case:
My Node.js server starts with a configuration wizard that allows the user to change the port and scheme, and even update the Express routes.
Question:
Is it possible to apply such configuration changes on the fly? Restarting the server can definitely bring all the changes online, but I'm not sure how to trigger it from code.
Changing core configuration on the fly is rarely practiced, and Node.js and most HTTP frameworks do not support it at this point either.
Modifying the configuration and then restarting the server is a completely valid solution, and I suggest you use it.
To restart the server programmatically, you have to execute logic outside of Node.js, so that this process can continue once the Node.js process is killed. Granted that you are running the Node.js server on Linux, a Bash script sounds like the best tool available to you.
Implementation will look something like this:
Client presses a switch somewhere on your site powered by Node.js
Node.js then executes some JavaScript code which instructs your OS to run a Bash script, let's say script.sh
script.sh restarts the Node.js server
Done
If any of the steps are difficult, ask about it. Step 1 is something you are likely handling yourself already, though.
I know this question was asked a long time ago but since I ran into this problem I will share what I ended up doing.
For my problem, I needed to restart the server since the user is allowed to change the port on their website. What I ended up doing is wrapping the whole server creation (https.createServer/server.listen) in a function called startServer(port). I call this function at the end of the file with a default port. The user changes the port by accessing the endpoint /changePort?port=3000. That endpoint calls another function, restartServer(server, res, port), which in turn calls startServer(port) with the new port and then redirects the user to the site on the new port.
Much better than restarting the whole Node.js process.