I am trying to throttle hosted Node.js applications. The applications are user-created in a web IDE, and it seems a single one can knock out the entire server.
Do we need to implement this in C++ and rebuild Node.js ourselves?
If you are using Linux, you can try something like "renice" to lower the priority of each of these processes. Node.js is no different from hosting Python, Perl, or PHP applications: any of them can consume a lot of CPU if the program is written poorly or the application is processing many requests.
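For example, if the hosted apps are launched as child processes, something like the following gets the same effect as renice from inside Node (a rough sketch assuming Node.js 10.10+ for os.setPriority, with user-app.js standing in for one of the hosted applications):

```javascript
// Rough sketch: spawn a hosted user app as a child process and lower its
// scheduling priority, the programmatic equivalent of running renice on it.
// 'user-app.js' is only a placeholder for one of the hosted applications.
const { spawn } = require('child_process');
const os = require('os');

const child = spawn('node', ['user-app.js'], { stdio: 'inherit' });

if (child.pid) {
  // 19 is the "nicest" (lowest) priority on Linux; lowering the priority
  // of your own child process needs no extra privileges.
  os.setPriority(child.pid, 19);
}
```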
If by "knock out the entire server" you mean can cause a kernel panic, make sure you have the latest version of node.js and your server is up to date. This should never happen.
I have created a Node server using Express which runs a Python program (on the GPU, in the background) to process the inputs/requests. I am able to display the output in the browser. How can I make my Node server scalable enough to handle many concurrent requests? Should I use a queue for handling requests? If yes, how do I start? I need a definite approach.
Node.js is set up to be highly performant by default thanks to its asynchronous event loop.
However, for multi-core systems, you could look at the cluster module to scale out your API: https://nodejs.org/api/cluster.html
Note that your Node.js backend should ideally be stateless for scalability.
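For example, a minimal cluster setup might look like this (a sketch assuming your Express app is exported from a hypothetical ./app module):

```javascript
// Minimal sketch: fork one worker per CPU core, each worker running the
// same Express app. './app' is an assumed module exporting your app.
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, forking a new one`);
    cluster.fork();
  });
} else {
  const app = require('./app');
  app.listen(3000, () => console.log(`worker ${process.pid} listening`));
}
```

Because the workers are stateless, any job state or results should live in an external store or queue rather than in a worker's memory.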
I'm running Electron on a Linux server for web scraping, and currently I'm starting a new electron command for each task, but that results in high CPU usage. Now I'm thinking about running a single Electron instance and creating a new BrowserWindow for each task. It will take some time to adapt the code base to this style, so I wanted to ask here first: will it make a difference in CPU usage, and how much?
Basically, creating a new NodeJS process means re-parsing your application's code, which significantly increases your CPU usage. Creating a new BrowserWindow only spawns a new renderer process, which is far more efficient.
If your application is packaged, e.g. with electron-packager, then launching a new instance affects your CPU usage just like starting another NodeJS process, because the packaged ("compiled") application ships its own copy of NodeJS, which is enough to run your code but still costs the same CPU.
But the decision depends on how you use the server. If you only run the Electron application to carry out tasks you defined yourself, adapting your working code would bring little to no benefit. If you want to release this application, and/or the server is also used for other things, e.g. as a web server, then adapting your code would be a real benefit.
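As a rough illustration of the single-instance approach (a sketch; runTask and the example URL are just placeholders):

```javascript
// Sketch: a single Electron main process that opens a hidden renderer
// (BrowserWindow) per scraping task instead of launching a new instance.
const { app, BrowserWindow } = require('electron');

function runTask(taskUrl) {
  const win = new BrowserWindow({ show: false });
  win.loadURL(taskUrl);
  win.webContents.once('did-finish-load', () => {
    // ...extract whatever the task needs here, then free the renderer.
    win.destroy();
  });
}

app.whenReady().then(() => {
  runTask('https://example.com/');
});
```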
Running multiple instances of the main Node.js process with the default configuration is not actually supported or tested. You'll find that any feature that persists data to disk either doesn't work or doesn't work as expected (i.e. localStorage, IndexedDB, sessions, etc.).
https://github.com/electron/electron/issues/2493
You can work around this by changing the data directory for each instance so they don't trample over each other, but this is likely to use a lot of disk space and you'd need a way to keep track of all these data directories.
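A sketch of that workaround, assuming you pass each instance an INSTANCE_ID environment variable of your own choosing:

```javascript
// Sketch: give every Electron instance its own userData directory before
// the app is ready, so localStorage/IndexedDB/sessions don't collide.
const path = require('path');
const { app } = require('electron');

const instanceId = process.env.INSTANCE_ID || String(process.pid);
app.setPath('userData', path.join(app.getPath('temp'), `instance-${instanceId}`));
```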
A single main process with multiple renderers is nearly always the answer.
I have a simple Node.js server app built that I'm hoping to test out soon. It's single-threaded and works fine without any child processing whatsoever. My problem is that the server box has multiple cores, and the simplest way I can think of to utilize them is to run multiple instances of the server app. However, they would all need to be on the same domain name, so some sort of request routing is required. I personally don't have much experience with servers in general and don't know if this is a task for Node.js to perform or for some other, less (or more) complicated program. If there is a Node.js mechanism to solve this, for example if one running instance can send incoming requests to the next instance, then how would I detect when this needs to happen? Conversely, if I use some other program, how will it detect when it needs to start talking to a new instance?
Node.js includes built-in support for managing a cluster of instances of your application to take advantage of multiple cores via the cluster module.
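For example (a minimal sketch with a plain http server): every worker listens on the same port, and the cluster module distributes incoming connections among them, so you don't have to detect anything or route requests yourself.

```javascript
// Sketch: all workers call listen(8000) on the same port; the cluster
// module shares the server handle and hands out incoming connections,
// so no explicit request routing between instances is needed.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  os.cpus().forEach(() => cluster.fork());
} else {
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(8000);
}
```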
Is there anything that is similar to PHP's APC (Alternative PHP Cache) for Node.js?
So every Node.js thread running on a server can access the cache. I know the architecture of Node.js may not allow for an APC-like cache easily, or at all.
I know we can of course run memcached on each server as well to create a server-level cache, but I was curious whether there was any alternative.
thanks
Node tries to keep only the basic stuff in its core API, so you won't find such a thing "baked in" (for example, WebSockets aren't included in Node core but are provided by external modules).
You would need to create such a cache layer using something like Redis or Memcached.
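For instance, a small cache helper on top of Redis might look like this (a sketch assuming the npm redis v4 client and a Redis server on localhost; renderPage is a hypothetical expensive function):

```javascript
// Sketch: a cache shared by every Node process on the machine, backed by
// Redis. Values are stored as strings with a TTL, roughly like APC entries.
const { createClient } = require('redis');

const client = createClient(); // defaults to redis://localhost:6379

async function getCached(key, compute, ttlSeconds = 60) {
  let value = await client.get(key);
  if (value === null) {
    value = String(await compute());
    await client.set(key, value, { EX: ttlSeconds });
  }
  return value;
}

// Usage, after `await client.connect()`:
//   const html = await getCached('home', () => renderPage());
```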
P.S. You should refer to Node processes rather than threads, since you don't have to deal with threading yourself in Node.
I don't know if this module helps at all.
I can't guarantee its reliability, and I never kept my promise to add a Windows API, as I'm a bit of a Linux snob (as in, nothing Microsoft comes near my PC):
https://github.com/dazhazit/node-ipcbuffer
It implements a simple byte buffer between processes. You could probably build any mechanism you like on top of it.
I have an application that was ported from Windows to Linux. The same code now compiles with VS C++ and g++, but there is a difference in performance between running on Windows and running on Linux. The purpose of this application is caching: it is a node between a server and a client, and it caches client requests and server responses in a list, so that when another client makes a request the server has already processed, this node responds instead of forwarding the request to the server.
When this node runs on Windows, the client gets everything it needs in about 7 seconds. But when the same node runs on Linux (Ubuntu 9.04), the client starts up in 35 seconds. Every test starts from scratch. I'm trying to understand where this timing difference comes from. A weirder scenario is when the node runs on Linux inside a virtual machine hosted on Windows: in that case, the load time is around 7 seconds, just as when it runs natively on Windows. So my impression is that there is a networking problem.
This node uses the UDP protocol for sending and receiving network data, implemented with boost::asio. I tried changing all the supported socket flags and the buffer sizes, but nothing helped.
Does anyone know why this is happening, or of any UDP-related network settings that might influence the performance?
Thanks.
If you suspect a network problem, take a network capture (Wireshark is great for this kind of problem) and look at the traffic.
Find out where the time is being spent, either based on the network capture or based on the output of a profiler.
Once you know that, you're halfway to a solution.
These timing differences can depend on many factors, but the first one that comes to mind is that you are using a modern Windows version. XP already had features to keep recently used applications in memory, but Vista optimized this much further: for each application you load, a special load file is created that mirrors how the application looks in memory, so the next time you load it, startup is a lot faster.
I don't know about Linux, but it is quite possible that it has to load your app completely each time. You can compare the performance of the two systems much more fairly if you measure while the application is already running: leave it open (if your design allows it) and compare again.
These differences in how each system optimizes memory are also backed up by your VM scenario.
Basically, if you rule out other running applications and run your application at high priority, the performance should be close to equal, but it also depends on whether you use operating-system-specific code, how you access the file system, how you use the UDP protocol, and so on.