How can I implement hybrid rendering using D3D11? Suppose there are many draw calls for rendering one frame. Can we execute some of those rendering functions on the CPU and others on the GPU? Thanks~
I have a use case in which I have to do image search in a Node.js application. I have image hashes stored in a database, so I need to calculate the hash in the Node.js application. I may have more computer vision use cases in the future. I came across the nodejs-opencv module, but none of the blogs talk about its performance. As Node.js is not supposed to be used for CPU-intensive tasks, and computer vision algorithms may need heavy processing, I am not sure how well it will scale. Has someone used it in production who can provide some details?
Which Node.js OpenCV library are you talking about? If you pick one that provides JavaScript bindings through a native Node module, then you will get essentially the same performance as with C++. I am currently working on an OpenCV 3.x native Node addon to use OpenCV with JavaScript and can tell you that there is really no difference in performance. However, there are also implementations in pure JavaScript, without the need to set up the OpenCV C++ library on your server or local system, but most CV tasks will be significantly slower if implemented in JavaScript, as you already guessed.
The most performant way to use computer vision in a Node.js project is to move the calculations into C++ (a native addon) and use worker threads for the multithreading.
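As a complement to the native-addon route, here is a minimal sketch of keeping the heavy hashing off the main event loop with Node's built-in worker_threads module; computeImageHash() is a stand-in for whatever hashing or CV binding you actually call.

```js
// hash-worker.js — a minimal sketch, assuming Node's built-in worker_threads module;
// computeImageHash() is a placeholder for the real (e.g. native-addon) computation.
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');
const crypto = require('crypto');

function computeImageHash(imagePath) {
  // Placeholder "hash" so the sketch runs; replace with the actual CV hashing call.
  return crypto.createHash('sha1').update(imagePath).digest('hex');
}

if (isMainThread) {
  // Main thread: spawn a worker per job so the event loop stays free for other requests.
  const hashImage = (imagePath) =>
    new Promise((resolve, reject) => {
      const worker = new Worker(__filename, { workerData: imagePath });
      worker.once('message', resolve);
      worker.once('error', reject);
    });

  hashImage('/tmp/example.jpg').then((hash) => console.log('hash:', hash));
} else {
  // Worker thread: the CPU-heavy part runs here, off the main thread.
  parentPort.postMessage(computeImageHash(workerData));
}
```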
I would like to draw 100-200k features/icons/markers to be used with openlayers v3.
Based on my reading, the approach would be either to do clustering, or to let the server handle the drawing, in other words, use a server-rendered layer for that. This also applies to the Google Maps API.
So, I plan to use Node.js (or PHP) for this task, but I did not manage to find a proper Node.js module to do this.
Please recommend a module for this.
Or at least, tell me how I can draw a PNG in Node.
Specifically, I would like to create a layer where a request to my server, e.g. map.my.com/x/y/z.png, would return the markers drawn on the server (or pre-drawn), rather than drawing them on the client side.
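Not a recommendation of a specific module, but as a sketch of the idea: the `canvas` (node-canvas) package plus Express can render such tiles on the server. The route shape, marker list, and styling below are placeholder assumptions.

```js
// tiles.js — a minimal sketch, assuming the `canvas` (node-canvas) and `express` packages.
const express = require('express');
const { createCanvas } = require('canvas');

// Hypothetical marker source: [{ lon, lat }, ...] loaded from your database in reality.
const markers = [{ lon: 13.4, lat: 52.5 }, { lon: 2.35, lat: 48.85 }];

// Project lon/lat to global pixel coordinates at a given zoom (Web Mercator, 256px tiles).
function toGlobalPixel(lon, lat, z) {
  const worldSize = 256 * Math.pow(2, z);
  const x = ((lon + 180) / 360) * worldSize;
  const latRad = (lat * Math.PI) / 180;
  const y = ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * worldSize;
  return { x, y };
}

const app = express();

app.get('/:z/:x/:y.png', (req, res) => {
  const z = Number(req.params.z), tx = Number(req.params.x), ty = Number(req.params.y);
  const canvas = createCanvas(256, 256);
  const ctx = canvas.getContext('2d');

  ctx.fillStyle = 'rgba(200, 0, 0, 0.8)';
  for (const m of markers) {
    const p = toGlobalPixel(m.lon, m.lat, z);
    const px = p.x - tx * 256, py = p.y - ty * 256;
    if (px < -8 || px > 264 || py < -8 || py > 264) continue; // marker outside this tile
    ctx.beginPath();
    ctx.arc(px, py, 4, 0, 2 * Math.PI);
    ctx.fill();
  }

  res.type('png').send(canvas.toBuffer('image/png'));
});

app.listen(3000);
```

The rendered tiles could then be consumed by OpenLayers as an ordinary XYZ tile source, and cached (on disk or behind a reverse proxy) so each tile is only drawn once.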
I have a JavaFX application that contains a WebView among other JavaFX views in the same window. The WebView opens a URL to a Node.js webapp that consumes a lot of CPU resources.
Because of this resource consumption by the WebView, the other JavaFX views become sluggish.
For our application, we have a very powerful system whose processor provides 12 hardware threads.
So, what I need is to move the WebView processing to another thread so that it won't affect the behavior of the other JavaFX views. Is there any way to achieve this?
You cannot do this. JavaFX has a single application thread per JVM process (java invocation) and WebView API calls must be processed on the JavaFX application thread.
Note that internally the WebView uses WebKit, which may have its own threading implementation to support HTML5 features such as Web Workers, but that is all hidden from you when you program the WebView at the Java API level. So it won't make much of a difference unless you explicitly program your JavaScript to make use of it. If your Node.js webapp is optimized for other browsers, it will probably work fine in WebView; you will just need to benchmark, which I guess you did and found something wanting. You may want to expend some effort optimizing the Node.js webapp that you have.
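If you do go down the route of making the page's JavaScript use Web Workers, the pattern is roughly the following sketch; the file name and the workload are illustrative assumptions, not part of your app.

```js
// ---- worker.js ----
// Runs in a Web Worker: the expensive loop stays off the page's main thread,
// so the WebView's rendering and the rest of the UI remain responsive.
self.onmessage = (e) => {
  let total = 0;
  for (let i = 0; i < e.data.iterations; i++) total += Math.sqrt(i);
  self.postMessage(total);
};

// ---- page script ----
const worker = new Worker('worker.js');
worker.onmessage = (e) => console.log('result from worker:', e.data);
worker.postMessage({ iterations: 1e8 });
```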
The only way to achieve something similar to what you are requesting in your question is to launch a separate process, i.e., a new JavaFX application in a new JVM process, that contains your WebView instance.
We have an app that uses server-side rendering for SEO purposes using EJS templating.
I am well-versed with Node.js and know that it's probably possible to tap into the Node.js threadpool for asynchronous I/O for whatever purpose you want, whether it's a good idea or a bad idea. Currently I am wondering if it is possible to run ejs.render() or res.render() with a thread in the threadpool instead of the main thread in Node.js?
We are doing a lot of heavy computational lifting in the render functions and we definitely want that off the main thread, otherwise we will be paying $$$ for more servers.
Is it just the rendering that is concerning you? There are other template engines that should produce better results; and since template rendering should be an idempotent operation, you could additionally distribute it across a cluster.
V8 will compile your code to machine code and, if you're not hitting any deoptimizations or getting stalled by the garbage collector, I believe you should be in the neighborhood of your network I/O limits. I would definitely recommend trying other template engines, putting a caching HTTP reverse proxy in front, and running some benchmarks first.
EJS is known to be synchronous, and that's not going to change, so basically it's an inefficient rendering engine for Node.js since it blocks the JS thread whenever it renders a view, which degrades your overall throughput, especially if your rendering is CPU heavy.
You should definitely think about some other options. E.g. https://github.com/ericf/express-handlebars
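For what it's worth, wiring that engine into Express looks roughly like this; the sketch assumes a recent version of express-handlebars and a conventional views/ directory, not your actual project layout.

```js
const express = require('express');
const { engine } = require('express-handlebars'); // recent versions export `engine`

const app = express();
app.engine('handlebars', engine());   // register the .handlebars renderer
app.set('view engine', 'handlebars');
app.set('views', './views');          // assumed views directory

app.get('/', (req, res) => {
  // Renders views/home.handlebars inside the default layout (views/layouts/main.handlebars).
  res.render('home', { title: 'Hello' });
});

app.listen(3000);
```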
If you really have CPU-heavy computation in your webserver, then Node.js is definitely not the right tool for the job anyway. There are servers much better suited to multi-threading and parallel processing. You could just set up Node as a controller and forward your CPU-heavy requests to a backend service/server that can do the heavy lifting.
It would be helpful to see what kind of computation you are doing during render to provide a better answer.
Tapping into the thread pool (which is handled by libuv) would probably be a bad idea, but it is possible, of course... you just need some C++ skills and the uv_queue_work() function of the libuv library to schedule stuff on a worker thread.
I have experimented with building a scripting engine that is run in a forked process (Read on node's child process module here). I find that to be an attractive proposition for implementing rendering engines. Yes there are issues of passing parameters (post/get query strings, session status, etc) but they are easy to deal with, especially if you use the fork option (as opposed to exec or spawn). There are standard messaging methods to communicate between the child and parent.
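To make that concrete, here is a minimal sketch of the forked-renderer idea using child_process.fork and EJS; the file names, template path, and message shape are assumptions for illustration, not the exact engine I built.

```js
// ---- renderer.js (child process) ----
const ejs = require('ejs');
process.on('message', async (job) => {
  try {
    // renderFile returns a promise when no callback is given (EJS 3.x).
    const html = await ejs.renderFile(job.template, job.data);
    process.send({ id: job.id, html });
  } catch (err) {
    process.send({ id: job.id, error: err.message });
  }
});

// ---- server.js (parent process) ----
const { fork } = require('child_process');
const renderer = fork('./renderer.js'); // one long-lived child; fork-per-request also works

let nextId = 0;
function renderInChild(template, data) {
  return new Promise((resolve, reject) => {
    const id = nextId++;
    const onMessage = (msg) => {
      if (msg.id !== id) return;
      renderer.off('message', onMessage);
      msg.error ? reject(new Error(msg.error)) : resolve(msg.html);
    };
    renderer.on('message', onMessage);
    renderer.send({ id, template, data });
  });
}

// Hypothetical usage in an Express handler:
// app.get('/', async (req, res) => res.send(await renderInChild('views/home.ejs', { user })));
```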
The only overhead is spawning the additional instance of Node (the rendering engine itself). If you are doing extensive computation in the scripting engine, then this constant, one-time-per-request overhead of forking a new process will be minor compared to the time taken to render.
If EJS rendering blocks the main node thread, then that alone is sufficient reason NOT to use it if you are doing any significant computation during rendering.
I've been a ruby/php web application developer for quite some time and I'm used to the idea of horizontal scaling of server instances to handle more requests. Horizontal scaling - meaning separate instances of an application sitting behind a load-balancer that share nothing and are unaware of each other.
The main question I have is: since Node.js and its emphasis on evented I/O allow a single box running a Node.js server to handle 'thousands' of simultaneous requests, is load-balancing/horizontal scaling used to scale Node.js applications? Or is scaling a Node app limited to vertical scaling (throwing more RAM/processing power at the problem)?
My second question has to do with node.js horizontal scaling and websockets. I've seen quite a few Node.js 'chat' tutorials out there that make use of websockets.
(favorite: http://martinsikora.com/nodejs-and-websocket-simple-chat-tutorial)
Since websockets effectively keep an open line of communication between a browser and a server, would a horizontally scaled architecture typical of the PHP/Ruby world cause a chat application like the one explained in the link to break, since new websocket connection requests would be assigned to different processes/servers and there would be no single central resource tracking all connected clients?
Node.js supports horizontal scaling in much the way you describe via the built-in cluster module.
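For reference, a minimal sketch of the cluster module: the master forks one worker per CPU core and the workers share a listening port (the port number below is just an example).

```js
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) { // `isPrimary` on newer Node versions
  // Fork one worker per core and replace any worker that dies.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', () => cluster.fork());
} else {
  // Each worker runs its own event loop but shares the listening port with its siblings.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(3000);
}
```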
Regarding your second question about the use of websockets/socket.io in this environment, you have to use something like Redis to store shared state across multiple instances of your application as described here.
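As a sketch of that Redis-backed setup, assuming Socket.IO with its Redis adapter (@socket.io/redis-adapter) and a Redis instance on localhost, both of which are assumptions about your stack:

```js
// server.js — run one copy per process/server behind the load balancer.
const http = require('http');
const { Server } = require('socket.io');
const { createClient } = require('redis');
const { createAdapter } = require('@socket.io/redis-adapter');

async function main() {
  const httpServer = http.createServer();
  const io = new Server(httpServer);

  // Two Redis connections: one for publishing, one for subscribing.
  const pubClient = createClient({ url: 'redis://localhost:6379' });
  const subClient = pubClient.duplicate();
  await Promise.all([pubClient.connect(), subClient.connect()]);

  // The adapter relays events between all instances, so a message emitted on one
  // instance reaches sockets connected to the others.
  io.adapter(createAdapter(pubClient, subClient));

  io.on('connection', (socket) => {
    socket.on('chat', (msg) => io.emit('chat', msg));
  });

  httpServer.listen(3000);
}

main();
```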
Node.js's cluster functionality is limited to a single server with multiple processor cores; mainly, it leverages the number of cores in that server. I think the question is more about the scenario where we want to scale horizontally across multiple servers behind a load-balancer facade.
If you have Node.js instances spread across multiple servers (horizontal scaling), it will serve the same purpose, but you need to program the application properly to support this type of setup.