While playing with NodeJS, I came up with this question, since one can now put code on either the client side or the server side using the same language.
For example, in a small game app I can do the computation on the client side when the user interacts (via some onclick handler), or I can initiate a server request and do the computation there.
After more investigation, the terminology for my question turns out to be client-side vs. server-side rendering, and now there is plenty of material I can find.
It's basically a tradeoff that depends on the use case, server capacity, etc.
A good rule of thumb for deciding what is left to the client and what is left to the server is to leave as much as possible up to the client. While this often does not apply to very complex applications, most applications can follow it with no negative effects.
The logic here is that one dedicated computer (the client) can handle its own individual needs (such as images, video, gameplay) much more easily than one or a few servers can handle the needs of thousands of clients.
However, some things require an external application (the server). Good examples are sessions, leaderboards, user authentication, and social media integration.
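For instance, a small game can compute gameplay and scores entirely on the client and only call the server for something like a leaderboard. Here is a minimal sketch, assuming an Express backend; the route, payload, and in-memory storage are made up for illustration:

```typescript
import express from "express";

// Server: only what genuinely needs a central authority -- here a tiny
// leaderboard. Gameplay, rendering and score computation stay on the client.
const app = express();
app.use(express.json());

const leaderboard: { player: string; score: number }[] = [];

app.post("/scores", (req, res) => {
  const { player, score } = req.body as { player: string; score: number };
  leaderboard.push({ player, score });
  leaderboard.sort((a, b) => b.score - a.score);
  res.json(leaderboard.slice(0, 10)); // respond with the current top 10
});

app.listen(3000);

// Client (browser): the game loop runs locally; only the final score is sent.
//   fetch("/scores", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify({ player: "nick", score }),
//   });
```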
The only downside is that it may increase your application's initial load time. For small applications this may only be milliseconds; for larger applications that take more than 2-3 seconds to load, I would add a loading bar.
Cheers
-Nick
I have an application that we implemented with a kind of microservices-style architecture. The application consists of 6 services (6 Docker containers). I need to load test this application, and as I don't have much experience in the testing field, I'm not sure which method to use.
So far I have used the Gatling load testing tool. I recorded the test script by starting the recorder and wandering around my application to capture all the routes, and I went through most of them in that single recording in order to mimic a realistic user. My thinking was that this is how users normally use the application, so I can load test it at 1000 times that level by editing the number of threads/users.
Later I read about API testing, which focuses on the APIs themselves, hitting each API with a heavy load. So I'm confused about which testing method I should use. If we go for API testing, it will only tell us how far we can scale that particular API, right? (Not sure.)
Is there any issue with my method of load testing?
It depends entirely on what you hope to achieve...
If you're looking to validate that your entire application (code + production infrastructure) can handle a given load, then driving it as though users are going through the full website is the right path.
However, if you're looking to see how a particular API scales, or want to help developers explore the ramifications of changes, then you will probably want to drive that API directly, to avoid other limitations your system may have.
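As an illustration of the second approach, here is a minimal sketch that drives a single route directly with a fixed number of concurrent workers and reports a p95 latency. The endpoint, concurrency, and request counts are made-up values, and it assumes Node 18+ with the global fetch API; a dedicated tool such as Gatling would normally do this job.

```typescript
// Direct-API load sketch (hypothetical endpoint and numbers).
// Fires a fixed number of concurrent workers at one route and reports latency,
// which isolates that API from the rest of the site.

const TARGET = "http://localhost:8080/api/orders"; // hypothetical endpoint
const CONCURRENCY = 50;          // simulated users hitting this one API
const REQUESTS_PER_WORKER = 100;

async function worker(latencies: number[]): Promise<void> {
  for (let i = 0; i < REQUESTS_PER_WORKER; i++) {
    const start = Date.now();
    const res = await fetch(TARGET);
    await res.arrayBuffer();            // drain the body so timing is realistic
    latencies.push(Date.now() - start);
  }
}

async function main(): Promise<void> {
  const latencies: number[] = [];
  await Promise.all(Array.from({ length: CONCURRENCY }, () => worker(latencies)));
  latencies.sort((a, b) => a - b);
  const p95 = latencies[Math.floor(latencies.length * 0.95)];
  console.log(`requests: ${latencies.length}, p95 latency: ${p95} ms`);
}

main().catch(console.error);
```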
I'm making a website that will handle video upload and encoding. My idea was to have the main server handle both client requests and video processing, but from my understanding video encoding is CPU intensive, so I'm not sure whether it's a good idea to have one server do all the work or to have a separate server for the processing. I want to future-proof myself a bit in case I ever get high volumes of traffic, which would add more processing work for the server.
So my question is: is it overkill these days to have a separate server for video encoding, or am I going about this all wrong?
P.S. I'm using Node.js.
It would be overkill for someone just starting out. As you mentioned, you don't yet know how much traffic to expect, and it's difficult to project the growth of your web app, since it might grow gradually or take off immediately and hammer your server.
I would approach this in a way that lets you separate and queue the video processing work away from the main website. That allows you to scale the video processing portion of your app without having to run the entire website on the same machines.
With a queuing system you can also control how much video you're processing at any point in time: if one server can handle 5 encoding jobs at once, any new request has to wait until a previous one finishes, and so on. It's almost a microservice-style architecture.
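To make the queuing idea concrete, here is a minimal sketch of a worker that pulls jobs from an in-memory queue and runs at most five ffmpeg encodes at once. The queue, file paths, and concurrency limit are assumptions for illustration; in production the queue would typically be a persistent broker (Redis, RabbitMQ, etc.) shared between the web server and the encoding server(s).

```typescript
import { spawn } from "node:child_process";

// Hypothetical in-memory job queue; in production this would be a persistent
// queue shared by the web server and the encoding machines.
interface EncodeJob { input: string; output: string; }

const queue: EncodeJob[] = [];
const MAX_CONCURRENT = 5; // assumed capacity of one encoding server
let running = 0;

function enqueue(job: EncodeJob): void {
  queue.push(job);
  drain();
}

function drain(): void {
  while (running < MAX_CONCURRENT && queue.length > 0) {
    const job = queue.shift()!;
    running++;
    // Re-encode with ffmpeg; the web process never blocks on this work.
    const ffmpeg = spawn("ffmpeg", ["-i", job.input, "-c:v", "libx264", job.output]);
    ffmpeg.on("exit", (code) => {
      running--;
      console.log(`encoded ${job.input} -> ${job.output} (exit ${code})`);
      drain(); // pick up the next waiting job
    });
  }
}

// Usage: the upload handler only enqueues and returns immediately.
enqueue({ input: "/uploads/clip.mp4", output: "/encoded/clip.mp4" });
```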
Hope this gives you some ideas.
I've got a web app where I use the plain file system for my custom logs: a lot of small files. I didn't want to put them into a database, and that has worked quite well for me. But now I need to scale the app by putting a load balancer in front, so I also need to keep those logs in sync between servers. Is there any reliable solution for such cases? I know I could sync them by some OS-level means or by scripting, but I'm wondering whether there's a better solution for this scenario. Is this a case for MongoDB (or something more modern), or is it better to keep the logs on the file system as plain files?
This question is going to get you some heat, since you're essentially asking for opinions. I'll be frank, though, and won't argue with anyone, since this is just my opinion. With web apps, in my humble opinion, it's almost always better to keep your data in a database, both for scalability and for analytics. I know little about what your app does, but it's easier to write third-party tools that tell you how many of X or Y you have when the data is stored centrally in a database, since the app consuming that data can then run anywhere. Hope I helped a bit.
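If you do go the database route, a minimal sketch of writing log entries to a shared MongoDB instance instead of local files might look like this (the connection string, database, and collection names are assumptions); every server behind the load balancer then appends to the same central store:

```typescript
import { MongoClient } from "mongodb";
import { hostname } from "node:os";

// Assumed connection string, database and collection names.
const client = new MongoClient("mongodb://logs.internal:27017");
await client.connect(); // ESM top-level await

const logs = client.db("myapp").collection("logs");

export async function log(level: string, message: string, meta: object = {}): Promise<void> {
  await logs.insertOne({
    ts: new Date(),
    host: hostname(), // which server behind the load balancer wrote the entry
    level,
    message,
    ...meta,
  });
}

// Usage from any request handler on any server:
//   await log("info", "custom event", { userId: 42 });
```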
At the risk of this question being closed, I will ask anyway.
I have been looking at the different JavaScript frameworks, as most job roles seem to want:
Angular.js
Knockout.js
Node.js
While I can see that Angular.js and Knockout.js provide an MVC construct for the markup pages (though I'm still not sure which one is best to use), I cannot see what the case for Node.js is.
While I appreciate that Node.js is good for real-time comms, so is SignalR, and both can use long-polling.
At present I use SignalR to update images on my clients.
Is there any purpose in swapping this out for Node.js?
Like I said, this question could be voted to be closed as it may seem to be asking for an opinion, and that would be an answer in itself, as it would come down to developer choice. But is there a DEFINITIVE reason to use Node.js over SignalR?
thanks
One reason to use Node.js is code reuse: both the server and the client run the same language, so they can share part of the codebase, which potentially means less to write. With libraries like Browserify this can be made a lot more transparent, and writing the client side can become almost indistinguishable from server-side development. Another opportunity this opens up is combined client- and server-side rendering and MVC setups with, for example, rendr.js, so you can have both the fast initial load of server-side rendering and the responsiveness of client-side rendering. Whether any of this is useful naturally depends on what you are developing.
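As a small illustration of that code sharing, here is a sketch (the validation rule and port are made up) of a single function used by a Node HTTP server and, via Browserify or another bundler, by the browser code as well:

```typescript
import { createServer } from "node:http";

// Shared validation rule (hypothetical): with Browserify/webpack the same
// function can be bundled into the browser, so client and server agree.
export function isValidScore(score: number): boolean {
  return Number.isInteger(score) && score >= 0 && score <= 1000;
}

// Server side: validate incoming data with the shared rule.
createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const score = Number(url.searchParams.get("score"));
  res.end(isValidScore(score) ? "accepted" : "rejected");
}).listen(3000);

// Client side (in the browser bundle), the very same function:
//   if (isValidScore(Number(input.value))) { form.submit(); }
```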
This is not a holy-war question; I'm just asking which framework would be the better choice in terms of performance for my specific project.
I'm writing a REST API and choosing between Node.js and Sinatra. One method of the API will be used very frequently (± 100k requests per day).
This request is very simple: select one row from a database, make a few calculations, update one row in a database.
But, as I said, it will be called frequently and I need to choose a framework that will perform better in this case.
This is a simple app, and in this case I don't care which framework is easier or "better"; I'm just interested in the performance. I have already written a prototype in Sinatra, and the whole app is less than 150 lines of code.
I have read about Node.js but never built a real app with it.
Will Node.js be a significantly better choice for this project in terms of performance and scalability?
100k requests a day is roughly one request per second, assuming a flat distribution of requests over the day (100,000 / 86,400 s ≈ 1.2 requests per second). Both solutions will probably serve that without a problem. You're probably falling into the premature-optimisation trap.
That being said, JavaScript, because of its asynchronous nature, is significantly better at high-I/O workloads than Ruby (Sinatra is just a simple web framework; Node is just how you run JavaScript on a server).
Now, as for "what should I do": I suspect most people would tell you to use the prototype you already have working, and keep using it until it's no longer good enough, if that ever happens. Since it's such a small app, it shouldn't be a problem to rewrite it later in Node anyway!
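If that rewrite ever happens, a minimal sketch of such an endpoint in Node might look like the following. Express and node-postgres are used here only as an example stack, and the counters table, columns, and calculation are made up:

```typescript
import express from "express";
import { Pool } from "pg";

// Hypothetical table and columns; the endpoint mirrors the described workload:
// select one row, do a small calculation, update one row.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const app = express();

app.post("/counters/:id/bump", async (req, res) => {
  const { id } = req.params;
  const { rows } = await pool.query("SELECT value FROM counters WHERE id = $1", [id]);
  if (rows.length === 0) {
    res.status(404).end();
    return;
  }

  const next = rows[0].value * 2 + 1; // stand-in for "a few calculations"
  await pool.query("UPDATE counters SET value = $1 WHERE id = $2", [next, id]);
  res.json({ id, value: next });
});

app.listen(3000);
```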