Don't get me wrong, people; let me clarify.
I would like to ask if I can trust node.js. I know it's an amazing tool, but to be honest it's a really young platform. Should I start playing around with it (for production, not just experimental use), or should I wait till it "grows up"?
Does it work fine on Windows? At the beginning it was not supported. Are there any stress tests that actually prove it's safe and can be trusted?
It demands writing a lot of code by hand, things that on other platforms are done with just one line of code. I know you are going to say "that depends on your experience". I agree, but is it worth learning Node? What if its development stops? Again, I'm only asking because it's pretty young.
Which of Node's add-ons and modules can be trusted for safety and stability? There are so many out there.
Is it stable? And finally, what about Node's interoperability? Does it work on every platform/browser? What about smartphones and mobile devices?
Again, don't get me wrong, I am not criticizing. I am just concerned because it's pretty new, everybody is excited, and I haven't seen any cons or safety/stability issues mentioned anywhere.
Thanks
I don't understand why anyone would choose node.js for the back end: statically typed code is easier to maintain, and JavaScript is not the best (or even a good?) language.
That said, there are situations, where it makes a lot of sense to have the same code running in the browser and in the back end. When you run into one of these, you will know. And then Node works just fine. We've had it in production for months exposing its functionality as an internal web service to our back end application and haven't had any problems with it.
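For context, here is a minimal sketch of the kind of internal web service described above, using only Node's built-in http module; the /health route and the payload are made up for illustration, not the poster's actual API:

```js
// Minimal internal web service in plain Node (route and payload are illustrative).
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    return res.end(JSON.stringify({ status: 'ok' }));
  }
  res.writeHead(404);
  res.end();
}).listen(8080, () => console.log('internal service listening on :8080'));
```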
Is it the wrong approach to start writing a NodeJS application using a REST architecture, then adopt GraphQL or gRPC in some parts, or completely rewrite some or all of the application later?
The reason for doing this is previous experience with, and coding speed in, REST APIs. On the other hand, it's a fairly big microservices project and should support millions of users.
GraphQL is not going to help you scale; quite the opposite in many cases. GraphQL is an optimization (in some cases), mostly for developer productivity, but it comes with a complexity cost.
Generally I would suggest steering away from this optimization unless you have a clear understanding of what you're solving for. REST is a good 'default choice' because it's well understood, requires little tooling, and is pretty universal.
Once you are further into your project and find that you have (ideally measurable) challenges, you're in a much better place to decide whether to use a more specialized paradigm (gRPC/GraphQL) and why, but it doesn't sound like you're there yet.
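As a rough sketch of the 'default choice' (assuming Express as the routing layer; the route, data store, and field names are invented for the example), a plain REST endpoint like this stays useful even if a /graphql route is mounted alongside it later:

```js
const express = require('express');
const app = express();

// Hypothetical in-memory store standing in for a real database.
const users = { '1': { id: '1', name: 'Ada' } };

// Plain REST endpoint: well understood, little tooling needed.
app.get('/users/:id', (req, res) => {
  const user = users[req.params.id];
  if (!user) return res.status(404).json({ error: 'not found' });
  res.json(user);
});

// If GraphQL or gRPC later proves worthwhile, it can be added next to the
// existing REST routes instead of rewriting them.
app.listen(3000);
```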
I've got a web app where I use the plain file system for my custom logs: a lot of small files. I don't want to put them into a DB; that works quite well for me. But now I need to scale my app by putting a load balancer in front, so I also need to keep those logs in sync between servers. Is there any reliable solution for such cases? I know I could sync them by some OS means or by scripting, but I'm wondering if there is a better solution for such scenarios. Is this a case for MongoDB (or something more modern), or is it better to keep the logs on the file system as plain files?
This question is going to get you some heat since you're essentially asking for our opinion. I'll be frank, though, and won't argue with anyone, since it's just MY opinion. With web apps, in my humble opinion, it's always better to keep your data in a DB, both for scalability and for analytical research. I know little about what your app does, but it's easier to write third-party data apps that tell you how many of X or Y there are when the data is centrally stored in a DB, since the app that reads that data can run anywhere. I know I probably wasted time with an argument, but hey, I hope I helped a bit.
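As a hedged sketch of what centralizing those log writes could look like (using the official MongoDB Node.js driver; the connection string, database, collection, and field names are placeholders), every app server behind the load balancer writes to the same store instead of its local disk:

```js
// Minimal sketch: shared MongoDB collection instead of per-server log files.
const os = require('os');
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');

async function logEvent(entry) {
  await client.connect(); // safe to call more than once with the official driver
  await client.db('myapp').collection('logs').insertOne({
    ...entry,
    host: os.hostname(), // which server behind the load balancer wrote this
    at: new Date(),
  });
}

logEvent({ level: 'info', msg: 'user signed in' }).catch(console.error);
```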
At the risk of this question being closed, I will ask anyway.
I have been looking at the different JavaScript frameworks, as most job roles seem to want:
angular.js
Knockout.js
Node.js
Whilst I can see that Angular.js and Knockout.js provide an MVC construct for the markup pages (though I'm still not sure which one is best to use), I cannot see what the case is for node.js.
Whilst I appreciate that node.js is good for real-time comms, so is SignalR, as they both use long-polling.
At present I use SignalR to update images on my clients.
Is there any purpose in swapping this out for node.js?
Like I said, this question could be voted to be closed as it may seem to be asking for an opinion (and that would be an answer in itself, as it would come down to developer choice), but is there a DEFINITIVE reason to use node.js over SignalR?
Thanks
One reason to use node.js is code reuse: both the server and the client run the same language, so they can share part of the codebase, which potentially means less to write. With libraries like Browserify this process can be made much more transparent, and writing the client side can be almost indistinguishable from server-side development. Another opportunity this opens up is combined client- and server-side rendering plus MVC setups with, for example, rendr.js, so you can have both the fast load speeds of server-side rendering and the responsiveness of client-side rendering. Whether any of this will be useful naturally depends on what you are developing.
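To make the code-sharing point concrete, here is a small sketch (the module and function names are invented): a plain CommonJS module that Node can require directly and that Browserify can bundle for the browser, so the same logic runs on both sides.

```js
// validate.js - a shared CommonJS module (function name invented for the example)
function isValidScore(value) {
  // The same rule runs in the browser before submitting and on the server
  // before accepting the data.
  return Number.isInteger(value) && value >= 0 && value <= 100;
}

module.exports = { isValidScore };
```

```js
// server.js - Node requires the shared module directly...
const { isValidScore } = require('./validate');
console.log(isValidScore(87)); // true

// ...while the browser gets the same module by bundling it, e.g.:
//   browserify client.js -o bundle.js
```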
We have been using this server for almost a year now.
The last forum post was seen in November 2011.
The last server version was released on 28/03/12.
Just wondering if anyone knows what's happening inside the company?
Should we expect something or should we start looking for alternatives?
I did what you did not do: I used email to ask the question of the people able to answer it.
And they replied that:
the forum was closed because they could not cope with the number of accounts created daily to publish junk
the next version will be the most important ever made for G-WAN, with new features like a caching reverse proxy and an elastic load balancer, as well as system replacements like a wait-free memory allocator.
With regard to such developments, a 3-month period without publishing releases sounds reasonable.
More reasonable than assuming that such an 'inactivity period' means that "the project is dead".
Would you say that for other Web servers like Apache which have much larger release cycles?
You should always be expecting something from G-WAN. It's a great piece of software. Here's the other thing, too: G-WAN was expertly engineered. That doesn't mean there are no bugs in it, or that no features remain to be implemented, but G-WAN is incredibly tight.
It has lean code, it does what it's supposed to do very well, and it is built for the developer to add the functionality that hasn't been put in there yet.
That's the beauty of it, or one facet of the beauty.
I'm developing a game with an online mode, but it's open source (on SourceForge), so anyone can download the code, hack any checks, and play against the official server with a hacked client.
I've been thinking about MD5-checking the EXE file, but anyone can calculate the genuine md5sum and send it to the server, bypassing that runtime check.
Is there any method to ensure that the client is not modified? I know I must use server-side checks because everything can be hacked. Another option is not committing some small part of the code and releasing EXE files compiled only on my computer, which has all the files, but I think that goes against SourceForge's rules.
As you stated, you need to check everything on the server.
Regardless of whether you release source code (remember Reflector!), you must never trust the client for anything (including its own integrity).
Note, however, that (ideally) you don't need to make cheating impossible; you just need to make it harder to accomplish a task by cheating than it is to accomplish that task legitimately.
Rational people will not cheat to accomplish something if they can do it more easily without cheating.
However, some people will cheat for the challenge of the hack, even if it's harder than doing it normally.
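As a hedged illustration of "never trust the client" (the movement rule and message shape here are invented, and the original game may not be written in JavaScript), the server re-validates every reported action against its own authoritative state instead of relying on a checksum of the client:

```js
// Server-side sanity check for a client-reported move (rules are illustrative).
// Even a hacked client cannot exceed what the server itself allows.
const MAX_SPEED = 5; // maximum distance a player may move per tick

function isMoveAllowed(player, move) {
  const dx = move.x - player.x;
  const dy = move.y - player.y;
  return Math.hypot(dx, dy) <= MAX_SPEED;
}

function applyMove(player, move) {
  if (!isMoveAllowed(player, move)) {
    return player; // reject: keep the authoritative server state
  }
  return { ...player, x: move.x, y: move.y };
}

console.log(applyMove({ x: 0, y: 0 }, { x: 3, y: 4 }));   // accepted (distance 5)
console.log(applyMove({ x: 0, y: 0 }, { x: 30, y: 40 })); // rejected
```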