My company has its own Apache Tomcat server, running MySQL and PHP websites on it.
Can I install Node.js and MongoDB on the same machine without affecting the other projects?
Yes, you can install them. But before doing that, you have to take care of a few things:
How much physical memory do you have?
How much RAM is free for use on average?
How many CPU cores do you have free to use?
How much CPU utilization is currently happening?
Then you can work out how many CPU cores you need for Node.js; if you have a simple app, then one core. And check the same for MongoDB: how much data do you need to store?
No one can tell without knowing these parameters. You have to look at them and judge whether you are good to go.
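If it helps, you can read most of those numbers from Node itself with the built-in os module; a quick sketch:

// print core count, total/free RAM and load average using Node's os module
var os = require('os');
console.log('CPU cores :', os.cpus().length);
console.log('Total RAM :', Math.round(os.totalmem() / 1048576) + ' MB');
console.log('Free RAM  :', Math.round(os.freemem() / 1048576) + ' MB');
console.log('Load avg  :', os.loadavg().map(function (n) { return n.toFixed(2); }).join(' / '));

Run it on the Tomcat box itself to see what headroom you actually have.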
I would personally recommend that your company start using virtualization technology, e.g. Hyper-V or VMware ESXi. Then you can have your primary production server, with all of your clients' websites and/or applications, running on one virtual machine/appliance. Then make a copy of that server onto a secondary VM. Now you can safely test new software and/or projects. Additionally, VMs make full system backups easy, so any major system failure is easily recoverable.
I'm a teacher in a training center for web development. We're teaching PHP and Node.js for the backend. In this context, it's very cool to allow our students to deploy small web servers. Unfortunately paying for one VPS per student ain't cheap, and free web hosting solutions are usually too limited.
That's why we go the shared hosting route with unused computers, Raspberry Pis or small VPSs and create an account for each student.
With PHP it's easy. People have been doing shared hosting with PHP for decades, and there's basically a complete feature in Apache to do it super easily (per-user web directories). We just add a shared database, script the initialization of users and we're ready to go.
For node.js... it's another story. No one seems to care about shared hosting in that community; everyone just pops a new VPS for each application or makes a manual configuration with root access on a custom server.
To allow a somewhat secure solution for automatic shared hosting with Node.js, I would need some kind of application server that could:
Read through user directories (or multiple directories based on a pattern) for some source JavaScript files to execute.
Launch different applications as different users (for security purposes).
Kill and restart applications depending on incoming requests and RAM usage (you can't simultaneously launch 30 node.js apps that each consume 30 MB of RAM minimum on a VPS with 512 MB of RAM, no you can't).
Monitor the node.js applications in order to avoid crashes if one of them does something bad (purposely or not, we're talking about web dev students... :) )
Theoretically, I know some web servers that could potentially be configured to do that (uWsgi and Passenger are the first that come to mind). But I fear I could spend multiple hours or days trying to alter their default behavior, only to realize that I've set up a crappy solution that will crash after two days in production.
So... does anyone have some kind of solution for this use case? I'm open to anything, even Docker-based solutions. Just remember the three magic words: security, security and security. We just can't allow anyone to have a root account on a server we own, or to crash it by consuming too much RAM or CPU.
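To make the requirement concrete, here is a rough sketch of the kind of supervisor I imagine. Everything specific in it is made up for illustration (the uid, path and memory cap); it relies on child_process.spawn's uid/gid options to drop privileges (Linux, must run as root) and reads resident memory from /proc:

// supervisor.js - hypothetical sketch: run each student's app as its own
// Unix user and kill/restart it when it misbehaves. Linux-only, run as root.
var spawn = require('child_process').spawn;
var fs = require('fs');

var MAX_RSS_MB = 64; // made-up per-app memory cap

function rssMb(pid) {
  try {
    var status = fs.readFileSync('/proc/' + pid + '/status', 'utf8');
    var m = status.match(/VmRSS:\s+(\d+) kB/);
    return m ? Number(m[1]) / 1024 : 0;
  } catch (e) {
    return 0; // process already gone
  }
}

function launch(uid, gid, script) {
  var child = spawn(process.execPath, [script], { uid: uid, gid: gid, stdio: 'ignore' });
  var watchdog = setInterval(function () {
    if (rssMb(child.pid) > MAX_RSS_MB) child.kill('SIGKILL'); // over budget
  }, 5000);
  child.on('exit', function () {
    clearInterval(watchdog);
    setTimeout(function () { launch(uid, gid, script); }, 10000); // throttled restart
  });
}

launch(2001, 2001, '/home/student1/app/index.js'); // hypothetical student account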
Thanks in advance for your answers.
I am running a node app on a Digital Ocean cloud server, and the app merely services API requests. All client-side assets are served by a CDN, and the DB is accessed remotely, rather than stored on the server instance itself.
I have the choice of more vCPUs or more RAM. I have no idea what that means in any way, so any feedback is a great help.
A single node.js server runs your Javascript on only one CPU, so having more CPUs doesn't make your Javascript run any faster unless you cluster your app and run multiple node.js processes sharing its load, or unless there are other processes on the same server that your server uses.
Having more RAM (memory) will only improve things if you actually need more RAM. That depends entirely upon what the memory usage profile of your app is and how much RAM you already have available. You would probably already know if you were running out of RAM, because you'd either see drastic slow-downs when the OS starts page swapping, or your process would crash when out of memory.
So, in order to know which would benefit you more, you really need more data on how your existing app is performing (whether it ever gets bogged down with CPU-intensive operations, and how much RAM it uses compared to how much you have available). It is quite possible that neither will actually matter to you; it totally depends upon the usage profile of your server process.
If you have no more data than this and have to make a choice, choose the vCPUs because there are some circumstances where it might help you (and gives you the option to go to clustering in the future if needed) whereas adding more RAM when you aren't even using what you already have won't help you at all.
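One cheap way to gather that data is to have the app log its own memory figures periodically; a minimal sketch using process.memoryUsage:

// log this process's memory profile every 30 seconds
setInterval(function () {
  var m = process.memoryUsage();
  console.log('rss: ' + Math.round(m.rss / 1048576) + ' MB, ' +
              'heapUsed: ' + Math.round(m.heapUsed / 1048576) + ' MB, ' +
              'heapTotal: ' + Math.round(m.heapTotal / 1048576) + ' MB');
}, 30000);

If rss stays far below the RAM you already have, more RAM won't buy you anything.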
I am using pm2 in my app (Ubuntu 14.04, 2 CPUs, 4 GB RAM).
I want to load-test or stress-test the load balancing between cluster workers (I hope I'm saying it right) to check its effectiveness. What is the best way to do so?
I am using the latest Node.js (0.12.7) and the latest pm2 version; I have 2 cluster workers going, one for each CPU.
Now I need to check response times when my server is at its limits, and even see when it crashes and why (it's a staging server, so I don't mind).
I know 'siege' and have used it a bit, but it's not the one I want; I want something that can push the server to its limits...
Any suggestions?
You can try http://loadme.socialtalents.com
In case the free tier is not enough, you can request a quota increase.
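If you'd rather push the box to its limits yourself, a crude load generator built on core Node is enough to find the ceiling; a sketch, where the target URL and concurrency are placeholders you'd tune:

// blast.js - crude load generator: keep N requests in flight, report latency
var http = require('http');

var TARGET = 'http://127.0.0.1:3000/'; // placeholder: your staging server
var CONCURRENCY = 200;                 // placeholder: raise until it hurts
http.globalAgent.maxSockets = CONCURRENCY;
var done = 0;
var totalMs = 0;

function fire() {
  var start = Date.now();
  http.get(TARGET, function (res) {
    res.resume(); // drain the body
    res.on('end', function () {
      totalMs += Date.now() - start;
      done++;
      fire(); // keep the connection slot busy
    });
  }).on('error', fire); // on error, just retry
}

for (var i = 0; i < CONCURRENCY; i++) fire();

setInterval(function () {
  console.log(done + ' responses/s, avg ' + (totalMs / (done || 1)).toFixed(1) + ' ms');
  done = 0;
  totalMs = 0;
}, 1000);

Run it from a second machine if you can, so the generator doesn't compete with the server for CPU.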
I have a big enterprise Java application running on several machines under Tomcat 7.
There are various performance problems, such as slow responses, server hangs, etc.
I want to try playing with different connector parameters like maxThreads, maxConnections, acceptCount and so on.
But before changing them, how can I check that, for example, I am running out of connections and need to increase that limit? Or anything else, like whether acceptCount should be increased?
Typically, Apache Tomcat performance issues lie in the underlying JVM configuration; in my experience those are mainly the size of the PermGen and other memory settings. I have been able to troubleshoot quite a few of them using VisualVM, which visualizes a lot of the JVM memory operations. I would also highly recommend JMeter.
IMHO, maxThreads and other Tomcat-specific parameters have rarely been the source of application performance issues; it's the JVM settings where most issues are.
Start with a minimum of these settings:
-Xms1024m -Xmx2048m -XX:MaxPermSize=1024m
I would recommend finding the problem before you start "fixing" things.
There are several applications to monitor your servers and check where the problems are. You can try AppDynamics, New Relic, Ruxit, or any other application monitoring product. (Some have free tiers, which come in handy.)
Then you search for your bottlenecks; they can be anywhere (server, database, network, JVM, ...) depending on your application and your architecture.
And once you find the problem, you can start fixing it.
Good luck!
OK, so I have an idea I want to pursue, but before I do I need to understand a few things fully.
Firstly, the way I think I'm going to go ahead with this system is to have 3 servers, which are described below:
The first server will be my web front end; this is the server that will be listening for connections and responding to clients. This server will have 8 cores and 16 GB of RAM.
The second server will be the database server, pretty self-explanatory really: connect to the host and set/get data.
The third server will be my storage server; this will be where downloadable files are stored.
My first question is:
On my front end server, I have 8 cores, what's the best way to scale node so that the load is distributed across the cores?
My second question is:
Is there a system out there I can drop into my application framework that will allow me to talk to the other cores and pass messages around to save I/O?
And my final question:
Is there any system I can use to help move content from my storage server to the request on the front-end server with as little overhead as possible? Speed is a concern here, as we would have 500+ clients downloading and uploading concurrently at peak times.
I have finally convinced my employer that node.js is extremely fast and it's the latest in programming technology, and that we should invest in a platform for our Intranet system, but he has requested detailed documentation on how this could be scaled across the hardware we currently have available.
On my front end server, I have 8 cores, what's the best way to scale node so that the load is distributed across the cores?
Take a look at the node.js cluster module, which is a multi-core server manager.
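A minimal sketch of how it's used (the port is just an example): the master forks one worker per core, and the workers share the listening socket.

// fork one worker per core; dead workers are replaced
var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  os.cpus().forEach(function () { cluster.fork(); });
  cluster.on('exit', function () { cluster.fork(); }); // respawn on crash
} else {
  http.createServer(function (req, res) {
    res.end('served by worker ' + process.pid + '\n');
  }).listen(8080); // example port
}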
Firstly, I wouldn't describe the setup you propose as 'scaling'; it's more like 'spreading'. You only have one app server serving the requests. If you add more app servers in the future, then you will have a scaling problem.
I understand that node.js is single-threaded, which implies that it can only use a single core. How to scale it, or whether you can, is not my area of expertise, so I'll leave that part to someone else.
I would suggest NFS mounting a directory on the storage server to the app server. NFS has relatively low overhead. Then you can access the files as if they were local.
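With such a mount in place, a stream-based handler keeps per-request overhead (and memory use) low even with hundreds of concurrent downloads; a sketch, with a hypothetical mount point:

// stream files from an NFS-mounted directory without buffering them in RAM
var http = require('http');
var fs = require('fs');
var path = require('path');

var STORAGE = '/mnt/storage'; // hypothetical NFS mount point

http.createServer(function (req, res) {
  // path.normalize clamps "../" at the root, so requests can't escape STORAGE
  var file = path.join(STORAGE, path.normalize(req.url));
  var stream = fs.createReadStream(file);
  stream.on('error', function () {
    res.statusCode = 404;
    res.end('not found');
  });
  stream.pipe(res); // backpressure-aware: roughly constant memory per download
}).listen(8080); // example port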
Concerning your first question: use cluster (we already use it in a production system, works like a charm).
When it comes to worker messaging, I cannot really help you out much, but your best bet is cluster too. Maybe there will be some functionality that provides "inter-core" messaging across all cluster workers in the future (I don't know the roadmap of cluster, but it seems like a good idea).
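In the meantime, the master can relay messages between workers over cluster's built-in IPC channel; a minimal sketch:

// relay a message from one worker to all workers through the master
var cluster = require('cluster');

if (cluster.isMaster) {
  for (var i = 0; i < 2; i++) cluster.fork();
  Object.keys(cluster.workers).forEach(function (id) {
    cluster.workers[id].on('message', function (msg) {
      // fan the message out to every worker (sender included)
      Object.keys(cluster.workers).forEach(function (other) {
        cluster.workers[other].send(msg);
      });
    });
  });
} else {
  process.on('message', function (msg) {
    console.log('worker ' + process.pid + ' got', msg);
  });
  if (cluster.worker.id === 1) process.send({ hello: 'from worker 1' });
}

Note this only spans workers on one machine; across machines you'd need something like Redis pub/sub or a message queue.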
For your third requirement, I'd use a low-overhead protocol like NFS or (if you can go really crazy when it comes to infrastructure) a high-speed SAN backend.
One more piece of advice: use MongoDB as your database backend. You can start with low-end hardware and scale your database instance up with ease using MongoDB's sharding/replica set features (if that is some kind of requirement).