Nginx country-based routing with socket.io - node.js

I am trying to run a Node.js + MongoDB application with nginx as a reverse proxy, on DigitalOcean and mLab.
My website will be used from the USA, India, the UK and potentially some Asian countries.
I have created my droplet on DigitalOcean at the Bangalore, India site. Config: Ubuntu 14.x, 2 GB RAM, 40 GB disk.
I was very surprised to notice that the performance of the site when accessed from the USA is terrible: it takes around 25 seconds to load. However, the same URL can be accessed within 6 seconds from Mumbai, India.
A lot of my files are already minified, images are compressed, etc.
So what are my options at this time? I can try to set up subdomains and have nginx do country-based routing to different servers, but what impact will that have on socket.io?
Will I have to have nginx on each individual server as well, or just on the routing server? What about nginx caching? At which site would I create the server that does the routing?
Any examples will be greatly appreciated! Thanks in advance.
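For reference, here is a minimal sketch of what country-based routing can look like in nginx, assuming nginx is built with the GeoIP module and a MaxMind country database is installed; the upstream addresses and country list below are placeholders. The part that matters for socket.io is forwarding the WebSocket upgrade headers through the proxy:

geoip_country /usr/share/GeoIP/GeoIP.dat;

# hypothetical regional backends
upstream app_in { server 10.0.0.10:3000; }
upstream app_us { server 10.0.1.10:3000; }

# pick an upstream by visitor country code
map $geoip_country_code $pool {
    default app_in;   # Bangalore droplet
    US      app_us;
    CA      app_us;
    GB      app_us;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://$pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # socket.io needs these for the WebSocket upgrade to survive the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

Note that with this layout every request (and every socket.io connection) still travels through the routing box, so US visitors pay the round trip to Bangalore anyway; redirecting visitors to regional subdomains instead, each running its own nginx + node stack, avoids that. With multiple node processes behind one nginx, socket.io also needs sticky sessions (e.g. ip_hash in the upstream block).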

I ended up using Cloudflare as a CDN and saw a significant improvement in speed.

Related

Node.js deployment since all my servers are very old (any alternatives?)

All the servers I have are Windows servers which are very old.
What would be an alternative way to deploy the Node.js tool I have developed?
Are there any free servers online, or a local option, that would make it available to everyone?
Please help.
When you are developing, it's typical to run your Node apps locally on your own computer and use localhost to connect to them. But to make them available to the wider public you really need to get a server. The cheapest options are Virtual Private Servers (VPS).

The only free option I know of is DigitalOcean when you register with a coupon link that gives you a 10 USD credit to use on their servers, which is two months of their cheapest server - actually quite nice: 512 MB RAM, 20 GB SSD disk and 1000 GB monthly transfer. Billing is per hour, so if you only want a server once in a while for some tests, that 10 USD credit can potentially last you for years.

There are no limits on the domains you host, and you can choose an operating system from Ubuntu, Debian, Fedora, CoreOS, CentOS and FreeBSD. You can choose a location in New York, San Francisco, Toronto, London, Frankfurt, Singapore, Bangalore or Amsterdam - whatever place is closest to you or your users.

Host my own node.js webapp or deploy using a hosting service

I have a web app that's like a bulletin board where users upload their images and such. Is it best if I host this web app on my own hardware or use a hosting site?
The reason I'm considering my own hardware is that it's my hardware, which will be simple for me and something I know best. I'll also have file system access and can see what the users are uploading; most hosting sites don't offer that.
You might want to base your choice on your favourite operating system for servers. If it's Windows, I would recommend self-hosting, as most VPSes with Windows cost way more than Linux servers. Overall, depending on the resources you need, the average cost of a Linux server in the cloud is around 2.50 ~ 10.00 per month, and with that you have a guaranteed fixed IP address.
Soooo the question is: is all the trouble of setting up your server + maintaining it + managing a fixed IP with your ISP (plus ISP charges *) worth at worst 10 bucks per month? Your choice!
Here are a few services you might want to consider for a VPS:
https://aws.amazon.com/
https://www.digitalocean.com/
https://www.heroku.com/ (this one has free hosting if you don't mind the lack of 24/7 uptime)
and there are so many others.
* As an example, my ISP charges around 100 CAD more per month just to consider my IP static and give me a commercial profile for my internet connection.
You can host on a VPS. It can run any operating system you want, and you control it. It is much cheaper than dedicated hosting on your own hardware.
http://www.google.com/search?q=vps
If you want to host Node, you have to manually install Node.js and the required npm modules.
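For example, on an Ubuntu VPS that manual setup looks roughly like this (a sketch; the NodeSource repository is one common way to get current Node.js packages, and the repository URL for your app is a placeholder):

# install Node.js and npm
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt-get install -y nodejs

# fetch your app and install its npm dependencies
git clone https://example.com/you/your-app.git
cd your-app && npm install
node server.js    # or keep it running with a process manager such as pm2 or forever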

Best practices for shared image folder on a Linux cluster?

I'm building a web app that will scale into a Linux cluster with Tomcat and nginx. There will be one nginx web server load-balancing multiple Tomcat app servers, with a database server behind them. All running on CentOS 6.
The app involves users uploading photos. I plan to keep all the images on the file system of the front nginx box and have pointers to them stored in the database. This way nginx can serve them full speed without involving the app servers.
The app resizes the image in the browser before uploading. So file size will not be too extreme.
What is the most efficient/reliable way of writing the images from the app servers to the nginx front-end server? I can think of several ways I could do it, but I suspect some kind of network file system would be best.
What are current best practices?
Assuming you do not use a CMS (Content Management System), you could use the following options:
If you have only one front-end web server, the suggestion would be to store the files locally on the web server, in a local Unix filesystem.
If you have multiple web servers, you could store the files on a SAN or NAS shared network device. This way you would not need to synchronize the files across the servers. Make sure that the shared resource is redundant; otherwise, if it goes down, your site will be down.
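As a concrete example of the shared-device option, an NFS export is the simplest form of it (hostnames and paths below are placeholders): every Tomcat app server writes uploads into the same mount, and the nginx box serves that same directory directly.

# on each app server and on the nginx box (CentOS 6)
sudo yum install -y nfs-utils
sudo mkdir -p /var/www/images
sudo mount -t nfs storage.internal:/export/images /var/www/images

# persist the mount across reboots with a line like this in /etc/fstab:
# storage.internal:/export/images  /var/www/images  nfs  defaults,_netdev  0  0

nginx can then serve the uploads with a plain location /images/ { root /var/www; } block, so the app servers never touch read traffic.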

How do I increase concurrent requests in Digital Ocean Droplet?

I have optimized a Digital Ocean Droplet (Ubuntu 12.04) using Varnish in front of Apache to serve thousands of web requests per second. When I ssh into my droplet and run
ab -n 100 -c 100 FULL_URL
I get 2000-3000 requests per second, even when I raise the concurrency. Varnish is working great.
However, when I run the same ApacheBench command from my local computer, I get all kinds of timeouts whenever concurrency goes above 20 or 30.
Why can my site handle hundreds of concurrent requests locally but only 20 or 30 through the Internet?
I have followed the guidelines of this blog: http://www.lognormal.com/blog/2012/09/27/linux-tcpip-tuning/ thinking that the problem is with the OS TCP/IP settings, but after a droplet reboot there is no change.
Can someone help me?
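One thing worth checking: the tuning that guide describes is applied with sysctl, and runtime changes made with sysctl -w are silently lost on reboot unless they are also written to /etc/sysctl.conf. A sketch (the values are illustrative, not recommendations):

# runtime-only change - disappears after a reboot
sudo sysctl -w net.core.somaxconn=1024

# to persist, append the settings to /etc/sysctl.conf and reload
echo "net.core.somaxconn = 1024" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.ip_local_port_range = 1024 65535" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

Also note that timeouts at a concurrency of 20-30 from a home connection often point at the client side or the network path rather than the droplet; running ab from a second cloud VM is a cheap way to isolate that.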

linode vs amazon ec2 vs heroku for project with amazon s3

I have a project in Ruby on Rails 3.1 like Flickr, Tumblr, Pinterest, etc., with a lot of pictures to maintain.
My project uses MongoDB as its database.
I'm using Amazon S3 to host the pictures.
I want to know which hosting combination gives the best quality/price: Linode + S3, Amazon EC2 + S3, or Heroku + Amazon S3.
I need room to scale because the project is growing fast.
Any other suggestion is welcome :D.
After much reading I am still not clear.
If you want to save the most money then I'd go with linode (Amazon ec2 might cost about the same though). With linode for $19.95/month you get 20gb of space where you can host your website and database. If you're using s3 then you can use most of the 20gb for your database. Not only that but on linode the addons that would cost you money on Heroku will be free (solr/sphinx, background jobs, email, etc). Compare this to Heroku where a 20gb shared database alone costs $15/month. Then you need to pay monthly if you want solr, background jobs, etc.
On linode it's free because you run and maintain your own virtual private server (VPS). Which brings me to one of the most important things to consider here, linode will save you money but it will cost you more time since you have to manage everything yourself.
For what it's worth, I am currently in the process of moving much of my hosting over from Heroku to Linode because of the costs involved and because as a rails developer I feel it's important to understand how to manage my own webserver.
There are a lot of other advantages to having your own VPS though. For example, hosting multiple websites, creating multiple databases used by other web apps, running your own email server, etc.
Update: April 2014
An even cheaper alternative to linode is digitalocean. Their cheapest plan is currently $5/month.
Just from a performance point of view, you'll get better performance if you use EC2 or Heroku, since both are part of the Amazon infrastructure (Heroku runs on EC2).
But that only benefits you if your pictures are processed by your application server. If your pictures are served directly to the client from S3, using Linode will not have any impact :)
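To make the distinction concrete (the bucket name and key here are hypothetical): a picture referenced directly by its S3 URL, e.g.

https://my-bucket.s3.amazonaws.com/uploads/photo-123.jpg

is fetched by the browser straight from Amazon and never touches your app server, so the server's location is irrelevant for it. Only a route that reads the image through your app (to resize it, check permissions, etc.) gains anything from sitting close to S3.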
