Server (failover) redundancy solution for Heroku app - node.js

I currently have a Node.js app deployed on a free web dyno on Heroku. As I plan to take it to production, I need to think about a redundancy and failover solution at a reasonable cost.
When I ran the "Production Check" on the Heroku Dashboard, it gave me a list of things to do to make the app production-ready. One of them is "Dyno redundancy": I should have at least 2 web dynos running for failover. Does that mean I should upgrade my free dyno to Hobby or Standard 1X, and do I also need two dynos of the same type, e.g. two Hobby dynos or two Standard 1X dynos?
How does Heroku handle failover from one Dyno to another one?
Thanks!

Heroku shares traffic between all available dynos, distributing requests using a random assignment algorithm. So all your dynos will always be serving incoming traffic.
This provides redundancy, not failover. If one dyno is choking on a very slow request, the app will still be available via the other dynos.
Failover is different. In the case of an application failure (say, the database is inaccessible) Heroku's router offers little help. To deal with more industrial workloads, you could use Amazon Route 53's DNS-level failover, which runs a health check against the backend and will reroute the domain name in the case of a Heroku crash.
However, for many use cases it is probably enough to simply serve a friendly, customised HTTP 503 error page, which you can configure in Heroku, to keep users happy during an outage.

Related

How to implement cache across multiple dynos

Let's say I have a Node/Express app hosted on Heroku. I have implemented horizontal scaling by running the server across multiple dynos.
I have a CMS panel to control the app's content; it alters the DB to add content, which is then presented to end users through the server API.
What I want is to add a caching mechanism to the back-end API to make fewer trips to the DB, because the app gets huge traffic from users during the day.
Initially this could be done by setting up a simple cache using the node-cache package in each server instance (dyno). But how do I flush the cache through the CMS?
If I send a request to flush the cache, it will only reach a single dyno each time, so the data isn't consistent across all dynos.
How can I trigger a cache flush on all dynos, or is there a better way to handle caching?
Instead of a per-dyno cache, you can use something outside of your dynos entirely. Heroku offers several add-ons for common products. I've never used node-cache, but it is described like this:
A simple caching module that has set, get and delete methods and works a little bit like memcached.
That suggests that Memcached might be a good choice. The Memcached Cloud addon has a free 30MB tier and the MemCachier addon has a free 25MB tier.
In either case, or if you choose to host your cache elsewhere, or even if you choose another tool entirely, you would then connect each of your dynos to the same cache. This has several benefits:
Expiring items would then impact all dynos
Once an item is cached via one dyno it is already in the cache for other dynos
Cache content survives dyno restarts, which happen at least daily, so you'll have fewer misses
Etc.
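To make the shared-cache idea concrete, here is a minimal cache-aside sketch. The `store` parameter stands in for any async key/value client (e.g. memjs for Memcached, node-redis for Redis); a Map-backed stub is included so the sketch is self-contained, and all names are illustrative:

```javascript
// Cache-aside against a store shared by all dynos.
function makeSharedCache(store) {
  return {
    // Read-through: hit the shared cache first, fall back to the DB loader.
    async get(key, loadFromDb) {
      const hit = await store.get(key);
      if (hit !== undefined && hit !== null) return hit;
      const fresh = await loadFromDb(key);
      await store.set(key, fresh);
      return fresh;
    },
    // Called from the CMS: because the store is shared, a single delete
    // invalidates the entry for every dyno at once.
    async flush(key) {
      await store.delete(key);
    },
  };
}

// Stub standing in for a Memcached/Redis client, so the sketch runs alone:
function mapStore(m = new Map()) {
  return {
    get: async (k) => m.get(k),
    set: async (k, v) => { m.set(k, v); },
    delete: async (k) => { m.delete(k); },
  };
}
```

With a real client plugged in, a single `flush` call from the CMS (hitting whichever dyno) clears the entry for the whole fleet, which solves the inconsistency problem in the question.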

microservice architecture in node js

If I use different ports for different services and share the same database, can I call it a microservice architecture? For example, running the orders service on port 8080 and the checkout service on port 8081.
Unfortunately no, it's far more complex than that, and I suggest investing some hours reading about it from people who can explain it a lot better than me. But long story short, at the very least you should have:
independent applications, each one with its own database (and of course each app on a different port)
you most likely want to containerize them with Docker, and I personally suggest orchestrating the containers with Kubernetes; in that case you will need a deployment file and a ClusterIP service for each application (read more on 'pods', 'deployments', etc. in Kubernetes)
the independent services should not reach each other directly; instead you use what is called an event bus. You'll want to read more about that
you can sync each service's database through the event bus. You'll duplicate some data, but this is acceptable because storage is cheap and service independence is far more important
you'll eventually need a load balancer to access your app, of course
At least this is what comes to mind first. You can use it as a starting point and look into this more, but as a heads-up, it will not be easy. Data consistency across the services is usually hell.

How to keep my heroku app working during new version deployment?

I've noticed that my app stops working briefly (all processes are killed) during the deployment of a new build. This causes the failure of many critical requests that are crucial to the life of the whole application.
Any suggestions for keeping my app up no matter what, especially during a new version deployment?
Don't use free tier dynos for production apps. The free tier only allows one dyno, so you can't have a new one spinning up while the other acts as your production server; hence the app shuts down during new deploys. On a paid tier the old dyno stays up until the new dyno is ready, then a swap is done.
If you aren't using free tier dynos, this is rather interesting: unless you did a "restart all dynos", that shouldn't happen. You could then try running multiple dynos; typically Heroku shuts them down sequentially, so one remains running.
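If downtime during deploys persists even on paid dynos, it is worth checking Heroku's Preboot feature, which starts the new dynos and routes traffic to them before the old ones are stopped. It is enabled per app from the CLI (replace `your-app` with your app's name):

```shell
heroku features:enable preboot --app your-app
```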

How to warm up a Heroku Node.js server?

Heroku reboots servers every day. After the reboot, my node server takes around 20 seconds to load a page for the first time. Is there a way to prevent this?
EDIT: You guys seem to be misunderstanding the situation. In Heroku, even production servers must be restarted daily. This is not the same as a free server sleeping. This question is aimed more at preventing lazy-loading and pre-establishing connection pools to databases.
Old question, but in case others stumble upon it like I did:
You can use Heroku's Preboot feature:
Preboot changes the standard dyno start behavior for web dynos. Instead of stopping the existing set of web dynos before starting the new ones, preboot ensures that the new web dynos are started (and receive traffic) before the existing ones are terminated. This can contribute to zero downtime deployments.
You could also combine it with a warmup script like the one described in this Heroku post.
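As a sketch of the warmup idea: do the expensive initialization eagerly at boot and only bind the port once it finishes, so the first request after a daily restart doesn't pay the cold-start cost. `connectPool` and `primeCache` are illustrative placeholders for your real setup, not an actual API:

```javascript
// Eager warm-up at boot: open the DB pool and pre-load hot data *before*
// the server starts listening, instead of lazily on the first request.
async function connectPool() {
  // Real app: create a pg/redis pool and verify it with a test query.
  return { query: async () => 'ok' };
}

async function primeCache(pool) {
  // Real app: pre-load frequently requested data into memory or a cache.
  await pool.query();
  return { ready: true };
}

async function warmUp() {
  const pool = await connectPool();
  const cache = await primeCache(pool);
  return { pool, cache };
}

// In server.js you would only listen once warm-up has finished, e.g.:
//   warmUp().then(() => app.listen(process.env.PORT));
```

Combined with Preboot, the new dyno does all of this before it ever receives traffic.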

Are databases attached to dynos in heroku?

I want to try out heroku, but am not quite sure if I understand all terms correctly.
I have an app with node.js and redis & my main focus is scaling and speed.
In a traditional environment I would have two servers behind a load balancer; both servers are totally independent, share the same code, and each has its own Redis instance. The servers don't know about each other (the data is synced by a third-party server, but that is not of interest in this case).
I would then put a load balancer in front of them. Now I could easily scale, as both instances are not aware of each other and I could just add more instances if I wish.
Can I mirror that environment with dynos, or can't I attach a Redis instance to a dyno?
If something is unclear, please ask, as I'm new to paas!
As I understand it: I would have a dyno for my node-app and would just add another instance of it. That's cool, but would they share the same redis or can I make them independent?
You'd better forget traditional architectures and try to think of it this way:
A dyno is a process serving HTTP requests - the absolute minimum of an app instance on Heroku. For one application instance you can have as many dynos as you want, and it is totally transparent: no need to think about servers, load balancing, etc. Everything is taken care of.
A Redis instance is basically a service used by the application instance, and therefore by one or more dynos. Again, servers, load balancing, etc. are all taken care of.
Maybe you want to review the How it Works page on heroku.com again now.
You can have as many dynos for one URL as you want - you just change the value in the controller. This is actually one of Heroku's best features: you don't care about servers, you increase the number of dynos and thereby increase the number of requests that can be processed simultaneously.
Same thing with Redis: you don't add instances yourself, you just switch to a more performant plan (see https://addons.heroku.com/redistogo). Again, forget about servers.
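The mental model fits in a few lines: each dyno is an independent process, so anything kept in process memory diverges between dynos, while anything in a backing service (Redis, in your case) is shared. The Map below is a deliberately simplified stand-in for the Redis add-on:

```javascript
// Two simulated "dynos". Each has its own process memory (localHits), but
// both write to one external store - here a Map standing in for a shared
// Redis add-on. Per-dyno state diverges; backing-service state is the
// single source of truth.
const sharedStore = new Map();

function makeDyno() {
  let localHits = 0; // per-process memory: NOT shared between dynos
  return {
    handleRequest() {
      localHits += 1;
      sharedStore.set('hits', (sharedStore.get('hits') || 0) + 1);
      return { localHits, globalHits: sharedStore.get('hits') };
    },
  };
}
```

So to answer the question: your dynos will share the same Redis by default (they all read the same connection config var), and that shared instance is exactly what makes scaling out transparent.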
