Heroku reboots servers every day. After a reboot, my Node server takes around 20 seconds to load a page for the first time. Is there a way to prevent this?
EDIT: You guys seem to be misunderstanding the situation. On Heroku, even production servers must be restarted daily. This is not the same as a free server sleeping. This question is aimed more at preventing lazy loading and pre-establishing connection pools to databases.
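The pre-warming idea from the question can be sketched generically like this (all names below are illustrative placeholders, not a Heroku API): run every expensive initialisation before the server starts accepting traffic.

```javascript
// Hypothetical eager start-up: run all expensive initialisation --
// opening DB connection pools, loading config, compiling templates --
// *before* the server accepts traffic, so the first request after a
// daily restart is fast. initTasks might contain, for example,
// () => pool.query('SELECT 1') to force a real DB connection now.
async function warmThenListen(initTasks, listen) {
  await Promise.all(initTasks.map((task) => task()));
  return listen();
}
```

With node-postgres, for instance, a pool only opens connections lazily on first query, so a throwaway `SELECT 1` at boot opens at least one connection up front instead of during the first user-facing request.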
Old question, but in case others stumble upon it like I did
You can use Heroku's Preboot feature:
Preboot changes the standard dyno start behavior for web dynos. Instead of stopping the existing set of web dynos before starting the new ones, preboot ensures that the new web dynos are started (and receive traffic) before the existing ones are terminated. This can contribute to zero downtime deployments.
You could also combine it with a warmup script like the one described in this Heroku post.
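A minimal warmup script might look like the following — a sketch in the spirit of that post, not its actual code; the route list and URL are placeholders to adapt to your app:

```javascript
// warmup.js -- hypothetical post-deploy warm-up (routes are placeholders).
// Hitting each route once while the new dynos boot means the first real
// visitor doesn't pay the cold-start cost. Requires Node 18+ for fetch.
const ROUTES = ['/', '/api/health']; // the routes worth pre-warming

async function warmUp(baseUrl, routes = ROUTES) {
  const results = [];
  for (const route of routes) {
    try {
      const res = await fetch(baseUrl + route);
      results.push({ route, status: res.status });
    } catch (err) {
      results.push({ route, error: err.message }); // warm-up is best-effort
    }
  }
  return results;
}

// usage: warmUp('https://myapp.herokuapp.com').then(console.log)
```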
Related
I have a REST api node app.
Once it's running on localhost, it runs until I stop the dev debugging, with no errors.
I moved it over to my cPanel hosting, installed a node app.
It starts up the same as localhost.
But after 30 minutes of being idle, it shuts down.
The next request after this restarts the app.
There are no crash or errors in the log, just the restarting messages.
I know this is default behaviour for free hosting like Heroku, but I'm paying for this hosting package.
Does anyone know...
Is this default behaviour for cPanel-hosted node apps, or is my app causing it (using too much memory or CPU, for example)?
Are there any settings that can be edited to change this?
According to the docs, cPanel uses something called Phusion Passenger to run Node.js. In turn, Passenger docs show a default "idle time" of 5 minutes and a default of passenger_min_instances = 1. No idea if cPanel changes the defaults, or if the hosting provider did. I would recommend contacting the hosting provider about the issue in any case, and asking about these options specifically - they may be able to help or tune the service for you.
The startup time for a Node app depends on what it's doing. A REST API could start in milliseconds, whereas a small AI app loading a corpus or training on a dataset (as mine was) could take 30 seconds or more. However, the number of users did not warrant a dedicated server, so the workaround was to call the endpoint from a CRON job, keeping the app alive.
Not perfect, but this sort of thing may also be useful if you are using AWS Lambda to call a third-party service, since Lambda charges based on time taken. Every millisecond counts.
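If an external CRON job isn't available, the same keep-alive can be done from inside the app itself — a sketch, assuming a health endpoint and Node 18+; the URL and interval are placeholders:

```javascript
// Hypothetical self-ping keep-alive: periodically request our own health
// endpoint so the host's idle timer never fires. Pick an interval
// shorter than the host's idle timeout (e.g. Passenger's default 5 min).
function startKeepAlive(url, intervalMs = 4 * 60 * 1000) {
  const timer = setInterval(() => {
    fetch(url).catch(() => {}); // best-effort; ignore transient errors
  }, intervalMs);
  timer.unref(); // don't let the pinger alone keep the process running
  return timer;  // caller can clearInterval(timer) to stop pinging
}
```

Note that a self-ping only helps when the host idles the app based on incoming traffic (as with cPanel/Passenger above); Heroku's daily cycling happens regardless of traffic.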
I've noticed that my app stops working briefly (all processes are killed) during the deployment of a new build. This causes the failure of many critical requests that are crucial to the life of the whole application.
Any suggestions for keeping my apps working no matter what, especially during the deployment of a new version?
Don't use free tier dynos for production apps. The free tier only allows one dyno, so you can't have one spinning up while the other is acting as your production server; hence, apps shut down during new deploys. On a paid tier, the old dyno stays up until the new dyno is ready, then a swap is done.
If you aren't using free tier dynos, this is rather interesting: unless you ran a "restart all dynos", that shouldn't be the case. You could try using multiple dynos. Typically Heroku does a sequential shutdown for multiple dynos, so one remains running.
I have real-time data incoming and I need to process it 24/7, but:
a) Heroku will restart Dynos once a day.
b) Heroku will restart Dynos when code is updated.
Point a can be more or less handled by having multiple dynos: if one restarts, the other is still there.
But for point b, I don't see how I can handle it. If all dynos restart for an update, I'll lose data until they are up again.
Is there any solution?
Yes, you can enable preboot on your dynos.
Preboot changes the standard dyno start behavior for web dynos. Instead of stopping the existing set of web dynos before starting the new ones, preboot ensures that the new web dynos are started (and receive traffic) before the existing ones are terminated. This can contribute to zero downtime deployments.
heroku features:enable preboot -a <myapp>
Do read the entire docs — there are some important considerations and caveats to be mindful of, such as the fact that deployments will now take several minutes to switch over to the new set of dynos.
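Preboot covers deploys, but for truly continuous ingestion it's also worth handling the SIGTERM that Heroku sends before any restart — it allows a grace period (roughly 30 seconds) before SIGKILL. A sketch; `flushBuffer` is a placeholder for your own persistence call:

```javascript
// Hypothetical graceful shutdown: when Heroku restarts a dyno it sends
// SIGTERM first, then SIGKILL after a grace period. Flushing buffered
// data to durable storage here limits what a restart can lose.
function onShutdown(flushBuffer, exit = process.exit) {
  return async () => {
    try {
      await flushBuffer(); // e.g. push queued events to Postgres/Redis
    } finally {
      exit(0);
    }
  };
}

process.on('SIGTERM', onShutdown(async () => {
  // flush in-memory buffers here
}));
```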
I've noticed that my NodeJS application, which resides on a Heroku Hobby dyno, does a "soft restart" after a period of no activity. By that I mean it doesn't do big things like reinitialize the ORM system or recreate the HTTP server, but it does seem to forget callback functions, global variables, and any variables that were dynamically created and held in memory.
Does Heroku still "sleep" even with the Hobby plan or is it something related to NodeJS?
As indicated by Heroku's documentation, Hobby dynos do not sleep.
However, dynos of all types are restarted frequently:
Automatic dyno restarts
The dyno manager restarts all your app’s dynos whenever you:
create a new release by deploying new code
change your config vars
change your add-ons
run heroku restart
Dynos are also restarted (cycled) at least once per day to help maintain the health of applications running on Heroku. Any changes to the local filesystem will be deleted.
I'm not entirely clear what you mean by
it does seem to forget callback functions and global variables or any variables that were dynamically created and held in memory
but at least some of these things could happen due to automatic dyno restarts. Certainly anything that only exists in memory will be lost.
You could manually restart your dynos using heroku ps:restart and see if that replicates the behaviour you are seeing. You may need to adjust your code to survive being restarted.
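One way to "survive being restarted" is to stop keeping anything important only in process memory. A backend-agnostic sketch — the store is anything with `get`/`set`, in practice Redis or Postgres, since the dyno's local filesystem is also wiped on restart:

```javascript
// Hypothetical write-through state wrapper: every update goes straight
// to an external store, so a dyno restart loses nothing. `store` is any
// object with get/set -- a Map works for testing, Redis in production.
class DurableState {
  constructor(store) {
    this.store = store;
  }
  async set(key, value) {
    await this.store.set(key, value); // persist before acknowledging
    return value;
  }
  async get(key) {
    return this.store.get(key);
  }
}
```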
I currently have a Node.js app deployed on a free web dyno on Heroku. As I plan to take it to production, I need to think about a redundancy and failover solution at a reasonable cost.
When I ran "Production Check" on the Heroku Dashboard, it gave me a list of things to do to make the app production-ready. One of the items is "Dyno redundancy": I should have at least 2 web dynos running for failover. Does this mean I should upgrade my free dyno to Hobby or Standard 1X, and do I also need two dynos of the same type, e.g. two Hobby dynos or two Standard 1X dynos?
How does Heroku handle failover from one Dyno to another one?
Thanks!
Heroku shares traffic between all available dynos, distributing requests using a random assignment algorithm. So all your dynos will always be serving incoming traffic.
This provides redundancy, not failover. If one dyno is choking on a very slow request, the app will still be available via the other dynos.
Failover is different. In the case of an application failure (say, the database is inaccessible) Heroku's router offers little help. To deal with more industrial workloads, you could use Amazon Route 53's DNS-level failover, which runs a health check against the backend and will reroute the domain name in the case of a Heroku crash.
However for many use-cases it is probably enough to simply offer a friendly, customised HTTP 503 error page, which you can configure in Heroku, to keep users happy during an outage.