Node.js with forever on Heroku

So I need to run my Node.js app on Heroku. It works very well, but when my app crashes I need something to restart it, so I added forever to package.json and created a file named forever.js with this:
var forever = require('forever');

var child = new (forever.Monitor)('web.js', {
  max: 3,
  silent: false,
  options: []
});

//child.on('exit', this.callback);
child.start();
forever.startServer(child);
In my Procfile (which Heroku uses to know what to start) I put:
web: node forever.js
Alright! Now every time my app crashes it auto-restarts. But from time to time (almost every hour), Heroku starts throwing H99 - Platform error, and about this error they say:
Unlike all of the other errors which will require action from you to correct, this one does not require action from you. Try again in a minute, or check the status site.
But if I manually restart my app the error goes away; if I don't, it may take hours to go away by itself.
Can anyone help me here? Maybe this is a forever problem? A heroku issue?

This is an issue with free Heroku accounts: Heroku automatically kills unpaid apps after 1 hour of inactivity, and then spins them back up the next time a request comes in. (As mentioned below, this does not apply to paid accounts. If you scale up to two servers and pay for the second one, you get two always-on servers.) - https://devcenter.heroku.com/articles/dynos#dyno-sleeping
This behavior is probably not playing nicely with forever. To confirm this, run heroku logs and look for the lines "Idling" and "Stopping process with SIGTERM", and then see what comes next.
Instead of using forever, you might want to try using the Cluster API and automatically create a new child each time one dies. http://nodejs.org/api/cluster.html#cluster_cluster is a good example; you'd just put your code into the else block.
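A rough sketch of that approach (your existing web.js logic would go in the else branch; the port fallback is an assumption):

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU core
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  // Respawn a replacement whenever a worker dies
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, starting a new one');
    cluster.fork();
  });
} else {
  // Your existing web.js code goes here
  http.createServer(function (req, res) {
    res.end('hello from worker ' + process.pid);
  }).listen(process.env.PORT || 5000);
}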
The upshot is that your app is now much more stable, plus it gets to use all of the available CPU cores (4 in my experience).
The downside is that you cannot store any state in memory. If you need to store sessions or something along those lines, try out the free Redis To Go addon (heroku addons:add redistogo).
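For illustration, a minimal sketch of connecting to Redis To Go (it assumes the REDISTOGO_URL config var the add-on sets, with a localhost fallback for development):

var url = require('url');
var redis = require('redis');

var redisUrl = url.parse(process.env.REDISTOGO_URL || 'redis://localhost:6379');
var client = redis.createClient(redisUrl.port, redisUrl.hostname);
if (redisUrl.auth) {
  client.auth(redisUrl.auth.split(':')[1]);   // REDISTOGO_URL embeds the password after the colon
}

// Keep session-like state in Redis rather than in-process memory,
// so any worker or dyno can read it back.
client.set('session:abc123', JSON.stringify({ userId: 42 }));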
Here's an example that's currently running on heroku using cluster and Redis To Go: https://github.com/nfriedly/node-unblocker
UPDATE: Heroku has recently made some major changes to how free apps work, and the big one is they can only be online for a maximum of 18 hours per day, making it effectively unusable as a "real" web server. Details at https://blog.heroku.com/archives/2015/5/7/heroku-free-dynos
UPDATE 2: They changed it again. Now, if you verify your ID, you can run 1 free dyno constantly: https://blog.heroku.com/announcing_heroku_free_ssl_beta_and_flexible_dyno_hours#flexible-free-dyno-hours

Related

How to stop Heroku from restarting dynos

I have a Node.js app hosted on Heroku. I am paying $7 a month for the better plan, which has me running on the next tier of dynos with SSL. My problem is that I have a cron job in my app that runs every minute. It is VERY important that this runs every minute and pretty much never misses. However, it sometimes doesn't run, and after debugging a little bit, I believe the cause is that the dyno restarts itself.
So I was wondering if there is a way to schedule the app to restart instead of having it do it whenever it likes, or if my cron job is actually the problem and I can't do what I'm looking for. Any ideas?
EDIT: Here's the cron job code:
var CronJob = require('cron').CronJob;

var sendTexts = new CronJob('*/1 * * * *', function () {
  // code that sends texts if event is true
}, null, true);
It should run every minute, and it does locally when my server is up, but again the issue seems to be with restarting dynos.
Dynos are restarted (cycled) at least every 24 hours; restarting manually (with the Heroku CLI, for example) resets the 24-hour period.
You could consider restarting your app every X hours to try to manage that; however, you must consider:
Dynos can be restarted randomly by Heroku (after a platform error)
upon restart your cron job starts immediately, so you are going to have executions before a whole minute has passed
You might want to consider an architectural change using a DB or a queue, which allows you not to rely on the application always running (see the sketch below).
In cloud-based architecture it is never a good idea to assume a single instance (container) is always available.
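A rough sketch of that kind of change, assuming a hypothetical Postgres table scheduled_texts(id, phone, body, send_at, sent): the work to be done is persisted, and whichever dyno happens to be running picks up anything overdue, including minutes missed during a restart.

var { Pool } = require('pg');
var pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function processDueTexts() {
  var { rows } = await pool.query(
    'SELECT id, phone, body FROM scheduled_texts WHERE send_at <= now() AND sent = false'
  );
  for (var row of rows) {
    await sendText(row.phone, row.body);   // hypothetical helper that actually sends the text
    await pool.query('UPDATE scheduled_texts SET sent = true WHERE id = $1', [row.id]);
  }
}

// Poll every minute; a tick missed during a restart is simply caught up on the next one.
setInterval(processDueTexts, 60 * 1000);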

Why is my Azure node.js app becoming unresponsive?

I recently deployed a Node.js backend service to Azure and have the following problem. The service becomes unresponsive after a certain amount of time, and only comes back to life if an external request is sent. The problem is that it takes about 3 minutes for the container to start back up and actually return the request. I'm running Node 14 LTS. I also added a health check yesterday, but Azure simply doesn't bother actually keeping the app alive (the Azure metrics confirm this).
I verified Azure is actually trying to reach the correct endpoint, and it does. I also have "Always On" enabled. I also verified that the app itself is not crashing. I log every request, and all of a sudden requests are no longer received, which means the health endpoint doesn't respond either, but this does not result in a container restart. It just waits for an external request to appear and then decides to start everything back up, which takes too long.
I feel like it's some kind of configuration issue, because the app itself is not very complex and I never experienced crashes when doing local development.
The official documentation tells us that on the Free pricing tier you are currently using, Always On does not take effect.
How do I decrease the response time for the first request after idle time?

Doing tasks before Heroku Node.js server is ready

When deploying a new release, I would like my server to do some tasks before actually being released and listen to http requests.
Let's say that those tasks take around a minute and are setting some variables: until the tasks are done I would like the users to be redirected to the old release.
Basically do some nodejs work before the server is ready.
I tried a naive approach:
doSomeTasks().then(() => {
  app.listen(PORT);
});
But as soon as the new version is released, all HTTP requests made during the tasks fail instead of being redirected to the old release.
I have read https://devcenter.heroku.com/articles/release-phase, but it looks like I can only run an external script, which is not good for me since my tasks are setting cache variables.
I know this is possible with /check_readiness on App Engine, but I was wondering whether Heroku has an equivalent.
You have a couple options.
If the work you're doing only changes on release, you can add a task as part of your dyno build stage that will fetch and store data inside of the compiled slug that will be deployed to virtual containers on Heroku and booted as your dyno. For example, you can run a task in your build cycle that fetches data and stores/caches it as a file in your app that you read on-boot.
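A sketch of what that could look like, assuming a hypothetical scripts/prefetch.js and data URL, wired into the Node.js buildpack's heroku-postbuild hook:

// scripts/prefetch.js — runs at build time, so its output is baked into the compiled slug
var fs = require('fs');
var https = require('https');

https.get('https://example.com/data.json', function (res) {   // assumed data source
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    fs.mkdirSync('cache', { recursive: true });
    fs.writeFileSync('cache/data.json', body);                 // read this file when the dyno boots
  });
});

and in package.json:

"scripts": {
  "heroku-postbuild": "node scripts/prefetch.js"
}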
If this data changes more frequently (e.g. daily), you can utilize “preboot” to capture and cache this data on a per-dyno basis. Depending on the data and architecture of your app you may want to be cautious with this approach when running multiple dynos as each dyno will have data that was fetched independently, thus this data may not match across instances of your application. This can lead to subtle, hard to diagnose bugs.
This is a great option if you need to, for example, pre-cache a larger chunk of data and then fetch only new data on a per-request basis (e.g. fetch the last 1,000 posts in an RSS feed on-boot, then per request fetch anything newer—which is likely to be fewer than a few new entries—and coalesce the data to return to the client).
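For example, a rough sketch of that pattern (fetchFeed here is a hypothetical stand-in for whatever actually pulls your feed entries):

var express = require('express');
var app = express();
var cachedPosts = [];

// hypothetical helper: fetch feed entries, optionally only those newer than opts.since
async function fetchFeed(opts) {
  return [];   // placeholder for the real feed-fetching logic
}

async function boot() {
  cachedPosts = await fetchFeed({ limit: 1000 });        // grab the bulk of the data once, on boot
  app.listen(process.env.PORT || 5000);
}

app.get('/posts', async function (req, res) {
  var newestDate = cachedPosts.length ? cachedPosts[0].date : null;
  var fresh = await fetchFeed({ since: newestDate });    // usually only a handful of new entries
  cachedPosts = fresh.concat(cachedPosts);               // coalesce and return to the client
  res.json(cachedPosts);
});

boot();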
Here's the documentation on customizing a build process for Node.js on Heroku.
Here's the documentation for enabling and working with Preboot on Heroku
I don't think it's a good approach to do it this way. You can use an external script (an npm script) to do this task and then use the release phase. The situation here is very similar to running migrations: you can require the needed libraries in the script, and you can even load the whole application in the script without listening on a port. Let's make it clearer with an example:
// script file (e.g. set_cache.js)
var client = require('cache_client');
// here you can require all of the needed libraries for the script,
// then execute your logic using sync APIs
client.setCacheVar('xyz', 'xyz');
Then in package.json, under "scripts", add this script; let's assume you named the file set_cache.js:
"scripts": {
"set_cache": "set_cache",
},
Now you can run this script with npm run set_cache, and use that command in the Procfile:
web: npm start
release: npm run set_cache

Using 'cron' module in node.js on heroku server

I am using cron in Node.js to schedule a function that sends text messages at user-determined times. It works on my local server, but when I deploy to Heroku the functions never get called.
I'll elaborate a little bit more on what #rsp said above, so that if anyone else finds this question they'll understand why using Heroku Scheduler is the correct answer here.
When you're running software on Heroku, what happens is that Heroku will take your project, and run it on a random dyno (server) in the Heroku collection of servers on Amazon.
For a number of reasons (including to help distribute application load evenly across a large number of Amazon servers), Heroku will periodically move your dyno from one Amazon server to another. This happens many times per day, automatically, behind the scenes.
This means that your application code will be periodically restarting all the time when running on Heroku.
Now -- this isn't a bad thing from an end-user perspective, because while your application code is restarting, Heroku will queue up incoming requests, then just send them to your application once it's been successfully restarted on a new host. So to the end user, this behavior is 100% transparent.
What's important to know here, though, is that since your application code may randomly restart, you shouldn't use it to do things like run long tasks that take a while to finish, or queue up jobs to be executed in the future (what the cron module does).
Instead: Heroku created a free scheduler addon that you can use to basically say "Hey Heroku, run this Node script every minute|hour|etc."
The scheduler addon Heroku provides will reliably execute your cron task because Heroku keeps track of that timing stuff separately (outside of your application logic).
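For illustration, here's a sketch of the kind of standalone script the Scheduler would run for you (the file name and the texts module are assumptions, not part of the original question):

// bin/send-texts.js — invoked by Heroku Scheduler instead of an in-process cron
var texts = require('../lib/texts');   // hypothetical module containing the send logic

texts.sendDueTexts()
  .then(function () {
    process.exit(0);   // exit cleanly so the one-off dyno shuts down
  })
  .catch(function (err) {
    console.error(err);
    process.exit(1);
  });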
Hopefully this helps!
I'm using a cron job on Heroku with Node. Here is my top-level code:
var CronJob = require('cron').CronJob;

var dailyJob = new CronJob({
  cronTime: '0 0 0 * * *',
  onTick: function () {
    // Do daily function
    console.log('I get called 1 time a day.');
  },
  start: false
});

dailyJob.start();
On Heroku you may need to use the Heroku Scheduler.
See: https://devcenter.heroku.com/articles/scheduler

OpenShift HAProxy scaling is just not working

I've been trying to get OpenShift's HAProxy scaling working with my NodeJS Express 4 app (it's essentially a REST API), but I haven't had much luck.
I'm using loader.io's stress testing tools, with a mere 100 users/minute (ramps up from 0), as I'm sure at least NodeJS/Express should be able to handle that. Now granted, this does generate roughly 10-20k requests in 60 seconds, but still.
What happens after the requests start pounding the server, is that I can see CPU go up, memory stays pretty solid and HAProxy's log file is letting me know that it's about to scale up.
It never does. HAProxy crashes before it can scale, then I lose the SSH connection to the OpenShift host. It comes back after a while, though.
At one point I did see that it was hitting the default 128 connection limit, then trying to spin up another gear, but since the requests kept coming in, I'm guessing it just couldn't handle it?
At first I thought that it was due to using a small gear, as I was running 'top' and saw that the CPU load spiked through the roof and I eventually disconnected.
I deleted the app and switched to small.highcpu gears (which cost money per hour).
Still crashes when it's supposed to scale up (with less than 100 concurrent users).
The small.highcpu gear does do something different, though: after it restarts, it adds a new gear, but it does NOT scale down (even though all traffic has stopped), so I have to scale down manually.
If I leave the second gear up and try to stress test again with 100 users within 1 minute, HAProxy still goes down (memory usage and CPU seem to be OK) and I lose the SSH connection shortly afterwards. Also, this time it does NOT come up by itself. I also receive the following error in my NodeJS app:
{ [Error: socket hang up] code: 'ECONNRESET' }
{ [Error: socket hang up] code: 'ECONNRESET', sslError: undefined }
If I manually restart HAProxy after this (I kinda have to since it's not coming up), I can see that the local-gear is down, while the second gear is up, meaning that my NodeJS app crashed on the first gear, but stayed online on the second gear.
Is this really intended behaviour? Should I be doing something differently when dealing with NodeJS and HAProxy?
I really can't justify paying for a service such as this, if I can't even handle 100 users/minute, since I'm certain that I will eventually peak far beyond a 100.
UPDATE: Here's a loader.io graph/report, which kinda shows when HAProxy is giving up:
http://ldr.io/1tV2iwj
UPDATE 2: I tried using Blitz instead of loader.io, just to be certain on when HAProxy goes crazy. Blitz ended up with 12k hits, 26k errors and 4k timeouts.
Additionally, HAProxy went down and seemed like it would never come back up. This time I decided to wait, and after a few minutes, the local-gear DID come back up. It didn't bring up any additional gears, though.
Here's also what HAProxy was telling me when the Blitz test happened (before it crashed and I disconnected):
==> app-root/logs/haproxy_ctld.log <==
I, [2014-10-13T07:14:48.857616 #74934] INFO -- : add-gear - capacity: 143.75% gear_count: 1 sessions: 23 up_thresh: 90.0%
==> app-root/logs/haproxy.log <==
[WARNING] 285/071506 (74918) : Server express/local-gear is DOWN, reason: Layer7 timeout, check duration: 10002ms. 0 active and 0 backup servers left. 128 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 285/071506 (74918) : proxy 'express' has no server available!
[WARNING] 285/071511 (74918) : Server express/local-gear is DOWN for maintenance.
UPDATE 3: Tried again with Blitz, this time HAProxy/NodeJS didn't come back up, but instead got stuck on the following line (I can still SSH in):
DEBUG: Sending SIGTERM to child...
There's not much of a pattern here, except that HAProxy isn't doing what it's supposed to be doing: scaling.
I'm fairly confident that it's not my NodeJS app at fault here, as it's not reporting any errors (to the log file or to New Relic).
Your gear is running out of memory, and thus all of your processes are being killed. (That's why you are also getting kicked out of your SSH session.) When that happens, it could potentially put the HAProxy configuration in a bad state, and if it does not automatically repair itself on a restart, I would consider that to be a bug.
