AWS Elastic Beanstalk instance keeps deleting installed binaries after it restarts - node.js

I have a Node.js application deployed on Elastic Beanstalk (configured using CodePipeline). The application schedules several cron jobs (using the npm module node-cron) at different times of the day, each serving a different purpose.
From the logs, I observed that one of the cron jobs was memory-intensive and consumed most of the instance's resources, which caused the EB instance to restart on its own. To address this, I moved that service to a Lambda function, which reduced the load considerably. Things have run smoothly most of the time, but the instance still restarts on its own occasionally.
Each restart wipes out all the binaries I had installed manually, such as mongodump, which I use to back up the production database and upload the dump to AWS S3. I have to reinstall these binaries from scratch every single time.
I've been stuck on this problem for months now and have wasted a lot of time fixing it manually. Any help will be greatly appreciated. Thanks in advance!
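
For context: Elastic Beanstalk treats instances as disposable, so anything installed by hand over SSH is lost whenever the instance is rebuilt or replaced. The usual fix is to script such installs so they run on every deployment. A minimal sketch as a platform hook, assuming an Amazon Linux 2 based platform; the MongoDB version and repo file below are assumptions to adjust:

    #!/bin/bash
    # .platform/hooks/prebuild/01_install_mongo_tools.sh
    # Runs on every deployment, so mongodump survives instance replacement.
    # The script must be committed with executable permissions; hooks run as root.
    set -euo pipefail

    if ! command -v mongodump >/dev/null 2>&1; then
      # Assumed repo definition; match it to your MongoDB version.
      cat > /etc/yum.repos.d/mongodb-org-6.0.repo <<'EOF'
    [mongodb-org-6.0]
    name=MongoDB Repository
    baseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/6.0/x86_64/
    gpgcheck=1
    enabled=1
    gpgkey=https://pgp.mongodb.com/server-6.0.asc
    EOF
      yum install -y mongodb-database-tools
    fi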

Related

Processing termination of activated Application Host after creation of Service Fabric application

We have roughly 12 Service Fabric clusters running in Azure. They run correctly in both our Production and Test environments, but we recently found that one of them will not start locally. We have not run this one locally in quite a while, and I am having a hard time tracking down what might have caused this error. It happens on any machine I try to run it on locally.
Specifically, after the type is registered, and the app created, the host process immediately terminates:
"Message": "EventName: ApplicationProcessExited Category: StateTransition EventInstanceId 158f38d1-47ac-4b70-9830-0d8d3cdf8f9c ApplicationName fabric:/Office.Ocv.CustomerTalkback.AutomatedService.ServiceFabric Application terminated, ServiceName=fabric:/Office.Ocv.CustomerTalkback.AutomatedService.ServiceFabric/MS.Internal.Office.Ocv.Services.CustomerTalkback.Automated, ServicePackageName=MS.Internal.Office.Ocv.Services.CustomerTalkback.Automated.Package, ServicePackageActivationId=d58e53d1-af22-42fb-9003-3154bcb8d00b, IsExclusive=True, CodePackageName=Code, EntryPointType=Exe, ExeName=MS.Internal.Office.Ocv.Services.CustomerTalkback.Automated.exe, ProcessId=16756, HostId=e27ccd9d-cff6-4317-b168-5a4b7b724808, ExitCode=2147516563, UnexpectedTermination=True, StartTime=06/18/2019 15:47:26. ",
This is dotnet core 2.2.0. All of our Service Fabric apps are running with the same settings/dependencies, etc. Only this one fails locally.
I have tried moving the local cluster to a larger drive (800 GB free) and deploying manually via PowerShell instead of through VS 2019 (our usual method).
Any help (even if it is just a suggestion of troubleshooting steps) would be much appreciated, as I have been working on this for about 16 hours over the last three days.
thanks!
The problem turned out to be the full name of the assembly. The local path was short (like d:\src\adm), but the full assembly name was ~65 characters. The PowerShell script that deploys locally appears to fail silently in this case. When I dropped the name length down to about 35 characters, it started working.

Beanstalk fails to abort deploy on npm start failure

We have some apps running on Beanstalk across several environments, so I'm fairly experienced with configuring Beanstalk and running NodeJS apps on it.
But there is one issue I could never understand or find a solution for:
If, for some reason, npm start (or whatever is configured as the start command) fails with a non-zero exit code, Beanstalk does not abort the deployment and mark it as failed; it just keeps retrying in an endless loop. Over time this consumes all the CPU, and there have been situations where Beanstalk completely lost control of the environment and a full rebuild was the only option.
I did manage to work around it using the Immutable deployment method. With this option the deploy WILL fail, because the new instances' health never turns green, and the whole thing rolls back. But I'm not really happy with this because 1) it doesn't address the issue, only masks it, and 2) immutable deploys take forever to complete/roll back.
Any ideas?
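For reference, the immutable workaround described above can also be applied from the command line rather than through the console. A minimal sketch using the AWS CLI (the environment name is a placeholder):

    # Switch the environment's deployment policy to Immutable so that a
    # failing "npm start" makes the new instances fail health checks and
    # the deployment roll back. "my-env" is a placeholder.
    aws elasticbeanstalk update-environment \
      --environment-name my-env \
      --option-settings \
        Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=Immutable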

How can I deploy a web process and a worker process with Elastic Beanstalk (node.js)?

My heroku Procfile looks like:
web: coffee server.coffee
scorebot: npm run score
So within the same codebase, I have 2 different types of process that will run. Is there an equivalent to doing this with Elastic Beanstalk?
Generally speaking, Amazon gives you much more control than Heroku, and with that increased power comes increased configuration. Amazon optimizes (both technically and in billing) based on the kind of task you're performing, so you configure web and worker environments separately and deploy to them separately. Heroku does this for you, but in some cases you may not want to deploy both at once; Amazon leaves that choice up to you.
Now, don't get me wrong, you might see Heroku's approach as a feature, but in advanced setups you might have entire teams working on and redeploying workers independently of your web tier. The default on Amazon is therefore two completely separate environments that may happen to share source code (but don't have to).
So the answer to your question is no: there is nothing that will do what you're asking in as simple a manner as Heroku. That doesn't mean it's impossible; it just means you need to set up the environments yourself instead of Heroku doing it for you.
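For example, with the EB CLI you can create the two environments from the same codebase; a minimal sketch (application and environment names are placeholders):

    # One application, two environments sharing the same source.
    eb init my-app --platform node.js --region us-east-1
    eb create my-app-web                     # web server environment
    eb create my-app-worker --tier worker    # worker tier environment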
For more info see:
Worker "dyno" in AWS Elastic Beanstalk
http://colintoh.com/blog/configure-worker-for-aws-elastic-beanstalk

Not able to run meteor in cloud ide, need help to understand meteor memory usage

I’m new to both meteor and web frameworks [Core C/C++ developer].
When I tried Meteor apps in a cloud IDE (both Cloud9 and Koding), the sample apps run fine. But if I add the twbs:bootstrap package, the IDE kills Meteor (MongoDB) due to insufficient memory (Cloud9 provides 768 MB and Koding 1 GB).
I also noticed that disk usage grows from an initial 60 MB to 200+ MB, just from adding that one package (twbs:bootstrap).
Hence, I'm not able to proceed further with Meteor in the cloud. Is it normal for Meteor to use this much RAM and disk space? If so, why does it use so much memory? Wouldn't this be a problem for real production web apps?
Please guide me.
The first time you install a package and start Meteor, it tries to update the package and Meteor itself (if a newer version exists). This can take up a lot more memory than usual. I have been able to get around it by running meteor update and then restarting the Meteor server. Note that sometimes even meteor update complains of being out of memory, but it should still complete. If it truly runs out of memory, the terminal will say 'Killed'; contact support in that case.
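The workaround in shell form (run these from the project directory):

    meteor update   # may warn about memory, but should normally complete
    meteor          # then restart the dev server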
I have tried using the bootstrap package and have been able to make it work on Cloud9 workspaces using the technique above (full disclosure: I work at Cloud9). We try to keep the Meteor version up to date because of this issue, but if you have an older workspace you might still run into it each time the Meteor version increases.
The other thing I've noticed is that memory consumption tends to increase with each hot reload. If the workspace starts complaining, simply shut the Meteor server down and restart it; memory should return to normal levels.
Hope this helps!

Does AWS EB instance automatically restart when crashed?

I have a NodeJS instance running on Amazon Elastic Beanstalk. I would like to know if the instance will automatically restart Node.js if it crashes.
Do I have to use foreverjs?
Thank you
TLDR - Use foreverjs.
So there are two types of restarts. One is where the code throws an exception and Node stops, while the OS keeps running. From the OS's perspective, Node decided to exit; none of its business. This is where foreverjs plays a role: it watches Node and restarts it if it ever stops due to an exception, error, etc.
The second type of restart is a machine reboot, which you might need after a kernel panic or similar. AWS will not reboot the machine for you, any more than your desktop would reboot itself; you have to do it yourself (but really, try to debug it before letting it serve production traffic again). I've run a fair number of servers and this isn't a common issue. The best way to deal with it is redundancy: have other servers step in if one fails in such a stark manner.
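A minimal sketch of the foreverjs approach (the entry point server.js is a placeholder):

    npm install -g forever
    forever start server.js   # restarts the app whenever it exits
    forever list              # show processes under supervision
    forever stop server.js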
