Pomelo app kill and remove servers automatically - node.js

Under what circumstances does a pomelo app kill itself?
The issue I am facing is that pomelo kills its own application without throwing any exception. When I run the command 'pomelo list', it shows that no server is running. The app closes on its own after a few hours of running. I have gone through the logs generated by pomelo on each server, but there was no exception or blockage for the app.
Below are the details that might help you guys:
We are using the distributed server structure of the pomelo framework
MongoDB on a different server
Redis instance on the master server itself
Pomelo version 1.2.1
We are also running the same architecture for our other multi-player games but have never run into such issues. Can anybody explain why this might be happening? If you need any other info, please ask.
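One way to narrow down a silent exit like this is to hook the process-level events in each game server's entry script so that whatever ends the process leaves a trace in the logs. This is only a diagnostic sketch, not pomelo-specific; console.error stands in for whatever logger you actually use:

```js
// Diagnostic sketch only (not pomelo-specific): hooks that log why a Node
// process is exiting. Add something like this near the top of each server's
// entry script; console.error here stands in for your real logger.

process.on('exit', (code) => {
  // fires for normal and error exits, but NOT for SIGKILL or an OOM kill
  console.error(`[exit-trace] process exiting with code ${code}`);
});

process.on('uncaughtException', (err) => {
  console.error('[exit-trace] uncaught exception:', err.stack || err);
  process.exit(1); // keep the default "crash on uncaught exception" behaviour
});

process.on('unhandledRejection', (reason) => {
  console.error('[exit-trace] unhandled promise rejection:', reason);
});

['SIGTERM', 'SIGINT', 'SIGHUP'].forEach((sig) => {
  process.on(sig, () => {
    console.error(`[exit-trace] received ${sig}`);
    process.exit(1);
  });
});
```

If these hooks log nothing and the processes still disappear, check the kernel log (dmesg) for the OOM killer and check whether anything external (a cron job, a deploy script, or the master process) is stopping the servers.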

Related

ASP.NET Core IHostedService stops when hosted on linux

I have an ASP.NET Core 3.1 application with an Angular 8 frontend. It runs fine when hosted on IIS, but since I moved it onto a new Ubuntu 18 server with Nginx in front of Kestrel, the long-running background processes (IHostedService) sometimes stop working. The app keeps accepting new requests, so only the background process is stopped.
These processes receive files from clients and give immediate responses with a process id. The clients can query the process state by that id. Everything had been running fine for months on IIS, so the new config must have some limit that kills these processes. I suppose there is some Kestrel or Nginx option I don't know about that affects processes started by HTTP requests.
What options can I try and where can I get some logs?
I've tried to log everything from .NET Core, but even the most verbose logs are useless here. The Nginx logs don't contain any info about the stopped process either.
Although the application runs fine hosted on IIS, I tried to find catch blocks without any output and added logging to them, but still nothing. Is there anything I can add to my application globals to log all exceptions, handled or unhandled?
I forgot to mention that I use a local Microsoft SQL Server Express instance on both Windows and Linux. The Linux SQL Server install was done following the official MS docs (as were the dotnet and Nginx configs). The database was restored from a Windows SQL Server backup. The connection string is the same, with multipleresultsets=true. Are there any differences I should be aware of?
For anyone getting here in the future: this was caused by a bug in Microsoft.Data.SqlClient, so I had to update it (independently of EF Core 3.1.2) from NuGet to the newer 1.1.2 version.
When it got stuck I had two threads waiting for each other, both in SqlClient. With 'Just My Code' enabled, the VS debugger stopped at one of my LINQ queries. The only interesting part was that it never threw any exceptions, and there was no deadlock event on the SQL Server either. It just waited there, so all the logs were empty.
https://github.com/dotnet/efcore/issues/18480
https://github.com/dotnet/SqlClient/issues/262

"Error: read ECONNRESET" on Node-RED when writing to InfluxDB

I have just started with Node-RED and InfluxDB, and I would like to apologise if this is a very silly question.
There was a network disconnection on my server earlier. After reconnecting the server to the network, the error Error: read ECONNRESET shows up frequently whenever an MQTT signal is received and Node-RED tries to write it into InfluxDB.
A little background on my work: I am working on an Industrial IoT project where each machine sends signals via MQTT to Node-RED, which processes them and logs them into InfluxDB. The code had been running without issue before the network disconnection, and I have seen other posts stating that restarting Node-RED would solve the problem, but I cannot afford to restart it unless I schedule a time with the factory; until then, more data will be lost.
"Error: read ECONNRESET"
This error is happening at many different InfluxDB nodes, not in a single specific incident. Is there any way to resolve this without having to restart Node-RED?
Thank you
Given that it's not storing any data at the moment, I would say take the hit and restart Node-RED as soon as possible.
The other option, if you are on a recent Node-RED release, is to just restart the flows. You can do this from the drop-down menu on the Deploy button. This leaves Node-RED running and just stops all the nodes and restarts them, which will be quicker than a full restart.
I assume you are using the node-red-contrib-influxdb node. It looks to be using the influx npm module under the covers. I can't see anything obvious in the docs about configuring it to reconnect after a failure with the database. I suggest you set up a test system and try to reproduce this by restarting the DB; if you can, open an issue against node-red-contrib-influxdb on GitHub and see if they can work out how to get it to reconnect after a failure.
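For illustration, this is roughly the kind of retry logic the underlying influx client call could be wrapped in. As far as I can see node-red-contrib-influxdb does not expose anything like it, so treat it as a sketch of what a patch to the node or a small standalone writer might do; the host, database and measurement names are placeholders:

```js
// Sketch only: retry wrapper around the 'influx' npm client that
// node-red-contrib-influxdb uses. Host, database and measurement names are
// placeholders, and the backoff values are arbitrary.
const Influx = require('influx');

const influx = new Influx.InfluxDB({
  host: 'localhost',   // assumption: local InfluxDB
  database: 'factory', // assumption: your database name
});

async function writeWithRetry(points, attempts = 5) {
  for (let i = 0; i < attempts; i++) {
    try {
      await influx.writePoints(points);
      return;
    } catch (err) {
      // ECONNRESET and similar transient errors: wait and try again
      console.error(`write failed (${err.code || err.message}), retry ${i + 1}/${attempts}`);
      await new Promise((resolve) => setTimeout(resolve, 1000 * (i + 1)));
    }
  }
  throw new Error('giving up after repeated write failures');
}

// example usage with a made-up machine signal
writeWithRetry([
  { measurement: 'machine_signal', tags: { machine: 'm1' }, fields: { value: 42 } },
]).catch((err) => console.error(err));
```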
There was a power outage one day and I restarted the whole system. Now the database is working fine. It worked, and I don't know why. Hope this helps.

MEAN Stack Express server going down

I am running a Node.js + AngularJS application on a cloud server using the MEAN stack. The application is terminating every hour or sooner.
I have a few thoughts and would like someone to tell me which might be the cause.
I SSH in as root and start the service using this command:
NODE_ENV=production PORT=80 grunt serve:dist
Do I need forever to run this properly?
Should I use a server user (similar to apache) that can run the application?
If yes, how do I do this?
We do not have a deployment engineer on our team, but it is annoying not to be able to keep the app running on the server after developing the application. Please help diagnose the problem.
If you don't want to use a deployment service (MS Azure, AWS, Heroku, etc., which would probably be a lot easier), then yes, you would have to use something like forever to restart your server every time it crashes. It's really odd that your app terminates after an hour, though; it'd be helpful if you could describe why that's happening.
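For example, instead of (or alongside) the forever CLI, you can wrap the app with the forever-monitor package so it is restarted automatically whenever it crashes. This is only a sketch; 'server.js', the restart limit and the environment values are assumptions based on the command in the question:

```js
// launcher.js - sketch using the forever-monitor package.
// 'server.js', max and env are assumptions; adapt them to your app
// (the question starts the app through grunt, which can also be wrapped).
const forever = require('forever-monitor');

const child = new forever.Monitor('server.js', {
  max: 50,      // give up after 50 crashes in a row
  silent: false,
  env: { NODE_ENV: 'production', PORT: '80' },
});

child.on('restart', () => {
  console.error(`server restarted (${child.times} restarts so far)`);
});

child.on('exit', () => {
  console.error('server crashed too many times, giving up');
});

child.start();
```

Note that binding to port 80 requires root (or a capability such as CAP_NET_BIND_SERVICE), which is one reason a common setup is to run the Node app as an unprivileged user on a high port and put nginx or a load balancer in front of it.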

NodeJs not staying live in aws

I have deployed a Bitnami AMI of NodeJS on an AWS micro instance. After starting my node app, everything works fine.
After some time without any activity, the app, which is attached to port 3000, seems to shut down. When this happens, refreshing the page gives the following message in my browser:
Network Error (tcp_error)
A communication error occurred: "Connection refused"
The Web Server may be down, too busy, or experiencing other problems preventing it from responding to requests. You may wish to try again at a later time.
The AWS console shows the instance is still running and the Bitnami build still responds with the standard message on port 80.
Forever (https://github.com/nodejitsu/forever) is also a useful tool for this kind of thing, and it gives you a little more control than nohup or screen.
As we discussed in the comments, the problem was that the node process was bound to the SSH session.
You can use nohup or screen to launch the node process so that it is not bound to the session.
I suggest using screen, because being able to reattach to the launched session is essential for maintenance/updating.
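If you want a node-only alternative to nohup/screen, a small launcher that spawns the server detached from the controlling terminal achieves the same "survives the SSH session" effect. This is just a sketch; 'app.js' and 'out.log' are placeholders:

```js
// start.js - sketch of a launcher that detaches the server from the terminal,
// so closing the SSH session does not kill it. 'app.js' and 'out.log' are
// placeholders for your entry script and log file.
const { spawn } = require('child_process');
const fs = require('fs');

const out = fs.openSync('out.log', 'a');

const child = spawn(process.execPath, ['app.js'], {
  detached: true,              // put the child in its own process group
  stdio: ['ignore', out, out], // keep stdout/stderr somewhere useful
});

child.unref(); // let this launcher exit without waiting for the child
console.log(`started app.js with pid ${child.pid}`);
```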
Related: How to run process as background and never die
Related: Command-Line Interface tool to run node as a service
Besides configuring an EC2 instance, you can also use AWS's PaaS solution, Elastic Beanstalk. It also has support for Node.js, and it's super easy to deploy your apps using this service.

node.js app running using forever inaccessible after a while

I have a Node.js server and a Java client communicating using socket.io. I use this API https://github.com/Gottox/socket.io-java-client for the Java client. I am using the forever module to run my server.
Everything works well, but after some time my server becomes inaccessible and I need to restart it. Also, most of the time I have to update/edit my Node.js server file to get the server working again after the restart. It's been two weeks already and I still keep restarting my server :(.
Has anyone run into the same problem? Any solution or advice would be appreciated.
Thanks
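A first diagnostic step for a server that degrades over hours like this is to log per-socket errors and disconnect reasons, plus the process's memory use over time, to see whether connections or memory are leaking. A rough sketch, assuming the classic socket.io server API; the port and interval are placeholders:

```js
// Sketch: basic leak/diagnostic logging for a socket.io server.
// Port and interval are placeholders; adapt to your own server setup.
const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  console.log(`connected: ${socket.id} (total sockets: ${io.engine.clientsCount})`);

  socket.on('error', (err) => {
    console.error(`socket ${socket.id} error:`, err);
  });

  socket.on('disconnect', (reason) => {
    console.log(`socket ${socket.id} disconnected: ${reason}`);
  });
});

// log memory use every minute; a steady climb points to a leak that would
// explain the server becoming unresponsive after a while
setInterval(() => {
  const m = process.memoryUsage();
  console.log(`rss=${(m.rss / 1048576).toFixed(1)}MB heapUsed=${(m.heapUsed / 1048576).toFixed(1)}MB`);
}, 60 * 1000);
```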
