Issues with a Node app deployed through Elastic Beanstalk - node.js

I'm deploying a Node app to an EC2 instance through AWS Elastic Beanstalk. I set up a cron job with the cron node package that, on tick, runs a Sequelize query, parses the data returned, then sends it in the body of an email.
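Roughly, the job is shaped like this (the schedule, query, and mail settings below are placeholders rather than my real values):

    // Rough shape of the job; the schedule, query, and mail settings are placeholders.
    const { CronJob } = require('cron');
    const nodemailer = require('nodemailer');
    const { Sequelize, QueryTypes } = require('sequelize');

    const sequelize = new Sequelize(process.env.DATABASE_URL); // real config described below
    const transporter = nodemailer.createTransport({ /* SMTP settings for the mail provider */ });

    const job = new CronJob('0 8 * * *', async () => {
      // Run the query, turn the rows into an email body, and send it.
      const rows = await sequelize.query('SELECT * FROM report_data', { type: QueryTypes.SELECT });
      await transporter.sendMail({
        from: 'reports@example.com',
        to: 'me@example.com',
        subject: 'Daily report',
        text: JSON.stringify(rows, null, 2),
      });
    });
    job.start();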
When testing locally, it works fine and the email gets sent. When I deploy it using the awsebcli command eb deploy, it says the deploy was successful, but I don't receive any emails.
At first I believed the npm start command wasn't working on the server, but I checked the error logs, and it appears Sequelize is throwing a timeout error when trying to connect.
I wrote a configuration for Sequelize to connect to multiple schemas at once. Three of those schemas are hosted on the same RDS instance, one on a separate RDS instance.
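The shape of that config is roughly the following (host names, schema names, credentials, and the MySQL dialect are placeholders, not my actual values):

    // Rough sketch of the multi-schema config; hosts, names, and dialect are placeholders.
    const Sequelize = require('sequelize');

    const sharedOptions = { host: 'primary-instance.example.rds.amazonaws.com', dialect: 'mysql' };

    // Three schemas on the same RDS instance, one Sequelize instance per schema.
    const usersDb = new Sequelize('users', process.env.DB_USER, process.env.DB_PASS, sharedOptions);
    const ordersDb = new Sequelize('orders', process.env.DB_USER, process.env.DB_PASS, sharedOptions);
    const billingDb = new Sequelize('billing', process.env.DB_USER, process.env.DB_PASS, sharedOptions);

    // One schema on a separate RDS instance.
    const reportingDb = new Sequelize('reporting', process.env.DB_USER_2, process.env.DB_PASS_2, {
      host: 'reporting-instance.example.rds.amazonaws.com',
      dialect: 'mysql',
    });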
I've done almost exactly the same thing with another Node app and it worked fine. The only difference is the additional schema on a separate RDS instance, which I can connect to fine from my local machine.
Any thoughts or suggestions would be appreciated.
EDIT: Checked server logs and found a Sequelize connection error.

Found that the issue was caused by security groups on AWS preventing my instance from connecting to one of the DBs it needed.
Edit:
Specifics have been requested. Since this is a very old post and I don't have access to AWS anymore, I can only venture a guess at what I did.
If memory serves, the DB I was blocked by was hosted in a different AWS account. Changing the security group on that DB was not an option, as security on that account was firmly maintained. The reason I was able to connect locally was that the facility I was working at had a whitelisted IP in the DB's security groups. I eventually settled on running the script on my local machine, since my machine rarely left that location and it did not matter where the script ran, just that it ran periodically. Ideally, though, I would have changed the security group on the DB to allow incoming traffic from my server.
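In code terms, the ideal fix would have been something along these lines, run from an account with permission to modify the DB's security group (group IDs, port, and region are placeholders, and this sketch assumes both groups live in the same account, which was not my situation):

    // Hypothetical sketch only: allow the Beanstalk instances' security group to reach
    // the RDS instance on the MySQL port. Group IDs and region are placeholders.
    const AWS = require('aws-sdk');
    const ec2 = new AWS.EC2({ region: 'us-east-1' });

    ec2.authorizeSecurityGroupIngress({
      GroupId: 'sg-RDS-PLACEHOLDER', // security group attached to the RDS instance
      IpPermissions: [{
        IpProtocol: 'tcp',
        FromPort: 3306, // MySQL; use 5432 for PostgreSQL
        ToPort: 3306,
        UserIdGroupPairs: [{ GroupId: 'sg-EB-PLACEHOLDER' }], // the Beanstalk instances' group
      }],
    }, (err, data) => {
      if (err) console.error('Failed to add ingress rule:', err);
      else console.log('Ingress rule added:', data);
    });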

Related

Mongo connecting to localhost although the conn string is right across the app

So I have a Node API hosted on AWS Beanstalk, connected to a DocumentDB database hosted within the same VPC. Unfortunately I can't share too much info on the project since I've inherited it, but currently we have the terrible logic of creating a new db for each tenant in the system, so when a tenant signs in we create a new mongoose connection (with the proper string). On top of that there is also a global connection to a central db which is supposed to be established when the API first starts - this works as expected. What doesn't work, for some reason, is that whenever I make a request that's supposed to go through for a certain tenant I get the following error in the AWS Beanstalk logs:
(screenshot of the error)
Unfortunately, I can't fully disclose the connection strings, but I can tell you that there are no references to "localhost" within the project (noted in the picture):
(screenshot of the project-wide search result)
Has anyone encountered similar issues?
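For context, the tenant connection logic is shaped roughly like this (the names, env vars, and the URI builder are placeholders, not the actual code):

    // Rough shape of the connection logic; names, env vars, and the URI builder are placeholders.
    const mongoose = require('mongoose');

    // Global connection to the central db, opened once at startup (this part works).
    mongoose.connect(process.env.CENTRAL_DB_URI, { useNewUrlParser: true, useUnifiedTopology: true });

    // Placeholder for however the per-tenant connection string is actually assembled.
    const buildTenantUri = (tenantId) => `${process.env.DOCDB_BASE_URI}/${tenantId}`;

    const tenantConnections = new Map();

    // One additional connection per tenant, created lazily when the tenant signs in.
    function getTenantConnection(tenantId) {
      if (!tenantConnections.has(tenantId)) {
        tenantConnections.set(tenantId, mongoose.createConnection(buildTenantUri(tenantId), {
          useNewUrlParser: true,
          useUnifiedTopology: true,
        }));
      }
      return tenantConnections.get(tenantId);
    }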

Is it safe to use AWS Elastic Beanstalk for background tasks? (no frontend accessible via HTTP)

I have deployed a Node.js app to my AWS EB instance (with MySQL inside EB too), but my Node.js app is not creating any server; it is just a background task: a couple of websockets that I want to keep connected 24/7 to save data in MySQL.
It seems to be working, but maybe it is not safe to do that, because AWS is showing some warnings saying the HTTP requests are not working. That is obvious, but I'm not sure whether it could have side effects; I want to be sure my Node.js + MySQL app will keep running 24/7.
It's totally safe to do that.
My guess is that the warning you see is because Beanstalk is trying to work out whether your environment is healthy or not.
Maybe you can expose an endpoint that returns 200 OK and set up the monitoring to check that URL.
Another way, not recommended, is to disable the monitoring.
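A minimal sketch of that first suggestion, assuming Express is available and Beanstalk's health check is pointed at /health:

    // Minimal health-check endpoint so the environment reports healthy while the
    // real work happens in the background. Port and path are assumptions.
    const express = require('express');
    const app = express();

    app.get('/health', (req, res) => res.status(200).send('OK'));

    // Node platforms on Beanstalk default to port 8080 unless PORT says otherwise.
    app.listen(process.env.PORT || 8080);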

How to solve the error SequelizeHostNotFoundError?

Problem essence
I'm writing an API server in Node.js (Express) with a PostgreSQL database (connecting to it remotely through the ElephantSQL service and working with it through Sequelize). Today the error "SequelizeHostNotFoundError" appeared. It occurs even on endpoints that were already tested and working.
Error text
(only a screenshot of the error remains)
My attempts to solve the problem
Tried to perform a GET request to my API not via Postman but via the browser (did not help).
Tried to create a new DB on the same ElephantSQL service (didn't help; the migrations to create new tables and relationships somehow ran, but the endpoints still don't work).
Tried to connect to the database directly via the DataGrip IDE (the connection test succeeds and the database loads with all its tables).
What could be the problem? On Stack Overflow some wrote that the problem may occur due to the lack of a paid subscription to Google Cloud Functions, but I don't use it. There is the option of connecting to PostgreSQL locally, but I want to understand the cause.
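For reference, the connection is set up roughly like this (the env var name and pool numbers are mine, not from the real project):

    // Rough sketch of the Sequelize setup; the env var name and pool size are assumptions.
    const { Sequelize } = require('sequelize');

    // ElephantSQL hands out a single postgres:// URL; the free plans only allow a
    // handful of concurrent connections, so the pool is kept small.
    const sequelize = new Sequelize(process.env.ELEPHANTSQL_URL, {
      dialect: 'postgres',
      pool: { max: 4, min: 0, idle: 10000 },
      logging: false,
    });

    // Surfaces the underlying error (SequelizeHostNotFoundError wraps a DNS lookup failure).
    sequelize.authenticate()
      .then(() => console.log('Connected'))
      .catch((err) => console.error('Connection failed:', err));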

Node MongoDB error: No valid replicaset instance servers found

We have multiple APIs running on an AWS instance on different ports. One of the APIs is giving the error "No valid replicaset instance servers found". The other APIs are working fine. There are no messages in the MongoDB log. I tried the other options mentioned on Stack Overflow, such as increasing the timeout, but they did not help.
The same API works properly when run on a local server. I restarted the AWS instance and it started working fine.
I'm not able to replicate the issue. Any direction on finding the root cause would help; I'd like to avoid this happening again and would appreciate any areas to investigate further.
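For anyone landing here, this is roughly where the replica-set name and the timeouts we tried tuning go (hosts, set name, and values are placeholders; this sketch assumes the 3.x Node driver):

    // Rough sketch of the connection options; hosts, replica-set name, and timeouts are placeholders.
    const { MongoClient } = require('mongodb');

    const uri = 'mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0';

    MongoClient.connect(uri, {
      useNewUrlParser: true,
      useUnifiedTopology: true,
      serverSelectionTimeoutMS: 30000, // how long to wait for a usable replica-set member
      connectTimeoutMS: 30000,
    }, (err, client) => {
      if (err) return console.error('Mongo connection failed:', err);
      console.log('Connected to replica set');
    });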

PouchDB on PaaS (Heroku, Bluemix, etc.)

I've gotten some great feedback from Stackoverflow and wanted to check on one more idea.
Currently I've got a webapp that runs Node.js on a PaaS (Heroku, and I'm trying out Bluemix). The server is configured to talk to a CouchDB (hosted on Cloudant). There are two types of data saved to the db: first, user data (each user will have its own database), and second, the app data itself (metrics, user account info, auth/admin stuff).
After some great feedback from here, the idea is that after the user logs in, they will sync their local (browser) PouchDB instance with Cloudant (probably proxied through my server, as was recommended here).
Now the question is, for the app/admin data, maybe I run a CouchDB instance on my server so I'm not making repeated network calls for things like user logins, metrics data, etc. The data would not be very big, and it is already separated from the user data calls. The point is to have a faster, local instance mainly for authentication; changes/updates get synced outside of user requests.
The backend is in the Express web framework, and it looks like my option is PouchDB.... to sync to the Cloudant instance?
If I want local db access (backed by a CouchDB instance) on a Node/Express server running on a PaaS, is that the recommended setup?
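In other words, something roughly like this on the server (the db name and Cloudant URL are placeholders):

    // Rough sketch of the idea: a local PouchDB for the app/admin data that does
    // live replication with Cloudant; db name and URL are placeholders.
    const PouchDB = require('pouchdb');

    const localAppDb = new PouchDB('app-data'); // LevelDB-backed store on the server's own disk
    const remoteAppDb = new PouchDB('https://USER:PASS@ACCOUNT.cloudant.com/app-data');

    // Reads (logins, metrics lookups) hit the local copy; changes flow both ways in the background.
    localAppDb.sync(remoteAppDb, { live: true, retry: true })
      .on('error', (err) => console.error('Sync error:', err));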
Thanks vm for any feedback,
Paul
Not sure if you found a solution, but this is what I would try.
Because Heroku clears any temp data, you wouldn't be able to run a default express-pouchdb database; you will need to change PouchDB from using the file system to using a LevelDOWN adapter (link to PouchDB adapters: https://pouchdb.com/adapters.html).
Some of these adapters would include:
https://github.com/watson/mongodown
https://github.com/kesla/mysqldown
https://github.com/hmalphettes/redisdown
You can easily get a Heroku mongo, mysql, or redis addon and connect that to your express-pouchdb backend.
This way you will be able to keep your data.
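A minimal sketch of that setup, assuming the redisdown adapter and a Heroku Redis addon (any of the adapters above should slot in the same way):

    // Sketch: back express-pouchdb with a LevelDOWN adapter instead of the filesystem,
    // so data survives Heroku dyno restarts. redisdown is an assumption; Redis connection
    // details go wherever the chosen adapter's docs say.
    const express = require('express');
    const PouchDB = require('pouchdb');
    const redisdown = require('redisdown');

    // Every database created through this constructor is stored in Redis.
    const RedisPouchDB = PouchDB.defaults({ db: redisdown });

    const app = express();
    app.use('/db', require('express-pouchdb')(RedisPouchDB));
    app.listen(process.env.PORT || 3000);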
