Where to store configuration of an Elastic Beanstalk application? - node.js

I created a small Node.js application which runs on AWS Elastic Beanstalk. At the moment the application configuration is stored in a JSON file. I want to create a frontend to manipulate some parts of this configuration, and I read about the MEAN stack, but Amazon has no managed MongoDB support. So what is the best practice in AWS Elastic Beanstalk for handling application configuration? Storing it in an S3 bucket is very easy, but I think the performance is not very good.
Best regards

How much configuration data are you talking about? If it is a typically small amount that only changes once in a while, but you need it available each time the application restarts, S3 is probably the easiest and cheapest option. Spinning up a MongoDB instance just to store a small amount of mostly read-only data is probably overkill. What makes you think the performance is not very good?
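If you do go the S3 route, loading the JSON config once at startup keeps S3 latency out of the request path. A minimal sketch with the AWS SDK for Node.js, assuming a hypothetical bucket and key:

```javascript
// Minimal sketch: load a JSON config from S3 once at application start
// (aws-sdk v2). Bucket and key names are hypothetical.
const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'us-east-1' });

async function loadConfig() {
  const res = await s3.getObject({
    Bucket: 'my-app-config', // hypothetical bucket
    Key: 'config.json'
  }).promise();
  return JSON.parse(res.Body.toString('utf-8'));
}

loadConfig()
  .then((config) => {
    console.log('Loaded config:', config);
    // start the application with `config` here
  })
  .catch((err) => {
    console.error('Could not load config from S3:', err);
    process.exit(1);
  });
```

Since the config is only read on restart, the S3 round trip is paid once, not per request.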

AWS usually recommends DynamoDB for such cases, but then you get vendor lock-in. The choice of configuration storage also depends on your requirements: how fast do new changes need to be applied to the instances?
A good option is to use MySQL as the configuration database: you avoid vendor lock-in, you can deliver configuration changes as soon as they have been applied, and from the application you can use MySQL's memcached interface.
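For comparison, reading a configuration item from DynamoDB is also only a few lines. A sketch with the AWS SDK, where the table name and key schema are assumptions:

```javascript
// Minimal sketch: fetch a configuration item from DynamoDB (aws-sdk v2).
// Table name and key schema are hypothetical.
const AWS = require('aws-sdk');

const db = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

async function getConfig(name) {
  const res = await db.get({
    TableName: 'app-config',  // hypothetical table
    Key: { configName: name }
  }).promise();
  return res.Item;
}

getConfig('feature-flags').then(console.log).catch(console.error);
```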

Related

Microservices on GCP

I am looking to use GCP for a microservices application. After comparing AWS and GCP, I have decided to go with Google because one major requirement for the project is to schedule tasks to run in the future (Cloud Tasks), which AWS does not seem to offer an equivalent of.
I am planning on containerizing my services and deploying to GCP using Cloud Run with a Redis cluster running as well for caching.
I understand that you cannot have multiple Firestore instances running in one project. Does this mean that all of my services will be using the same database?
I was looking to follow a model (possible on AWS) where each service had its own database instance that it reached out to.
Is this pattern possible on GCP?
Firestore is indeed limited to a single database instance per project for the moment. For performance that is usually not a problem, but for isolation, as in your use case, that can indeed be a reason to look elsewhere.
Firebase's original Realtime Database does allow multiple instances per project, and recently added a REST API for provisioning database instances. Since each Google Cloud Project can be toggled to also be a Firebase project, you could consider that.
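As a sketch of what that looks like from Node, the firebase-admin SDK can address several Realtime Database instances from one app by URL; the instance URLs below are made up:

```javascript
// Minimal sketch: one Firebase app, multiple Realtime Database instances.
// Instance URLs are hypothetical.
const admin = require('firebase-admin');

const app = admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: 'https://service-a.firebaseio.com' // default instance
});

const dbA = app.database(); // default instance, used by service A
const dbB = app.database('https://service-b.firebaseio.com'); // second instance

dbA.ref('orders/123').once('value')
  .then((snap) => console.log(snap.val()))
  .catch(console.error);
```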
Does this mean that all of my services will be using the same database?
I don't know all the details of your case. Do you think you could deploy one "microservice" per project? Not ideal, especially if they are to communicate using Pub/Sub, but it may be an option. In that case every "microservice" may get its own Firestore if that is a requirement.
I don't think one should consider GCP projects as some kind of "hard boundary". For me they are just another level of granularity, in addition to folders, etc.
There might be some benefits to a "one microservice, one project" approach as well: for example, less dependent lifecycles, better (more accurate) security, and maybe simpler development workflows.

How to simply edit a local config file through an API

On all our servers we have some .env files which set the config for the (Node.js) server on start.
Now I want to edit these files from an admin panel (another web service, working with the main server through an API).
Are there any best practices, or just good ideas, for how I can realize that?
First idea: create another web server on the instance, which would have only two API endpoints (read, write) and which would restart the server after editing the configs. This idea looks too heavy.
Second idea: create a bash script which polls the admin server for the current configs and rewrites the local .env file if it finds changes (see the sketch below), but this produces a lot of unnecessary requests: a request every minute, while the configs change about once per month.
What do you think? Any ideas?
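A minimal sketch of the second (polling) idea in Node itself, assuming a hypothetical admin endpoint and .env path:

```javascript
// Minimal sketch of the polling approach: fetch the config from the admin
// service and rewrite .env only when it changed. URL and file path are
// hypothetical.
const fs = require('fs');
const https = require('https');

const CONFIG_URL = 'https://admin.example.com/api/config'; // hypothetical
const ENV_PATH = '/srv/app/.env';                          // hypothetical

function fetchConfig() {
  return new Promise((resolve, reject) => {
    https.get(CONFIG_URL, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(body));
    }).on('error', reject);
  });
}

async function syncEnv() {
  const remote = await fetchConfig();
  const local = fs.existsSync(ENV_PATH) ? fs.readFileSync(ENV_PATH, 'utf-8') : '';
  if (remote !== local) {
    fs.writeFileSync(ENV_PATH, remote);
    console.log('.env updated - restart or signal the app here');
  }
}

setInterval(() => syncEnv().catch(console.error), 60 * 1000);
```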
You have a couple of options, and it depends primarily on your deployment strategy.
If you have a distributed environment and/or your configuration changes often (e.g. running multiple Docker containers, rotating keys, etc.), I'd highly recommend using a K/V store and reading configuration(s) dynamically during application start. Check out HashiCorp Vault, etcd or even MongoDB.
If your configuration contains sensitive data, definitely use something like HashiCorp Vault. If you use a configuration tool like Ansible, it has ansible-vault, which will encrypt your secret(s) at rest and decrypt them during deployment.
I would highly advise against storing (even potentially) sensitive data such as API keys, tokens, etc. in version control. This is a pretty big attack vector and will lead you down a dark road.
Worst-case scenario, use environment variables. Almost all CI/CD tooling supports these and you can maintain separation of concerns.
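The environment-variable route needs the least machinery. A sketch of a config module built on process.env, with invented variable names:

```javascript
// Minimal sketch: read config from environment variables with safe
// defaults. Variable names are hypothetical.
const config = {
  port: parseInt(process.env.PORT || '3000', 10),
  dbUrl: process.env.DATABASE_URL || 'postgres://localhost/dev',
  apiKey: process.env.API_KEY // no default for secrets: fail fast instead
};

if (!config.apiKey) {
  throw new Error('API_KEY is not set');
}

module.exports = config;
```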

Easiest server and database services available for deploying an application (AWS specifically)

I have written a real-time multiplayer game and I am currently writing its server in Node.js. I want my game to have login, levelling up, etc., so I need to have a database. This is the first time I am deploying something and I am mostly self-taught, so please correct me if I am mixing things up. Since this is my first trial, I do not want to make much commitment right away, so I am looking for free options only. And since this should be a real-time game, I need relatively fast server responses. That is why I am looking for the easiest database and server provider that would do, and I am aware that with those restrictions I have limited choices and functionality.
As far as I have read online, Heroku seems to be my simplest option for a server (that is why I started writing in Node.js). However, it seems there is no free database service, since all options on https://devcenter.heroku.com/articles/heroku-postgres-plans have a monthly fee. I did not want to use Google App Engine since I am new (it certainly is not mentioned as beginner-friendly).
So I found AWS through the Free Cloud Database Service for home development post; it seems like I could use Amazon Web Services for both the server and the database. However, most posts I have encountered suggest Google App Engine or Heroku, with little mention of AWS. Is this because I am mixing concepts up, or does AWS have drawbacks that I am not aware of? Do you think it is a good idea to use AWS both as the server and the database? Is it possible to use Heroku as the server while using AWS as the database? Or do you have any other suggestions?
Note: Sorry for the question bombardment but those are all related and I am sort of lost in this topic so I had to ask...
Use AWS EC2 for the server and RDS for the database. The reason people use Heroku is that it deploys to a custom URL very quickly (it's easy to set up). Setting up AWS requires some knowledge of how servers work, but it's not that complicated (and it's free for small apps). Best of luck!
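For what it's worth, connecting a Node server to an RDS MySQL instance works like connecting to any other MySQL host. A sketch with the mysql2 package, where the endpoint, credentials and table are placeholders:

```javascript
// Minimal sketch: query an RDS MySQL instance with mysql2.
// Endpoint, credentials and table are hypothetical.
const mysql = require('mysql2/promise');

async function main() {
  const conn = await mysql.createConnection({
    host: 'mygame.abc123.us-east-1.rds.amazonaws.com', // hypothetical endpoint
    user: 'gameserver',
    password: process.env.DB_PASSWORD,
    database: 'game'
  });

  const [rows] = await conn.execute(
    'SELECT id, level FROM players WHERE name = ?',
    ['alice']
  );
  console.log(rows);
  await conn.end();
}

main().catch(console.error);
```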

Store the temporary data in Couchbase or Redis?

I have a Node.js project that uses Couchbase as its database.
I'm just wondering whether I should store the temporary data in
1. Redis
or in
2. Couchbase directly.
As far as I know there is a socket delay with Couchbase, so I think storing the temporary data in Redis while storing the permanent data in Couchbase is better.
Does anyone have experience with this?
Your comments are welcome.
I'm a big Redis fan, but in this situation I would use Couchbase only.
Couchbase is rather efficient, and comparable to the performance of memcached when the working set of your data fits in memory. Most of the time, an extra caching layer on top of Couchbase is not useful.
That said, if you really need a caching layer, or simply some storage for temporary data, you can simply create a memcached bucket hosted in the Couchbase cluster. So you would have an "eventually persistent" bucket for your persistent data, and a memcached bucket for the temporary data.
The bucket types are described here:
http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/#data-storage
In that context, adding Redis as a extra storage layer does not really make sense.
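As a sketch of that two-bucket setup with the 2.x Node SDK, where the bucket names are assumptions:

```javascript
// Minimal sketch: persistent data in a Couchbase bucket, temporary data
// in a memcached bucket with an expiry (Node SDK 2.x). Bucket names are
// hypothetical.
const couchbase = require('couchbase');

const cluster = new couchbase.Cluster('couchbase://127.0.0.1');
const persistent = cluster.openBucket('app-data');  // Couchbase bucket
const temporary = cluster.openBucket('app-cache');  // memcached bucket

// Permanent document: no expiry
persistent.upsert('user::42', { name: 'alice', level: 7 }, (err) => {
  if (err) throw err;
});

// Temporary entry: evicted after 60 seconds
temporary.upsert('session::42', { token: 'abc' }, { expiry: 60 }, (err) => {
  if (err) throw err;
});
```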
Couchbase has a managed cache built into it, even for Couchbase buckets. So it already has a caching layer and adding another one on top just sounds superfluous.
I am not sure what you mean by a socket delay in Couchbase. Can you perhaps explain more about that? That is not something I have ever seen before, and it sticks out as suspect to me. I would try to troubleshoot this and figure out what it is before looking to add Redis to the mix and having yet another layer to manage and code against. Without knowing more about the socket delay, it is difficult to make more recommendations.
It's an old question, but I'll have my take at it as well, if nothing else then for the people coming across it via Google, just as I did.
I agree with the accepted answer, in that Couchbase keeps the most recently used documents in RAM. In that respect, it does the same as Redis. The advantage of Couchbase is of course that the data can reliably spill over the RAM limit, and over the server disk limit, automatically, by adding more nodes.
However, I have a project where I am considering using Redis alongside Couchbase. It is basically intended as a caching server, but for the "calculated" items, such as HTML snippets or other things. Couchbase is a fantastic document store, but making lists and other structures doesn't come that easy, especially not without a lot of views. So I'm thinking of using Redis as a temporary datastore for the ad-hoc data manipulation needed, and Couchbase as the main datastore.

Should I store an image in MongoDB or in the local file system (with Node.js)?

I use Node.js for my project.
Should I store an image in the local file system, or should I store it in MongoDB?
Which way is more scalable?
The most scalable solution is to use a shared storage service such as Amazon's S3 (or craft your own).
This allows you to scale horizontally a lot easier when you decide to add machines to your application layer, as you won't have to worry about any migration nightmares.
The basic idea behind this is to keep the storage layer decoupled from the application layer. Using this idea, you could create a Node.js process on a separate machine that accepts file uploads and then writes them to disk.
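A sketch of that upload path with the AWS SDK, where the bucket name is a placeholder:

```javascript
// Minimal sketch: push an uploaded image to S3 instead of local disk
// (aws-sdk v2). Bucket name is hypothetical.
const fs = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'us-east-1' });

async function storeImage(localPath, key) {
  await s3.upload({
    Bucket: 'my-app-images',              // hypothetical bucket
    Key: key,                             // e.g. 'avatars/42.png'
    Body: fs.createReadStream(localPath),
    ContentType: 'image/png'
  }).promise();
  return `https://my-app-images.s3.amazonaws.com/${key}`;
}

storeImage('./avatar.png', 'avatars/42.png').then(console.log).catch(console.error);
```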
If you're designing a performance-sensitive system, use the file system to store your images, no doubt.
You can find a performance comparison in this blog post:
http://blog.thisisfeifan.com/2013/12/mongodb-gridfs-performance-test.html
Actually, you can find the recommended MongoDB GridFS use cases here:
https://docs.mongodb.com/manual/core/gridfs/#when-to-use-gridfs
I would use GridFS to take advantage of sharding, but for best performance I would use the file system with nginx.
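For completeness, a sketch of streaming an image into GridFS with the official Node driver; the connection string and file names are made up:

```javascript
// Minimal sketch: store an image in MongoDB GridFS (official driver).
// Connection string and file names are hypothetical.
const fs = require('fs');
const { MongoClient, GridFSBucket } = require('mongodb');

async function main() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const bucket = new GridFSBucket(client.db('media'));

  fs.createReadStream('./avatar.png')
    .pipe(bucket.openUploadStream('avatars/42.png'))
    .on('finish', () => {
      console.log('Image stored in GridFS');
      client.close();
    })
    .on('error', console.error);
}

main().catch(console.error);
```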
