1. My technology stack for the above application is Express.js, Node.js, MongoDB, Redis, and S3 (storage).
2. The API is hosted on a Linux AMI.
3. I need to create a Docker container image for my application.
First of all, you will need to decide whether to keep everything inside a single container (monolithic; I can't really recommend it) or to separate the concerns and run a separate Express/Node.js container, a MongoDB container, and a Redis container. S3 is a managed service that you cannot run yourself.
If you choose the latter approach, there are already officially supported images on Docker Hub for Redis and Mongo. For the actual app server (Node), declare Express as a dependency of your app, start the official Node image with an npm install command (which pulls Express in), and then npm start (or whatever command you use for it). Don't forget to mount your code as a volume for this to work.
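For illustration, that volume-based setup boils down to roughly this one-liner (the image tag, working directory, port, and start command are assumptions to adapt):

```sh
# run the official Node image with your code mounted as a volume
docker run -it --rm \
  -v "$PWD":/usr/src/app \
  -w /usr/src/app \
  -p 3000:3000 \
  node:18 sh -c "npm install && npm start"
```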
Now, bear in mind that if your app uses any reference data inside MongoDB, you should make sure to insert it when the MongoDB container starts, or create an image based on the official MongoDB one that already has said data in it!
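If you go the image-based route, the official mongo image runs any scripts it finds in /docker-entrypoint-initdb.d on first initialization, so a seed image can be as small as this (seed.js is a placeholder for your own script):

```dockerfile
# image with reference data baked in: the official entrypoint executes
# /docker-entrypoint-initdb.d/*.js against the database on first run
FROM mongo:6
COPY seed.js /docker-entrypoint-initdb.d/
```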
Another valuable note: you should pass all connection settings into your Express.js app as environment variables. That way you can change them when deploying your app container (useful for when you distribute your system across several hosts).
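As a sketch, reading those connection settings in the Express app could look like this (the variable names are just examples, not a convention from the thread):

```js
// config.js - all connection settings come from the environment,
// with localhost fallbacks for local development
module.exports = {
  mongoUrl: process.env.MONGO_URL || 'mongodb://localhost:27017/app',
  redisUrl: process.env.REDIS_URL || 'redis://localhost:6379',
  s3Bucket: process.env.S3_BUCKET,
};
```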
At the end of the day, you would then start the containers in this order: MongoDB, Redis, and then Node/Express. The connection to S3 should already be handled inside your Node app, so it is irrelevant in this context; just make sure the Node app can reach the bucket!
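Put together, a minimal docker-compose sketch of that ordering could look like the following (image tags, ports, and variable names are assumptions to adapt):

```yaml
# docker-compose.yml - depends_on makes Compose start mongo and
# redis before the app container
services:
  mongo:
    image: mongo:6
  redis:
    image: redis:7
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      MONGO_URL: mongodb://mongo:27017/app
      REDIS_URL: redis://redis:6379
    depends_on:
      - mongo
      - redis
```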
If you just want to build a monolithic container, start with a Debian Jessie image, get a shell inside the container, install everything as you would on a server, get your code running, and commit the image to your repo; then use it to run your app. Still, I cannot recommend this approach at all!
BR,
I apologize if this is a general question, but apparently what I'm googling doesn't make sense, and I don't have anyone at my work I can ask.
Here's my situation:
Simple Vue.js app created with the Vue CLI 3.5, with an <input type="file">. Nothing hooked up yet when the form submits.
Docker container with Node and NGINX that pulls in and builds my Vue app - this is working. I basically copied the Dockerfile right from the vue.js site.
Now I need to store the file from the web app (and eventually FTP it to another server).
It was suggested I use Node, but I'm new to all the server-side stuff, so I don't know how to do that. My container has both Node and NGINX in it running the app, so can I stick with one container? And how do I build locally for development and then transfer that to the Docker image?
I'm just looking for pointers/articles/tutorials to help me think about this the right way.
Attention, opinion ahead:
In my experience, NGINX is more of a router that manages traffic.
It's in the very name of Node to run a single one per task, so I think it's useful to split those into separate containers.
Node is similar to the frontend, except you have access to common system APIs, wrapped by Node.
Basically, just like you consume REST or GraphQL APIs on the frontend, you can do the same with Node.
Best is to simply peek into the basics you're most likely going to need: file system, streams, etc.:
https://nodejs.org/api/
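To make that concrete, here is a minimal sketch of receiving the uploaded file in Express using the multer middleware (multer and all paths here are my own example choices, not something from the thread):

```js
// server.js - accept the file posted by the <input type="file">
const express = require('express');
const multer = require('multer');

const app = express();
const upload = multer({ dest: 'uploads/' }); // uploaded files are written here

// 'file' must match the name attribute of the file input
app.post('/upload', upload.single('file'), (req, res) => {
  // req.file.path is the stored location; FTP it onward from here
  res.json({ stored: req.file.path });
});

app.listen(3000);
```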
New to this...
I'm trying to understand if a modern MEAN app should be deployed with 3 or 2 Docker containers:
Option 1: Express server as one container + MongoDB as another
Option 2: All three as separate Docker containers
The second option sounds like the appropriate path, so you can update any part of the stack without taking down other components if you don't want to. But then the question is: does the ng app container need its own server to serve the ng app files? I'm seeing some examples on GitHub where they run the ng app with ng serve -H 0.0.0.0 from the Docker container, which from my understanding is a no-no because that's not a production-ready server, just webpack's dev server.
To me, if you run all three separately, then you actually need two servers: one to serve the ng app (index.html, JS, CSS, etc.) and the other to serve the backend app, the API.
The advantage I see is that if you run the Express server + ng app in one container, you can serve the initial index.html with the ng app dependencies AND the API, but then they both go down whenever either gets updated.
What's the best practice here?
IMHO, 2 containers seems like the better solution, with one for Mongo and one for Express. Any time you're pushing new code, it doesn't make sense to have a front end still up while the back end is down, or vice versa. Also, serving the front-end files from the same server reduces the headaches of dealing with CSRF.
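As a sketch of that single-server setup, the same Express instance can serve the built ng app alongside the API (the dist path and the route are placeholders):

```js
// server.js - one Express server for both the static frontend and the API
const express = require('express');
const path = require('path');
const app = express();

app.use(express.static('dist/my-app')); // output of `ng build`

app.get('/api/items', (req, res) => res.json([])); // your API routes

// fall back to index.html so Angular's client-side routing works
app.get('*', (req, res) => {
  res.sendFile(path.resolve('dist/my-app/index.html'));
});

app.listen(3000);
```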
Regarding your other question, I think you can deploy your front end to something like AWS S3 and still only manage one server for your backend.
On a side note, you could also do it all in one container. It really depends on your other requirements to figure out the best architecture.
I am trying to start using NoFlo in my existing microservice architecture, and I want to start out with an HTTP server so that I can mount it on my proxy and play/test with it.
You can find the repository here.
I am using Docker (Compose) to manage some services (with a Dockerfile and start-docker.sh), but they also all have local startup scripts (start-local.sh). Both scripts run npm scripts to start the servers with their injected env vars.
I have some questions:
Should the starting point of the application be the server.js file, or a .fbp Graph?
What do I put in my package.json to start the server?
When I have started all the Docker containers with Docker Compose and the NoFlo server is running, will I be able to program an HTTP server using Flowhub.io?
Whether you want to run your process with a custom Node.js script (and embed NoFlo inside), or run NoFlo as the top-level control flow, doesn't really matter that much.
For the former case, build and run your Docker image just like you would any other Node.js one.
For the latter case, you may want to execute the graph via noflo-nodejs. If you want to make the graph live programmable from the outside (with for instance Flowhub), you should also expose the FBP protocol port.
You can find a simple example of running a NoFlo graph via Docker here:
https://github.com/flowhub/bigiot-bridge/blob/master/Dockerfile
For easier switching between running in Docker vs. running locally, one great option is to make the noflo-nodejs command the start script in package.json.
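For example (the graph path is a placeholder, and noflo-nodejs flag names can differ between versions, so treat this as a sketch):

```json
{
  "scripts": {
    "start": "noflo-nodejs --graph graphs/main.fbp"
  }
}
```

With that in place, npm start locally and CMD ["npm", "start"] in the Dockerfile both run the same thing.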
I am trying to dockerize a NodeJS application. I have followed this tutorial https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
It is really easy and works.
But I need to extend the functionality a little bit.
My infrastructure won't contain only the Node.js container; it will have many more containers that are linked together with the help of a docker-compose file.
What I need
I am going to use HTTPS for my application, so I need to provide my SSL certificates in a folder mounted from the host machine. But I guess I need to restart the Express app in order to apply changes. How can I handle this use case when I have other containers running?
I need to be able to restart the Node.js app without restarting the container.
Could you please suggest the right strategy to follow in order to implement this properly?
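One way to get that behavior, as a sketch: run the Node process under a small supervisor inside the container, so the process can restart while the container keeps running. Here that supervisor is nodemon watching the mounted certificate folder (the /certs path, the file extensions, and the entry file are all assumptions):

```dockerfile
# nodemon restarts the Node process (not the container) whenever a
# file under the mounted /certs folder changes
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install && npm install -g nodemon
COPY . .
CMD ["nodemon", "--watch", "/certs", "--ext", "pem,key,crt", "server.js"]
```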
I have a requirement. Is there a way to run Node.js apps inside Go? I need to wrap the Node.js app inside a Go application and, in the end, produce a Go binary that starts the Node.js server and can then call the Node.js REST endpoints. I need to encapsulate in the Go binary the entire Node.js application with node_modules and, if necessary, the Node.js runtime.
Well, you could make a Go program that includes, e.g., a zipped Node application that it extracts and starts, but it will be very hard to do well: you will have huge binaries, delays in extracting files, potential portability problems, etc. Usually, when you want to call REST endpoints, you host your Node app on some server and let the client app (the Go app in your example) connect to it. The advantages are that it is much faster, the app is much smaller, you don't have portability issues with Node binaries and addons, and you can quickly update your backend any time you want.
It would be a very bad idea to embed a Node.js app into your Go binary, for various reasons such as size, pushing security updates, etc.
However, if you feel strongly that they should go together, you could easily create a Docker container with these two (a Go server + a Node app) and launch them via Docker. You can set the entrypoint to a supervisord daemon so that both your Node server and your Go server are brought up when the container runs.
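A minimal supervisord sketch of that (program names, paths, and commands are placeholders):

```ini
; supervisord.conf - start both processes in the foreground; make it the
; entrypoint, e.g. CMD ["supervisord", "-c", "/etc/supervisord.conf"]
[supervisord]
nodaemon=true

[program:node-app]
command=node /app/server.js

[program:go-server]
command=/app/go-server
```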
If you are planning to deploy via Kubernetes, you can create two individual Docker containers (one for the Go server, one for the Node server) but always deploy them together as a pod.
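That pod-level pairing could look like this (image names are placeholders):

```yaml
# pod.yaml - both containers share the pod's network namespace,
# so the Go server can reach the Node app on localhost
apiVersion: v1
kind: Pod
metadata:
  name: go-node
spec:
  containers:
    - name: go-server
      image: example/go-server:latest
    - name: node-app
      image: example/node-app:latest
```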
There are multiple projects for embedding binary files and/or file-system data into your Go application.
Look at the 'Alternatives' section of the 'vfsgen' project:
https://github.com/shurcooL/vfsgen#alternatives