docker-compose / nginx / SPA - node.js

I want to use docker-compose with two containers: nginx, and another one that runs a Node.js application. The Node.js application is a single-page application (SPA) plus an Express API server.
I want nginx to serve the SPA's static files. The problem is that my app container compiles the SPA when it starts, so the nginx container never has the files.
I do not want to create a data volume for it, as I want the "composed" environment not to depend on external state.
I was thinking of something like a transient volume (a volume created on docker-compose up and removed afterwards), but this feature does not seem to exist.
Another way would be to serve the static files over NFS from the app container and let nginx read them, but I am not sure how good or bad that would be.
What's the best practice to run this environment?
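The usual way out of this is a multi-stage build: compile the SPA in a throwaway build stage and bake the artifacts into the nginx image itself, so no shared volume is needed at all. A minimal sketch, assuming the build emits static files into dist/ and the Express API listens on port 3000 (file names and paths are assumptions about your layout):

    # Dockerfile.web (sketch): build the SPA, then serve it from nginx
    FROM node:20 AS builder
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci
    COPY . .
    RUN npm run build                  # assumed to emit static files into /app/dist

    FROM nginx:alpine
    COPY --from=builder /app/dist /usr/share/nginx/html

    # docker-compose.yml (sketch)
    services:
      web:
        build:
          context: .
          dockerfile: Dockerfile.web
        ports:
          - "80:80"
      app:
        build: .                       # assumed Dockerfile that runs the Express API
        expose:
          - "3000"

Since the static files live inside the web image, the composed environment carries no external state, and docker-compose up always starts from a known-good build.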

Related

How to package MEAN stack application as docker container with nginx?

I've made an app using Angular, NestJS (Node.js) and MongoDB, and I'm wondering how to easily turn it into a single Docker container that contains an nginx server for serving the frontend app, a reverse proxy for the backend, and the MongoDB instance. It would also be nice to have an option for automatic Let's Encrypt certificate renewal.
Is there some pre-made package/template I could just clone, replace the app files in, and immediately run? If not, I'd appreciate at least a link to a guide on how to build such a container myself. It's probably not super hard, but I've never really built anything with Docker (I do have some basic knowledge of how it works) and my nginx experience is also very limited...
The expected result should be an all-in-one Docker app that I can easily hand to anyone, so they can run it with something like docker run -p 80:80 image.
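I'm not aware of a canonical template for this, but the nginx half of such a container usually boils down to a config like the one below: serve the compiled Angular bundle as static files and proxy API calls to the NestJS process in the same container. The port 3000 and the /api prefix are assumptions about the backend:

    # /etc/nginx/conf.d/default.conf (sketch)
    server {
        listen 80;

        # serve the compiled Angular app
        root /usr/share/nginx/html;
        index index.html;

        # SPA fallback: unknown paths resolve to index.html
        location / {
            try_files $uri $uri/ /index.html;
        }

        # reverse proxy API calls to the NestJS backend in the same container
        location /api/ {
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

Bundling MongoDB into the same image works for a demo, though most guides run it as a separate container; Let's Encrypt renewal is typically handled by a companion tool such as certbot rather than by nginx itself.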

Vue.js on local, NGINX docker container, how to save files with Node?

I apologize if this is a general question, but apparently what I'm googling doesn't make sense, and I don't have anyone at my work I can ask.
Here's my situation:
Simple Vue.js app created with the Vue CLI 3.5, with an <input type="file">. Nothing hooked up yet when the form submits.
Docker container with Node and NGINX that pulls in and builds my Vue app - this is working. I basically copied the Dockerfile right from the vue.js site.
Now I need to store the file from the web app (and eventually FTP it to another server).
It was suggested I use Node, but I'm new to all the server-side stuff, so I don't know how to do that. My container has both Node and NGINX in it running the app, so can I stick with one container? And how do I build locally for development and then transfer that to the Docker image?
I'm just looking for pointers/articles/tutorials to help me think about this the right way.
A word of warning: this is opinion.
In my experience NGINX is more of a router, managing traffic.
It is in the spirit of Node to have a single process per task, so I think it is useful to split those (see the sketch below).
Node is similar to "frontend" development, except that you have access to common system APIs, wrapped by Node.
Basically, just as you consume REST or GraphQL APIs on the frontend, you can do the same with Node.
The best approach is simply to peek into the basics you are most likely going to need:
file system, streams, ...
https://nodejs.org/api/
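To make the advice to split nginx and Node concrete, a minimal docker-compose sketch might look like this: nginx serves the built Vue app and forwards upload requests to a separate Node container, which writes the files to disk. Service names, ports, and paths are assumptions:

    # docker-compose.yml (sketch)
    services:
      nginx:
        image: nginx:alpine
        ports:
          - "80:80"
        volumes:
          - ./dist:/usr/share/nginx/html:ro                # the built Vue app
          - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro # proxies /api to the api service
      api:
        build: ./server                # assumed Node/Express app handling the upload
        expose:
          - "3000"
        volumes:
          - ./uploads:/data/uploads    # where uploaded files land

For local development you can run the Vue dev server (npm run serve) directly on your machine and only build the image when you want to ship; the Node container is unaffected either way.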

MEAN Stack using Docker containers

New to this...
I'm trying to understand if a modern MEAN app should be deployed with 3 or 2 Docker containers:
Option 1: Express Server as container + Mongo DB as container
Option 2: All three as separate Docker containers
The second option sounds like the appropriate path, so you can update any part of the stack without taking down the other components if you don't want to. But then the question is whether the ng app container needs its own server to serve the ng app files. I'm seeing some examples on GitHub where they run the ng app with ng serve -H 0.0.0.0 from the Docker container, which from my understanding is a no-no because that's not a production-ready server, just webpack's dev server.
To me, if you run all three separately, then you actually need two servers: one to serve the ng app (index.html, js, css, etc.) and the other to serve the backend app, the API.
The advantage I see of running the Express server + ng app in one container is that you can serve the initial index.html with the ng app's dependencies AND the API, but then they both go down whenever either gets updated.
What's the best practice here?
IMHO two containers seems like the better solution, with one for Mongo and one for Express. Any time you're pushing new code, it doesn't make sense to have the front end still up if the back end is down, or vice versa. Also, serving the front-end files from the same server reduces the headaches of dealing with CORS.
Regarding your other question, I think you can deploy your front end to something like AWS S3 and still only manage one server for your backend.
On a side note, you could also do it all in one container. It really depends on your other requirements to figure out the best architecture.
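As a sketch of the three-container option in compose terms (image tags, paths, and ports are assumptions, and the web image is assumed to be a multi-stage build that runs ng build and then serves dist/ with nginx instead of ng serve):

    # docker-compose.yml (sketch): one container per tier
    services:
      mongo:
        image: mongo:7
        volumes:
          - mongo-data:/data/db        # keep data across container restarts
      api:
        build: ./server                # the Express API
        environment:
          - MONGO_URL=mongodb://mongo:27017/app
        expose:
          - "3000"
      web:
        build: ./client                # multi-stage: ng build, then nginx serving dist/
        ports:
          - "80:80"
    volumes:
      mongo-data:

Using nginx (or any static file server) in the web image also answers the ng serve concern: the dev server never ships, only the compiled bundle does.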

running nodejs app inside go

I have a requirement. Is there a way to run Node.js apps inside Go? I need to wrap the Node.js app inside a Go application, so that in the end I get a Go binary that starts the Node.js server and can then call the Node.js REST endpoints. I need to encapsulate in the Go binary the entire Node.js application with its node_modules and, if necessary, the Node.js runtime.
Well, you could make a Go program that includes e.g. a zipped Node application that it extracts and starts, but it will be very hard to do well: you will have huge binaries, delays while extracting files, potential portability problems, etc. Usually when you want to call REST endpoints, you host your Node app on some server and let the client app (the Go app in your example) connect to it. The advantages are that it is much faster, the app is much smaller, you don't have portability issues with Node binaries and addons, and you can quickly update your backend any time you want.
It would be a very bad idea to embed a Node.js app into your Go binary, for various reasons: size, pushing security updates, etc.
However, if you feel so strongly that they should be together, you could easily create a Docker container with these two (a Go server + a Node app) and launch them via Docker. You can set the entrypoint to a supervisord daemon so that both the Node server and the Go server are brought up when your container is run.
If you are planning to deploy via Kubernetes, you can create two individual Docker containers (one for the Go server, one for the Node server) but always deploy them together as a pod.
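A sketch of the supervisord approach (file names and paths are assumptions, and the Go binary is assumed to be pre-built):

    # Dockerfile (sketch): one image running both servers under supervisord
    FROM node:20
    RUN apt-get update && apt-get install -y supervisor && rm -rf /var/lib/apt/lists/*
    WORKDIR /srv
    COPY node-app/ ./node-app/           # the Node.js app including node_modules
    COPY go-server ./go-server           # the pre-built Go binary
    COPY supervisord.conf /etc/supervisor/conf.d/app.conf
    CMD ["supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]

    ; supervisord.conf (sketch): start both processes and keep them up
    [program:node]
    command=node /srv/node-app/server.js
    autorestart=true

    [program:go]
    command=/srv/go-server
    autorestart=true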
There are multiple projects for embedding binary files and/or file-system data into your Go application.
Have a look at the 'Alternatives' section of the vfsgen project:
https://github.com/shurcooL/vfsgen#alternatives

Restricting access to mounted /var/run/docker.sock

I am currently developing a webapp using docker-compose and Docker. Currently, there is a front-end Nginx reverse proxy-server in one container and a Rails app in another container.
Sometimes, the Rails app needs to make changes to the Nginx configuration files. I've implemented this by mounting the configuration directory as a shared volume in both containers.
However, to force Nginx to reload its configuration files after the Rails app modifies it, it needs to send a HUP signal to the Nginx process. At the moment, I am implementing this by mounting the host's /var/run/docker.sock into the Rails app container and using a gem to ask the host Docker to send the signal to the right container.
This works fine but now I'm worried about security. If the Rails container is compromised, then the attacker will have root access to the host.
I thought about creating another container whose sole job is to broker access to the socket, exposing a limited API to the main Rails app. But then we run into the same problem of what happens when the broker is also compromised. Not only that, but surely there's an easier way?
I searched for some solutions to limit which APIs can be called on /var/run/docker.sock but I wasn't able to find any solutions.
Does anyone have any ideas? Perhaps there is some other way I can reload the Nginx configuration files without having to go through the Docker API?
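One way to drop the Docker socket entirely (a sketch, assuming the Rails app can touch a sentinel file on the shared config volume after each change): run the reload logic inside the nginx container itself, so no cross-container signalling is needed and the Rails container only ever gets write access to the volume:

    #!/bin/sh
    # entrypoint.sh (sketch): entrypoint for the nginx container.
    # The Rails container touches /etc/nginx/conf.d/.reload after editing configs.
    STAMP=/etc/nginx/conf.d/.reload
    nginx -g 'daemon off;' &

    last=""
    while true; do
      cur=$(stat -c %Y "$STAMP" 2>/dev/null)   # mtime of the sentinel, if it exists
      if [ -n "$cur" ] && [ "$cur" != "$last" ]; then
        last=$cur
        nginx -t && nginx -s reload            # reload only if the new config parses
      fi
      sleep 2
    done

With this, a compromised Rails container can at worst corrupt the nginx config (which nginx -t rejects), not drive the Docker API on the host.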
