A microservice relay between a docker container's bash and the final user - node.js

TL;DR: how to attach a docker container bash to a node.js stream?
I need to implement a relay between a docker container's bash and the final user. The app is a remote compile/run service for C/C++, Python and JS. Some references: repl.it, cpp.sh. To accomplish that, my plan is to:
Instantiate an Ubuntu docker container with the requirements for compiling and running the user code.
Send some bash commands to the container for compiling/running the user code.
And finally, relay the resulting output from the bash console to the user.
I've found some repos with interesting code: compilebox, dockerode and docker-api.
The 1st does the task using containers and some promise/async black magic to compile, pipe the output to a file and send it to the user through HTTP (GET/POST). My problem with this one is that I need to establish a shell-like environment for my user. My goal is to bring a bash window to the browser.
The 2nd and 3rd implement APIs based on the official HTTP Docker Engine API (I took v1.24 because that one has an overview for laymen like me). Both have examples of some sort of I/O stream between two entities, like the duplex stream example, but because of some implementation mistake the I/O doesn't work properly (Issue #455).
So my problem is: how do I attach a docker container's bash to a node.js stream? When it's done, everything the user types in the app in the browser is sent via HTTP to the bash container and the output is sent back as well.
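One possible way to wire this up, as a rough sketch: open an interactive bash exec session in the container with dockerode and pipe it to and from the browser. Neither dockerode nor the ws package is prescribed by the question (a WebSocket stands in here for the HTTP transport mentioned), and CONTAINER_ID and the port are placeholders:
const Docker = require('dockerode');
const { WebSocketServer } = require('ws');

const docker = new Docker({ socketPath: '/var/run/docker.sock' });
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws) => {
  const container = docker.getContainer(process.env.CONTAINER_ID);

  // Start an interactive bash process inside the running container.
  container.exec(
    { Cmd: ['/bin/bash'], AttachStdin: true, AttachStdout: true, AttachStderr: true, Tty: true },
    (err, exec) => {
      if (err) return ws.close();

      // hijack + stdin gives back a single duplex stream for the session.
      exec.start({ hijack: true, stdin: true }, (err2, stream) => {
        if (err2) return ws.close();

        // Browser -> container: forward whatever the user types to bash's stdin.
        ws.on('message', (data) => stream.write(data));

        // Container -> browser: with Tty: true the output is not multiplexed,
        // so it can be forwarded as-is.
        stream.on('data', (chunk) => ws.send(chunk.toString()));

        ws.on('close', () => stream.end());
      });
    }
  );
});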

Related

How can I emulate my host system terminal on the browser?

Aim:
I am trying to use my Linux terminal from my web application.
Setup
NodeJS
ReactJS
Ubuntu Server 20.04
Description
I want to create an application so that I can write commands in my browser, and NodeJS executes them on the backend and returns the output.
Now, I know this is very simple using a spawned process: I can await the response and then send it back to the frontend, but I want it to be dynamic, with the data sent in real time. So if an apt install command is being executed, I could see the stdout in real time in my browser window. How can I achieve that? Even if I use a web socket, how do I receive output from the terminal in real time, because afaik a spawned process returns the output all at once.
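For what it's worth, spawn (unlike exec) does not return the output all at once: it emits stdout/stderr chunks as the process produces them, so they can be forwarded to the browser as they arrive. A minimal sketch, assuming the ws package; the port and the way commands arrive are assumptions:
const { spawn } = require('child_process');
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 4000 });

wss.on('connection', (ws) => {
  ws.on('message', (raw) => {
    // Run the received command line through bash.
    const child = spawn('bash', ['-c', raw.toString()]);

    // Each chunk is pushed to the browser the moment it is printed.
    child.stdout.on('data', (chunk) => ws.send(chunk.toString()));
    child.stderr.on('data', (chunk) => ws.send(chunk.toString()));
    child.on('close', (code) => ws.send(`\n[exited with code ${code}]`));
  });
});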

Node.js I/O streams: piping output all the way back to web server

A little bit of background
Below you will find a diagram of the relationship between the different components of a Node app I'm currently working on. Here is the link on GitHub. It is an application that I use to archive videos of strong journalistic importance; between the moment I watch them and the moment I get the time to use them for my reports, they are usually removed from YouTube. By archiving them, this information no longer gets lost.
What I'm trying to achieve in plain English
download_one_with_pytube.py is basically a piece of code that downloads a video given an id, and it reports the progress of the download by printing the percentage to the console.
What I'm trying to achieve in terms of output piping
Here is a pseudo shell set of piped commands
Array of IDs of videos | for each URL | python download(video Ids) | print progress | response.send(progress)
The difficulty I have is actually spawning the python code, passing it the video id dynamically, and then piping the progress all the way back to the server's response.
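A minimal sketch of that chain, assuming Express and that the script prints progress lines to stdout; the route and port follow the question, while the hard-coded ids are placeholders:
const express = require('express');
const { spawn } = require('child_process');

const app = express();

app.get('/startdownload', (req, res) => {
  const videoIds = ['id1', 'id2']; // placeholder ids

  // Download one video after another, forwarding every progress line to res.
  const runNext = (i) => {
    if (i >= videoIds.length) return res.end('all done\n');

    const py = spawn('python', ['download_one_with_pytube.py', videoIds[i]]);

    // res.write flushes each chunk to the client as soon as it arrives.
    py.stdout.on('data', (chunk) => res.write(chunk));
    py.stderr.on('data', (chunk) => res.write(chunk));
    py.on('close', () => runNext(i + 1));
  };

  runNext(0);
});

app.listen(3333);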
Resources I've consulted & Stuff I tried
I tried, the whole day yesterday, without success, implementing my own classes inheriting from EventEmitter, and even implementing my own duplex stream class, to pipe that output all the way back to my express web server so that the progress can be served to the browser.
Advanced Node.js | Working with Streams : Implementing Readable and Writable Streams
Util | Node.js v9.3.0 Documentation
How to create duplex streams with Node.js - a Nodejs programming tutorial for web developers | CodeWinds
class Source extends Readable
Pipe a stream to a child process · Issue #4374 · nodejs/node-v0.x-archive
Developers - Pipe a stream to a child process
Deferred process.exit() to allow STDOUT pipe to flush by denebolar · Pull Request #1408 · jsdoc3/jsdoc
The problem
I think the problem is that I get confused about the direction the pipes should take.
What I've managed so far
All I've managed to do is 'pipe' the output of the python script back to downloadVideos.js
How the app is structured
Through express (server.js in the diagram), I exposed my node app (running through a forever daemon) so that devices on the same LAN as the server can access [server IP address]:3333/startdownload and trigger the app execution.
Looking at concrete lines of code in my repo
How can I pipe the output of this console.log here all the way back to the server at this line of code here?
A simple working example using Node's included http
I've got a GIST here of a running http server that illustrates what I'm trying to achieve. However, because my app's architecture is more real-world than this simple example, I have several files and require statements between the output I'm trying to pipe and the res.send statement.
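One way to bridge those intermediate files is to have the downloading module hand back the child's stdout stream instead of console.log-ing it, so whoever holds the response can pipe into it. A sketch under that assumption (the downloadVideo function name is illustrative, not the repo's actual API):
// downloadVideos.js (sketch): return the child's stdout instead of logging it.
const { spawn } = require('child_process');

function downloadVideo(videoId) {
  const py = spawn('python', ['download_one_with_pytube.py', videoId]);
  return py.stdout; // a Readable stream of progress lines
}

module.exports = { downloadVideo };

// server.js (sketch): the layer that owns the response pipes the stream in.
// const { downloadVideo } = require('./downloadVideos');
// app.get('/startdownload', (req, res) => {
//   downloadVideo(req.query.id).pipe(res);
// });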
Conclusion
I really appreciate any help anyone can provide me on this.
We could code together live using Cloud9 shared workspaces, making this process easier.
Here is the link to the application, but I would have to send an invite for it to be accessible, I guess.

Start a node app from another node app run inside docker container

I have a node app that exposes a REST API. When it receives an http request, it starts another/different node app, let's call it the 'service app'.
The REST app runs inside a container, and the easiest way to start the service app is to just call child_process.exec (we just use pm2 though), but then they run inside the same container. If the REST app gets multiple requests, this one-container solution just won't scale.
So is it possible for the REST app to start the service app inside its own container? If yes, how do I do that?
Someone also suggested running my REST app in docker swarm so that when it gets the request it just starts another docker service for the service app. But I have no idea how to do that, or even whether it is possible.
I am new to docker, any advice is highly appreciated. Thanks!
You can control docker from inside a container by, for example, bind mounting the /var/run/docker.sock file into the container itself (-v flag to docker run). But be very careful: if someone gains access to it, it is more or less equal to giving them root access to the machine. The safest way would be to create a 2nd REST app that runs in a separate container and can start new containers when asked. Then you could just invoke it from the 1st app and be sure that it will only start a container with your app and nothing else.
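A rough sketch of that setup, assuming the container was started with -v /var/run/docker.sock:/var/run/docker.sock and that the dockerode package is used; the image name and options are placeholders:
const Docker = require('dockerode');
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

// Called by the REST app (or the suggested 2nd app) for each request.
async function startServiceApp(jobId) {
  const container = await docker.createContainer({
    Image: 'service-app:latest',        // placeholder image
    Cmd: ['node', 'index.js'],
    Env: [`JOB_ID=${jobId}`],
    HostConfig: { AutoRemove: true },   // clean the container up when it exits
  });
  await container.start();
  return container.id;
}

module.exports = { startServiceApp };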

Is it possible to limit the docker daemon to only building images (and not running containers)?

Rationale:
I am using Docker in Docker (dind) with the --privileged flag in my CI to build images out of source code. I only need the build, tag, pull, and push commands and want to avoid all other commands such as run (considered the root of all security problems).
Note: I just want to restrict Docker's remote API and not the daemon itself!
My best options so far:
As Docker clients communicate with dind over HTTP (and not a socket), I thought I could put a proxy in front of the dind host and filter the paths (e.g. POST /containers/create) to limit API access to building/pushing images only.
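If the proxy ends up being written in Node, a sketch of that filtering could look like this (the http-proxy package, the dind address and the exact allow-list are assumptions):
const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({ target: 'http://dind:2375' });

// Only the endpoints needed for build, pull, tag and push get through.
const ALLOWED = [
  /^\/build/,
  /^\/images\/create/,       // pull
  /^\/images\/[^/]+\/tag/,   // tag
  /^\/images\/[^/]+\/push/,  // push
];

http.createServer((req, res) => {
  const path = req.url.replace(/^\/v[0-9.]+/, ''); // strip the /v1.24 style prefix
  if (ALLOWED.some((re) => re.test(path))) {
    proxy.web(req, res);
  } else {
    res.writeHead(403);
    res.end('blocked by API proxy\n');
  }
}).listen(2376);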
What I want to avoid:
I would never ever bind mount the docker socket on the host machine!
Update:
It seems that the API routers are hardcoded in the Docker daemon.
Update 2:
I went with my best option so far and configured an nginx server which blocks specific paths (e.g. /containers). This works fine for building images, as the build is done inside the dind image and my API restrictions don't break the build process.
HOWEVER: this looks really ugly!
Docker itself doesn't provide any low level security on the API. It's basically an on or off switch. You have access to the entire thing, or not.
Securing API endpoints would require modifying Docker to include authentication and authorisation at a lower granularity or, as you suggested, adding an API proxy in between that implements your security requirements.
Something you might want to look at is Osprey from Mulesoft. It can generate API middleware including authentication mechanisms from a simple RAML definition. I think you can get away with documenting just the components you want to allow through...
#%RAML 0.8
title: Yan Foto Docker API
version: v1
baseUri: https://dind/{version}
securitySchemes:
  - token_auth:
      type: x-my-token
securedBy: [token_auth]
/build:
  post:
    queryParameters:
      dockerfile: string
      t: string
      nocache: string
      buildargs: string
/images:
  /{name}:
    /tag:
      post:
        queryParameters:
          tag: string
Osprey produces the API middleware for you, controlling everything; then you proxy anything that gets through the middleware to Docker.
You could use OAuth 2.0 scopes if you want to get fancy with permissions.
The docker client is a bit dumb when it comes to auth, but you can attach custom http headers to each request, which could include a key. config.json can configure HttpHeaders.
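For reference, the client side of that would be an HttpHeaders entry in ~/.docker/config.json; the header name and value below are placeholders:
{
  "HttpHeaders": {
    "X-API-Token": "replace-with-your-token"
  }
}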
From a theoretical perspective, I believe the answer is no. When you build an image, many of the build steps will create and run a container with your requested command. So if you manage to disable running containers, the side effect should be to also disable building images. That said, if you restrict running docker commands to a trusted user, and that user builds an untrusted Dockerfile, the result of that build should be isolated to the container as long as you aren't removing container protections with various CLI options.
Edit: I haven't had the time to play with it myself, but Twistlock may provide the functionality you need without creating and relying on an API proxy.

restart nodejs server programmatically

User case:
My nodejs server starts with a configuration wizard that allows the user to change the port and scheme, and even update the express routes.
Question:
Is it possible to apply such configuration changes on the fly? Restarting the server can definitely bring all the changes online, but I'm not sure how to trigger it from code.
Changing core configuration on the fly is rarely practiced. Node.js and most http frameworks do not support it at this point either.
Modifying the configuration and then restarting the server is a completely valid solution, and I suggest you use it.
To restart the server programmatically you have to execute logic outside of node.js, so that this process can continue once the node.js process is killed. Given that you are running the node.js server on Linux, a Bash script sounds like the best tool available to you.
The implementation will look something like this (sketched in code after the list):
Client presses a switch somewhere on your site powered by node.js
Node.js then executes some JavaScript code which instructs your OS to execute some bash script, let's say it is script.sh
script.sh restarts node.js
Done
If any of the steps are difficult, ask about them. Though step 1 is something you are likely handling yourself already.
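A sketch of steps 2-4, assuming an Express endpoint triggers the restart; the contents of script.sh (and the use of Express at all) are assumptions:
const express = require('express');
const { spawn } = require('child_process');

const app = express();

app.post('/restart', (req, res) => {
  res.send('restarting...');

  // Detach script.sh so it keeps running after this node process exits.
  const child = spawn('bash', ['script.sh'], { detached: true, stdio: 'ignore' });
  child.unref();

  // script.sh would do something like: sleep 1 && node server.js &
  setTimeout(() => process.exit(0), 500);
});

app.listen(3000);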
I know this question was asked a long time ago but since I ran into this problem I will share what I ended up doing.
For my problem I needed to restart the server, since the user is allowed to change the port of their website. What I ended up doing is wrapping the whole server creation (https.createServer/server.listen) in a function called startServer(port). I call this function at the end of the file with a default port. The user changes the port by accessing the endpoint /changePort?port=3000. That endpoint calls another function, restartServer(server, res, port), which calls startServer(port) with the new port and then redirects the user to the site on the new port.
Much better than restarting the whole nodejs process.
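A simplified sketch of that approach; the restartServer helper is folded into the route handler here, and plain http stands in for the https.createServer/certificate setup the answer describes:
const http = require('http');
const express = require('express');

const app = express();
let server;

function startServer(port) {
  server = http.createServer(app);
  server.listen(port, () => console.log(`listening on ${port}`));
}

app.get('/changePort', (req, res) => {
  const port = Number(req.query.port);
  server.close();                       // stop accepting connections on the old port
  startServer(port);                    // same express app, new port
  res.redirect(`http://${req.hostname}:${port}`);
});

startServer(3000); // default port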

Resources