I have a Node app that exposes a REST API. When it receives an HTTP request, it starts another, separate Node app; let's call it the 'service app'.
The REST app runs inside a container, and the easiest way to start the service app is to just call child_process.exec (we actually just use pm2), but then both apps run inside the same container. If the REST app gets multiple requests, this one-container solution just won't scale.
So is it possible for the REST app to start the service app inside its own container? If yes, how do I do that?
Someone also suggested running my REST app in Docker Swarm, so that when it gets a request it just starts another Docker service for the service app. But I have no idea how to do that, or even whether it is possible.
I am new to Docker; any advice is highly appreciated. Thanks!
You can control Docker from inside a container, for example by bind-mounting the /var/run/docker.sock file into the container itself (the -v flag to docker run). But be very careful: if someone gains access to that socket, it is more or less equal to giving them root access to the machine. The safest way would be to create a second REST app that runs in a separate container and can start new containers when asked. Then you could just invoke it from the first app and be sure that it will only start containers with your app and nothing else.
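As a rough sketch of the first approach: with the socket bind-mounted (docker run -v /var/run/docker.sock:/var/run/docker.sock ...), the REST app can start a sibling container on the host through the dockerode client. The image name, env variable, and options below are assumptions, not something from the question:

const Docker = require('dockerode');

// Talks to the Docker daemon through the bind-mounted socket.
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

// Start one 'service app' container per incoming request.
async function startServiceApp(jobId) {
  const container = await docker.createContainer({
    Image: 'service-app',                 // hypothetical image name
    Env: [`JOB_ID=${jobId}`],             // pass whatever the job needs
    HostConfig: { AutoRemove: true },     // remove the container when it exits
  });
  await container.start();
  return container.id;
}

Note that this starts a sibling container next to the REST app's container, not a container nested inside it.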
Related
Hi, I currently have one container running my frontend application, which includes a server-side part written in Node.js and a client-side part written in React. To run the entire application I have to run three scripts:
CLIENT: one for building and watching the client-side code
SERVER: one for building and watching the server-side part
START: one to start the Node application
I've just created a Docker container to build and start my whole application, but I need a way to run these three watcher commands with separate log output. How can I achieve this?
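One possible approach (only a sketch; the npm script names client:watch, server:watch, and start are placeholders) is a single Node entry point that spawns the three scripts and prefixes each output line, so the container needs just one CMD:

const { spawn } = require('child_process');

const jobs = [
  { name: 'CLIENT', args: ['run', 'client:watch'] },
  { name: 'SERVER', args: ['run', 'server:watch'] },
  { name: 'START',  args: ['run', 'start'] },
];

for (const { name, args } of jobs) {
  const child = spawn('npm', args, { stdio: ['ignore', 'pipe', 'pipe'] });
  // Prefix every stdout/stderr line with the job name so the logs stay separable.
  const log = (chunk) =>
    chunk.toString().split('\n').filter(Boolean)
      .forEach((line) => console.log(`[${name}] ${line}`));
  child.stdout.on('data', log);
  child.stderr.on('data', log);
  child.on('exit', (code) => console.log(`[${name}] exited with code ${code}`));
}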
I have a website hosted on Heroku and Firebase (front end in React and back end in Node.js), and I have some "long running scripts" that I need to perform. I had the idea of deploying a Node process to my Raspberry Pi to execute these (because I need resources from inside my network).
How would I set this up securely?
I think I need to create a Node.js process that regularly checks the central server for any jobs to be done. Can I use sockets for this? What technology would you guys use?
I think the design would be:
1. Local agent starts and connects to server
2. Server sends messages to agent, or local agent polls with time interval
EDIT: I have multiple users that I would like to serve. The user should be able to "download" the agent and set it up so that it connects to the remote server.
You could just use Firebase for this, right? Create a new Firebase DB for "tasks" or whatever that is only accessible to you. When the central server (whatever that is) determines there's a job to be done, it adds it to your tasks DB.
Then you write a simple Node app you can run on your Raspberry Pi that starts up, authenticates with Firebase, and listens for updates on your tasks database. When a task is added, it runs your long-running task, then removes that task from the database.
Wrap it up in a bash script that'll automatically run it again if it crashes, and you've got a super simple pub/sub setup without needing to expose anything on your local network.
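A minimal sketch of that Pi agent, assuming the firebase-admin SDK, a Realtime Database with a tasks node, and a service-account key file (the project URL, file name, and task handler are placeholders):

const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.cert(require('./service-account.json')),
  databaseURL: 'https://your-project.firebaseio.com',   // placeholder project URL
});

const tasksRef = admin.database().ref('tasks');

// Fires for every task already queued and for each new one that gets added.
tasksRef.on('child_added', async (snapshot) => {
  const task = snapshot.val();
  try {
    await runLongTask(task);        // your long-running work goes here
  } finally {
    await snapshot.ref.remove();    // remove the task once it has been handled
  }
});

async function runLongTask(task) {
  console.log('running task', task);
}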
I have deployed a Node.js app to my AWS Elastic Beanstalk instance (with MySQL inside EB too), but my Node.js app does not create any server; it is just a background task: a couple of websockets that I want to keep connected 24/7 to save data into MySQL.
It seems to be working, but maybe it is not safe to do that, because AWS is showing some warnings saying the HTTP requests are not working. Which is obvious, but I'm not sure whether it could be a side effect; I want to be sure my Node.js + MySQL app will be running 24/7 forever.
It's totally safe to do that.
My guess is that the warning you see is because Beanstalk is trying to guess whether your environment is healthy or not.
Maybe you can expose an endpoint that returns 200 OK and set up the monitoring to check that URL.
Another way, not recommended, is to disable the monitoring.
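A minimal sketch of such an endpoint, run alongside the websocket workers (the port and the /health path are assumptions; point the Beanstalk health check at that URL):

const http = require('http');

http
  .createServer((req, res) => {
    if (req.url === '/health') {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('OK');                // Beanstalk's monitor sees a healthy 200
    } else {
      res.writeHead(404);
      res.end();
    }
  })
  .listen(process.env.PORT || 8081);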
TL;DR: how do I attach a Docker container's bash to a Node.js stream?
I need to implement a relay between a Docker container's bash and the end user. The app is a remote compile/run service for C/C++, Python, and JS. Some references: repl.it, cpp.sh. To accomplish that, my plan is to:
Instantiate an Ubuntu Docker container with the requirements for compiling and running the user code.
Run some bash commands to compile/run the user code.
And finally, forward the output from the bash console to the user.
I've found some repos with interesting code: compilebox, dockerode, and docker-api.
The first does the task using containers and some promise/async black magic to compile, pipe the output to a file, and send it to the user over HTTP (GET/POST). My problem with this one is that I need to establish a shell-like environment for my user. My goal is to bring a bash window to the browser.
The second and third implement APIs based on the official Docker Engine HTTP API (I took v1.24 because it has an overview for laymen like me). Both have examples of some sort of I/O stream between two entities, like the duplex stream, but because of an implementation mistake the I/O doesn't work properly (Issue #455).
So my problem is: how do I attach a Docker container's bash to a Node.js stream, so that when it's done, everything the user types in the app in the browser is sent via HTTP to the bash in the container and the output is sent back the same way?
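A minimal sketch of that attachment with dockerode, assuming an already-running container. Here the bash session is simply piped to the local terminal for illustration; in the real app the same stream would be piped to a websocket connected to the browser:

const Docker = require('dockerode');
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

async function attachBash(containerId) {
  const container = docker.getContainer(containerId);
  const exec = await container.exec({
    Cmd: ['/bin/bash'],
    AttachStdin: true,
    AttachStdout: true,
    AttachStderr: true,
    Tty: true,
  });

  // hijack + stdin gives a bidirectional stream to the bash process.
  const stream = await exec.start({ hijack: true, stdin: true });

  // With Tty: true the stream is not multiplexed, so plain pipes work.
  process.stdin.pipe(stream);
  stream.pipe(process.stdout);
}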
Rationale:
I am using Docker-in-Docker (dind) with the --privileged flag in my CI to build images from source code. I only need the build, tag, pull, and push commands, and I want to avoid all other commands, such as run (considered the root of all security problems).
Note: I just want to restrict Docker's remote API and not the daemon itself!
My best option so far:
As Docker clients communicate with dind over HTTP (and not a socket), I thought I could put a proxy in front of the dind host and filter all the paths (e.g. POST /containers/create) to limit API access to building/pushing images only.
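A sketch of such a filtering proxy in Node using the http-proxy package (the dind address, listening port, and the list of allowed paths below are assumptions):

const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({ target: 'http://dind:2375' });

// Only the endpoints needed for build/tag/pull/push; everything else is refused.
const allowed = [
  /^(\/v[\d.]+)?\/build/,       // POST /build
  /^(\/v[\d.]+)?\/images\//,    // /images/create (pull), /images/{name}/tag, /images/{name}/push
  /^(\/v[\d.]+)?\/_ping$/,      // client health check
];

http
  .createServer((req, res) => {
    if (allowed.some((re) => re.test(req.url))) {
      proxy.web(req, res);
    } else {
      res.writeHead(403, { 'Content-Type': 'text/plain' });
      res.end('blocked by API proxy\n');
    }
  })
  .listen(2376);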
What I want to avoid:
I would never ever bind mount the docker socket on the host machine!
Update:
It seems that the API routers are hardcoded in the Docker daemon.
Update 2:
I went with my best option so far and configured an nginx server which blocks specific paths (e.g. /containers). This works fine for building images, as the build is done inside the dind image and my API restrictions don't break the build process.
HOWEVER: this looks really ugly!
Docker itself doesn't provide any low-level security on the API. It's basically an on or off switch: you have access to the entire thing, or not.
Securing API endpoints would require modifying Docker to include authentication and authorisation at a finer granularity, or, as you suggested, adding an API proxy in between that implements your security requirements.
Something you might want to look at is Osprey from MuleSoft. It can generate API middleware, including authentication mechanisms, from a simple RAML definition. I think you can get away with documenting just the components you want to allow through:
#%RAML 0.8
title: Yan Foto Docker API
version: v1
baseUri: https://dind/{version}
securitySchemes:
  - token_auth:
      type: x-my-token
securedBy: [token_auth]
/build:
  post:
    queryParameters:
      dockerfile: string
      t: string
      nocache: string
      buildargs: string
/images:
  /{name}:
    /tag:
      post:
        queryParameters:
          tag: string
Osprey produces the API middleware for you, controlling everything; then you proxy anything that gets through the middleware on to Docker.
You could use OAuth 2.0 scopes if you want to get fancy with permissions.
The Docker client is a bit dumb when it comes to auth, but you can attach custom HTTP headers to each request, which could include a key. config.json can configure HttpHeaders.
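For example, ~/.docker/config.json accepts an HttpHeaders map that the client then sends with every API call; the header name and value below are placeholders:

{
  "HttpHeaders": {
    "X-API-Key": "my-secret-token"
  }
}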
From a theoretical perspective, I believe the answer is no. When you build an image, many of the build steps create and run a container with your requested command, so if you manage to disable running containers, the side effect should be to also disable building images. That said, if you restrict access to running docker commands to a trusted user, and that user builds an untrusted Dockerfile, the result of that build should be isolated to the container, as long as you aren't removing container protections with various CLI options.
Edit: I haven't had the time to play with it myself, but Twistlock may provide the functionality you need without creating and relying on an API proxy.