I am working on an online Python compiler project. When a user sends a Python script, the server will execute it. What I want to do is create a sandbox with a virtual filesystem and execute the script inside it. The sandbox should be isolated from the real server's filesystem, but Node.js should still be able to control the sandbox's stdin and stdout.
How can I make this possible?
Docker is a great way to sandbox things.
You can run
docker run --network none -i python:3
from your node.js server. Look at the other switches of docker run to plug as many security holes as possible.
The shtick is, you run the docker command from your Node.js server and pass the user's Python code via stdin (hence the -i flag above, which keeps the container's stdin open).
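For example, here is a minimal Node.js sketch of that idea. The image and --network none come from the answer above; the memory/pids limits and the function shape are my own assumptions:

const { spawn } = require('child_process');

function runUserCode(code, callback) {
  const docker = spawn('docker', [
    'run', '--rm', '-i',        // -i keeps stdin open so we can pipe the script in
    '--network', 'none',        // no network access inside the sandbox
    '--memory', '128m',         // assumed resource limit; tune to taste
    '--pids-limit', '64',       // assumed guard against fork bombs
    'python:3', 'python', '-'   // "python -" reads the program from stdin
  ]);

  let stdout = '', stderr = '';
  docker.stdout.on('data', d => stdout += d);
  docker.stderr.on('data', d => stderr += d);
  docker.on('close', exitCode => callback(exitCode, stdout, stderr));

  docker.stdin.write(code);     // send the user's Python source
  docker.stdin.end();
}

runUserCode('print("hello from the sandbox")', (code, out, err) => console.log({ code, out, err }));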
Now, if your Node.js server is on one machine and the sandbox should run on another machine, you tell docker to connect to the other machine using the DOCKER_HOST environment variable.
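For example (the hostname is a placeholder, and the remote daemon must actually be configured to listen on TCP, ideally secured with TLS):

export DOCKER_HOST=tcp://sandbox-host:2375
docker run --network none -i python:3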
Docker containers wrap up the software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries — basically anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
This might be worth reading: https://instabug.com/blog/the-difference-between-virtual-machines-and-containers/
What's the point of having Node.js and Vue.js installed on my host and then also getting a Node/Vue image for Docker? Every Vue.js tutorial says to install Node and Vue on the host first and then get the Docker image; is this not redundant?
Examples:
https://morioh.com/p/3021edac7ef1
https://jonathanmh.com/deploying-a-vue-js-single-page-app-including-router-with-docker/
https://mherman.org/blog/dockerizing-a-vue-app/
I'm using a Windows 10 host and was trying to avoid installing Node and Vue on Windows if possible, unless there are particular advantages to doing so, which hopefully someone can enumerate. Otherwise, maybe someone can confirm that installing Node/Vue on the host as well is redundant, and explain why.
Like you say, it is redundant but easier. A container is a running instance of an image, an image that was (probably) created using a Dockerfile with the build instructions. So how would you go about doing everything from the container?
Would you add the creation of the app to the Dockerfile, or would you connect to the container using bash and run the commands from there? If you connect with bash, you'll lose everything once you remove the container. And once your app is created inside your container, how would you get it out? I mean, you need to write your app's code somewhere. You could store your data using Docker volumes, but that gets complicated depending on where you are running Docker. For example, on Mac a virtual machine is created for Docker, so to find that data you'd need to connect to the virtual machine...
It is just easier to do all of that from your local machine and use docker to host your app.
I am using Docker and trying to enable Kubernetes and set CPU and memory limits via the command line.
I have looked at this answer but unfortunately cannot find this file.
Is there any way to enable Kubernetes on Docker for Mac via terminal?
Docker does not have an app-ified version for Linux that I know of, so there is no Linux counterpart to the Docker for Mac/Windows app. There are many tools for locally installing Kubernetes on Linux, so they probably didn't see much reason to make something new. Minikube is the traditional one, but you can also check out microk8s, k3s, KinD, and many others.
I am currently hosting my database for free on OpenShift and have my program running on a Linux box on my local server. I need to pass the data from the program to my OpenShift database. I want to run the Linux box headless.
To do this I run the command:
rhc port-forward -a webapp
My question is: how can I run this command permanently without it timing out (perhaps with some check to see whether the process is still running?) and without a terminal attached (as a background process)?
You could add that command to the startup settings of your Linux computer: a systemd configuration, or an init one (details depend upon your particular distribution and system). See systemd(1) and/or sysvinit.
You could also use crontab(5). It is mostly used for periodic tasks, but it also works for run-once tasks, through an @reboot entry.
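For example, a crontab entry along these lines (the log path is just an illustration):

@reboot rhc port-forward -a webapp >> $HOME/port-forward.log 2>&1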
Lastly, you might use batch facilities; look into at(1) (and batch).
Perhaps you just want nohup(1) (or screen(1)...).
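For instance (the log file name is arbitrary):

nohup rhc port-forward -a webapp > port-forward.log 2>&1 &

This keeps the command running after you close the terminal, but it will not survive a reboot; combine it with an @reboot entry if you need that too.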
I have been using the grunt-open package to open my browser when I build my project. Recently I began to use Docker, and that works perfectly, but the grunt-open task doesn't work anymore.
Is there some way to create a bridge between my Docker container and my local machine so grunt-open can open my browser?
There is no way to open an external browser if you are running or building your project inside a Docker container. The idea of using Docker is to have all the tools you need inside the container.
You can use a GUI-less browser like PhantomJS and run the grunt-open task inside the Docker container.
There is no "automatic" way - you would need to have some kind of listener on your local machine. So you can't really use grunt-open from the container but there are any number of ways you could have the grunt task in the container send a call to your local machine which could use grunt-open (or npm-open which it's a wrapper for, or opn which npm-open is a wrapper for) -- or a simple shell script.
I have a Node.js application that I want to run on a Raspberry Pi.
And I'd like to be able to deploy new versions of my application, as well as new versions of Node.js, to that Raspberry Pi remotely.
Basically, something such as:
$ pi-update 192.168.0.37 node#0.11.4
$ pi-update 192.168.0.37 my-app#latest
I don't have any preferences on how to transfer my app to the Pi, be it pushing or pulling; I don't care (although I should add that the code for the application is available in a private GitHub repository).
Additionally, once Node.js and/or my app have been deployed, I want the potentially running Node.js app to restart.
How could I do this? Which software should I look into? Is this something that can easily be done using tools from Raspbian, or should I look for 3rd party software (devops tools, such as Chef & co.), or ...?
Any help is greatly appreciated :-)
a) For running the script continuously, you can use tools like forever or pm2; otherwise you can also make the app a Debian daemon on Raspbian that you run with sudo service <servicename> start (if you're running Arch Linux, this is handled differently, I guess).
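For example, with pm2 (app.js stands in for your actual entry point):

pm2 start app.js --name my-app
pm2 startup   # prints a command that registers pm2 as a boot service
pm2 save      # saves the current process list so it is restored on reboot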
b) If your Raspberry Pi is reachable from the internet, you can use a GitHub hook (see the API documentation) that fires every time you push a change to your repository. This hook is basically a URL endpoint on your Pi that runs a little shell script locally.
This script should shut down your app gracefully, do a git pull for your repository, and start the app/service again. You could also trigger this shell script over SSH from your local machine, e.g. ssh pi@192.168.0.37 /path/to/your/script
An update script could look like this:
# change the 'service' command to your script runner of choice
service <yourapp> stop
cd /path/to/your/app
git pull
service <yourapp> start
c) The problem with remotely updating Node itself is that official binary builds for the Raspberry Pi appear only very irregularly; otherwise it would be easy to just download/update the binaries with wget or curl. So most of the time you either need to cross-compile Node on your own machine or spend about two hours recompiling it on your Pi. If you want to go with the unofficial builds on GitHub, you can install them with
curl -# -L https://gist.github.com/raw/3245130/v0.10.17/node-v0.10.17-linux-arm-armv6j-vfp-hard.tar.gz | tar xzvf - --strip-components=1 -C /usr/local
but you need to check the file name for every release.
Look no further than resin.io. All you need to do is flash your RPi with their image and then git push your project. resin.io will compile your code and its dependencies for your device's architecture and send the result to your device(s) (in a Docker container).
You can create a very simple continuous integration scheme using supervisor, which does two things:
keeps your process running even if it fails,
and restarts your process if any of the files changes.
Keeping your app updated then becomes simple: you just have to run the commands git pull; npm install. When new code is downloaded (or node modules change), supervisor will restart the app automatically for you.
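For example, with the supervisor package from npm (app.js is assumed to be your entry point):

npm install -g supervisor
supervisor app.js

By default it watches the current directory and restarts the process when files change or when it crashes.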
If the Raspberry Pi is visible from the internet you can use a GitHub webhook, pointing it to a very simple page that runs the commands git pull; npm install using child_process.exec(). (One important note: use a non-trivial URL (with a code or something) so that people don't run into it by mistake.) Otherwise just run those commands from the crontab every hour or so, for instance.
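A minimal sketch of such a page, with a made-up secret path and an assumed app directory of /home/pi/my-app (both placeholders):

const http = require('http');
const { exec } = require('child_process');

http.createServer((req, res) => {
  if (req.url !== '/hooks/4f3c2a-redeploy') {  // hypothetical non-trivial URL
    res.statusCode = 404;
    return res.end();
  }
  exec('git pull && npm install', { cwd: '/home/pi/my-app' }, (err) => {
    res.end(err ? 'update failed\n' : 'updated\n');
  });
}).listen(8080);

Once the files change, the supervisor setup from above takes care of restarting the app.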
As for updating Node.js itself, I would use the official Debian package, either from testing or from unstable. Otherwise you would have to create a private repo to host your own packages, which probably is not worth the hassle, but it is doable.
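For example, assuming your sources.list includes the testing (or unstable) suite:

sudo apt-get -t testing install nodejs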