Docker: How to run grunt-open?

I have been using the grunt-open package to open my browser when I build my project. Recently I started using Docker, and the build itself works perfectly, but the grunt-open task doesn't work anymore.
Is there some way to create a bridge between my Docker container and my local machine so that grunt-open can open my browser?

There is no way to open an external browser if you are running or building your project inside a Docker container. The idea of using Docker is to have all the tools you need inside the container.
You can use a headless (GUI-less) browser like PhantomJS and run the grunt-open task inside the Docker container.

There is no "automatic" way - you would need to have some kind of listener on your local machine. So you can't really use grunt-open from the container but there are any number of ways you could have the grunt task in the container send a call to your local machine which could use grunt-open (or npm-open which it's a wrapper for, or opn which npm-open is a wrapper for) -- or a simple shell script.

Related

Yarn is slow and freezes when run via docker exec

I recently started using Docker (the desktop version for Windows) for my Node project development. I have a docker-compose file with a volume configuration to share the project source files between my host machine and the Docker container.
When I need to install a new node module, I can't do that on my host machine, of course, because it's Windows and Docker runs Linux (or something like that), so I run docker exec -it my-service bash to "get into" the Docker container and then run yarn add something from inside it. The problem is that yarn runs extremely slowly and freezes almost every time. The Docker container then becomes unresponsive: I cannot cancel the yarn command or stop the container using docker-compose stop. The only way I've found to recover is to restart the whole Docker engine.

So then, to finally install the new module after the Docker engine restarts, I delete the node_modules folder and repeat the same steps. This time it's still extremely slow, but somehow it doesn't freeze and actually installs the new module. But after some time, when I need to do this again, it freezes again and I have to delete node_modules again...
I would like to find the reasons why the yarn command is so slow and why it freezes.
I'm new to docker, so maybe my workflow is not correct.
I tried increasing the RAM limit for the Docker engine from 2 GB to 8 GB and the CPU limit from 1 to 8, but it had absolutely no effect on yarn's behavior.
My project uses file watching with chokidar, so I thought that might be causing the problem, but disabling it had no effect either.
I also thought the problem could be the file-sharing mechanism between the host machine (Windows) and the Docker container, but if that is the case, I do not know how to fix it. I suppose I would then have to somehow separate node_modules from the source directory and keep it private to the Docker container, so that it is not shared with the host machine.
This is quite a severe problem, as it slows development down a lot. Please share any ideas about what could be wrong. I would even consider switching my development environment to Linux if the problem turns out to be caused by the file-sharing mechanism between Windows and the Docker container.

Python Script to run commands in a Docker container

I am currently working on automating commands for a Docker container with a Python script on the host machine. For now, this Python script builds and runs a docker-compose file, with the commands for the containers written into the docker-compose file and the Dockerfile itself.
What I want is for the Python script to drive all the commands that run inside the container, so that if I have different scripts to run, I am not changing the container. I have tried two approaches.
The first was to run an os.system() command from the Python script; however, this only gets as far as opening a shell for the container - the os.system() call does not execute code inside the Docker container itself.
The second uses CMD in the Dockerfile; however, this is limited and hard-coded into the container. If I have multiple scripts, I have to change the Dockerfile, which I don't want. What I want is to build a default container with all services running, then run Python scripts on the host that execute a sequence of commands in the container.
I am fairly new to Docker and think there must be something I am overlooking for running scripted commands in the container. One possible solution I have come across is nsenter. Is this a reliable solution, and how does it work? Or is there a much simpler way? I have also used a Docker volume to copy the Python files into the container so they run at build time, but I still cannot find a way to automate accessing and running these Python scripts from the host machine.
If the scripts need to be copied into a running container, you can do this via the docker cp command, e.g. docker cp myscript.sh mycontainer:/working/dir.
Once the scripts are in the container, you can run them via a docker exec command, e.g. docker exec -it mycontainer /working/dir/myscript.sh.
Note, this isn't a common practice. Typically the script(s) you need would be built (not copied) into the container image(s). Then, when you want to execute a script within a container, you would run the container via a docker run command, e.g. docker run -it mycontainerimage /working/dir/myscript.sh.
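As a rough sketch of how the host-side Python script could drive those two commands (the container name, directory, and script name below are placeholders, not taken from the question):

# run_in_container.py -- illustrative only; "mycontainer" and /working/dir are placeholders.
import subprocess

CONTAINER = "mycontainer"      # name of the already-running container
REMOTE_DIR = "/working/dir"    # directory inside the container to copy scripts into

def copy_and_run(script: str) -> int:
    # Copy the local script into the running container...
    subprocess.run(["docker", "cp", script, f"{CONTAINER}:{REMOTE_DIR}/{script}"], check=True)
    # ...then execute it there and return its exit code.
    result = subprocess.run(["docker", "exec", CONTAINER, "bash", f"{REMOTE_DIR}/{script}"])
    return result.returncode

if __name__ == "__main__":
    copy_and_run("myscript.sh")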

Why do tutorials that dockerize Node.js require you to also install Node.js on the host?

What's the point of having Node.js and Vue.js installed on my host and then also getting a Node/Vue image for Docker? Every Vue.js tutorial says to install Node and Vue on the host first and then get the Docker image - isn't this redundant?
Examples:
https://morioh.com/p/3021edac7ef1
https://jonathanmh.com/deploying-a-vue-js-single-page-app-including-router-with-docker/
https://mherman.org/blog/dockerizing-a-vue-app/
I'm using a Windows 10 host and was trying to avoid installing Node and Vue on Windows if possible, unless there are particular advantages to doing so, which hopefully someone can enumerate. Otherwise, maybe someone can confirm that it really is redundant to also install Node/Vue on the host and explain why.
Like you say, it is redundant, but it's easier. A container is a running instance of an image, and that image was (probably) created from a Dockerfile with the build instructions - so how would you go about doing everything from inside the container?
Would you add the creation of the app to the Dockerfile, or would you connect to the container using bash and run the commands from there? If you connect with bash, you'll lose everything once you remove the container. And once your app is created inside the container, how would you get it out? You still need to write your app's code. You could store your data using Docker volumes, but that gets complicated depending on where you are running Docker. For example, on Mac a virtual machine is created for Docker, so to find that data you'd need to connect to the virtual machine...
It is just easier to do all of that from your local machine and use docker to host your app.

WebStorm Node.js debug with Docker

I'm trying to debug a Node.js script with WebStorm 2019.3, using Docker as a remote Node interpreter. So far I can start the script and debug it, but changes made locally do not trigger a nodemon restart of the script inside the Docker container (the files inside the container ARE actually changing - I've checked).
Any ideas? I'll attach the WebStorm run config.
I think there is something wrong with the way I'm using nodemon when starting the script, but I have no idea how to fix it in the WebStorm config.
Looks like you might need to enable legacyWatch.
According to the documentation:
In some networked environments (such as a container running nodemon reading across a mounted drive), you will need to use the legacyWatch: true which enables Chokidar's polling.
Via the CLI, use either --legacy-watch or -L for short: nodemon -L

Run nodejs in sandbox with virtual filesystem

I am working on a project for an online Python compiler. When a user sends a Python script, the server will execute it. What I want to do is create a sandbox with a virtual filesystem, execute the script inside it, keep that sandbox isolated from the real server's filesystem, and still let Node.js control the stdin and stdout of that sandbox.
How to make it possible?
Docker is a great way to sandbox things.
You can run
docker run --network none python:3
from your node.js server. Look at the other switches of docker run to plug as many security holes as possible.
The shtick is, you run the docker command from your node.js server and pass the user's python code via stdin.
Now, if your node.js server is on one machine and the sandbox should run on another machine, you tell Docker to connect to the other machine using the DOCKER_HOST environment variable.
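As a sketch of that spawn-and-pipe pattern (shown here in Python for brevity; in the setup described above the same call would be made from the Node.js server, e.g. via child_process - the timeout and the flags beyond --network none are assumptions, not part of the answer):

# sandbox_run.py -- illustrative sketch of running untrusted code in a throwaway container.
import subprocess

def run_untrusted(code: str, timeout: int = 10) -> subprocess.CompletedProcess:
    # -i keeps stdin open so "python -" can read the user's script from it;
    # --rm throws the container away afterwards; --network none cuts off networking.
    return subprocess.run(
        ["docker", "run", "--rm", "-i", "--network", "none", "python:3", "python", "-"],
        input=code,            # the user's script arrives on the container's stdin
        capture_output=True,   # collect stdout/stderr for the caller
        text=True,
        timeout=timeout,       # assumed safety limit against runaway scripts
    )

if __name__ == "__main__":
    print(run_untrusted("print('hello from the sandbox')").stdout)

The docker CLI honors the DOCKER_HOST environment variable, so the same command works unchanged when the sandbox engine lives on another machine.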
Docker containers wrap up the software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries — basically anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
This might be worth reading: https://instabug.com/blog/the-difference-between-virtual-machines-and-containers/
