Yarn is slow and freezes when run via docker exec - node.js

I recently started using Docker (the Desktop version for Windows) for my Node.js project development. I have a docker-compose file with a volume configuration that shares the project source files between my host machine and the docker container.
When I need to install a new node module, I can't do that on my host machine, of course, because the host is Windows and the container is Linux, so I run docker exec -it my-service bash to "get into" the docker container and then run yarn add something from inside it. The problem is that yarn runs extremely slowly and freezes almost all of the time. The docker container then becomes unresponsive: I cannot cancel the yarn command or stop the container using docker-compose stop. The only way I've found to recover is to restart the whole docker engine.

So, to finally install the new module after the docker engine restarts, I delete the node_modules folder and repeat the same steps. This time it's still extremely slow, but somehow it doesn't freeze and actually installs the new module. But after some time, when I need to install another module, it freezes again and I have to delete node_modules again...
I would like to find the reasons why the yarn command is so slow and why it freezes.
I'm new to docker, so maybe my workflow is not correct.
I tried increasing the RAM limit for the docker engine from 2 GB to 8 GB and the CPU limit from 1 to 8, but it had absolutely no effect on yarn's behavior.
My project was using file watching with chokidar, so I thought maybe that could cause the problem, but disabling it had no effect either.
I also thought the problem could be the file sharing mechanism between the host machine (Windows) and the docker container, but if that is the case, I do not know how to fix it. I suppose I would then have to somehow separate node_modules from the source directory and make it private to the docker container, so that it is not shared with the host machine.
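Something like this is what I have in mind (just a sketch, the container path /app is an assumption), with a named volume masking node_modules so it stays private to the container:

# docker-compose.yml (sketch)
services:
  my-service:
    build: .
    volumes:
      - .:/app                          # source shared with the Windows host
      - node_modules:/app/node_modules  # masked by a container-private named volume
volumes:
  node_modules: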
This is quite a severe problem, as it slows the development down a lot. Please share any of your ideas about what could be wrong. I would even consider changing my development environment to Linux if the problem was caused by the file sharing mechanism between Windows and docker container.

Related

Docker image fails to run on Raspberry Pi with strange Node error

I've been working on a project which I deploy via Docker to a Raspberry Pi in my house. At this point, I'm probably ~10 updates into the process, so I have already successfully run my project on Docker on my RBP.
The pipeline is that I push my code to GitHub and a GitHub Actions workflow builds and pushes the image to Docker Hub. Then I SSH into my Raspberry Pi manually, pull the image from Docker Hub, and then run it.
Everything was working fine until I just made a few changes to the node app running inside the image. When I pull and run the image on the Raspberry Pi, I get a weird Node error... something about getting the time in microseconds?
Node.js[1]: ../src/util.cc:188:double node::GetCurrentTimeInMicroseconds(): Assertion `(0) == (uv_gettimeofday(&tv))' failed.
Note that I have made no changes to the deployment pipeline or process, nor have I changed anything in the Dockerfile. The "breaking change" was essentially just re-arranging some express routes in the Node app, which I have since undone and re-deployed, but I still get the above error.
What's even more strange is that the image runs completely fine on my MacBook. See the image of two terminals, one SSH'd into the RBP and one on my MacBook. You can see I'm pulling the same image from Docker Hub and running it on each machine, with very different results. The MacBook terminal even shows an error because I've compiled the image with buildx to run on the arm architecture... but it runs my code anyway.
I've searched for the node error a few different ways but I'm not finding anything. I basically have no idea what is going on, and it's completely stopped my progress. I've tried updating the Pi itself, turning it off/on, uninstalling/reinstalling Docker, removing all Docker images (you can see docker image ls as a command in the RBP terminal), and re-pushing my code to trigger another image build.
Any thoughts would be greatly appreciated! Even just how to get more verbose logs when the docker image is booting up. As you can see in the RBP terminal below, it shows the one error and exits.
Have you tried running the docker container with the argument --security-opt seccomp:unconfined?
I got this same error message on my Raspberry Pi. It was triggered every time I ran either node or npm on any Node image I could find. When I dug deeper to investigate why that uv_gettimeofday(&tv) call would fail inside the container, I noticed that apt update was broken as well, as described here:
https://askubuntu.com/questions/1263284/apt-update-throws-signature-error-in-ubuntu-20-04-container-on-arm
The solution to that issue, applying --security-opt seccomp:unconfined when running the docker container, solved not just my apt problem but my node and npm issue as well.
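For example (the image tag is just an illustration, any Node image reproduced the error for me):

docker run --rm --security-opt seccomp:unconfined node:16-bullseye-slim node --version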
As for the underlying root cause to why seccomp settings would affect uv_gettimeofday, I have no idea.
I ran into this problem with the Docker base image node:16.15.1-bullseye-slim; falling back to node:16.15.1-buster-slim made it work fine again.
Check for updates at https://github.com/nodejs/docker-node/issues/1746

Why do tutorials that dockerize Node.js require you to also install Node.js on the host?

What's the point of having Node.js and Vue.js installed on my host and then also getting a Node/Vue image for Docker? Every Vue.js tutorial says to install Node and Vue on the host first and then get the Docker image. Is this not redundant?
Examples:
https://morioh.com/p/3021edac7ef1
https://jonathanmh.com/deploying-a-vue-js-single-page-app-including-router-with-docker/
https://mherman.org/blog/dockerizing-a-vue-app/
I'm using a Windows 10 host and was trying to avoid installing Node and Vue on Windows if possible, unless there are particular advantages to doing so, which hopefully someone can enumerate. Otherwise, maybe someone can confirm that it's redundant to also install Node/Vue on the host and state why.
Like you say, it is redundant but easier. A container is a running instance of an image, and that image was (probably) created from a Dockerfile with the build instructions. So how would you go about doing everything from the container?
Would you add the creation of the app to the Dockerfile, or would you connect to the container using bash and run the commands from there? If you connect with bash, you'll lose everything once you remove the container. And once your app is created inside your container, how would you get it out? I mean, you need to write your app's code. You could store your data using docker volumes, but that gets complicated depending on where you are running Docker. For example, on Mac a virtual machine is created for Docker, so to find that data you'd need to connect to the virtual machine...
It is just easier to do all of that from your local machine and use docker to host your app.
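That said, if you really want to avoid installing Node on the host, you can scaffold from a throwaway container with a bind mount. A sketch (image tag and app name are assumptions; %cd% is for cmd, use ${PWD} in PowerShell):

docker run --rm -it -v "%cd%":/app -w /app node:lts bash
# then, inside the container:
npm install -g @vue/cli
vue create my-app

The generated files land in your host directory through the mount, so they survive the container being removed.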

Run nodejs in sandbox with virtual filesystem

I am working on an online Python compiler project. When a user sends a Python script, the server will execute it. What I want to do is create a sandbox with a virtual filesystem and execute the script inside it; the sandbox should be kept well away from the real server's filesystem, but Node.js should be able to control the stdin and stdout of that sandbox.
How can I make this possible?
Docker is a great way to sandbox things.
You can run
docker run --network none python:3
from your node.js server. Look at the other switches of docker run to plug as many security holes as possible.
The shtick is, you run the docker command from your node.js server and pass the user's python code via stdin.
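A minimal sketch of that in Node (assuming the python:3 image is already pulled; error handling and resource limits omitted):

const { spawn } = require('child_process');

function runPython(code, onOutput) {
  // --rm: throwaway container; -i: keep stdin open; --network none: no network.
  // 'python -' tells the interpreter to read the script from stdin,
  // so the user's code never touches the real filesystem.
  const docker = spawn('docker', ['run', '--rm', '-i', '--network', 'none', 'python:3', 'python', '-']);
  docker.stdout.on('data', (chunk) => onOutput(chunk.toString()));
  docker.stderr.on('data', (chunk) => onOutput(chunk.toString()));
  docker.stdin.end(code); // pass the user's python code via stdin
}

runPython('print("hello from the sandbox")', console.log);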
Now, if your node.js server is on one machine and the sandbox should run on another machine, you tell docker to connect to the other machine using the DOCKER_HOST environment variable.
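For example (the host name is an assumption; ssh:// needs a reasonably recent docker client and SSH access to the other machine):

export DOCKER_HOST=ssh://user@sandbox-host

Every docker command after that, including the spawn call above, talks to the remote machine's Docker daemon.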
Docker containers wrap up the software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries — basically anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
This might be worth reading: https://instabug.com/blog/the-difference-between-virtual-machines-and-containers/

ng build returns fatal out of memory exception in Docker

I'm trying to build the frontend of a web application in a Node.js Docker container. As I'm on a Windows PC, I'm very limited in my choice of base images. I chose this one, as it's the only one on Docker Hub with a decent number of downloads. As the application is meant to run in Azure, I'm also limited to Windows Server Core 2016. When I run the following Dockerfile, I get the error message below (on my host system the build runs fine, btw):
FROM stefanscherer/node-windows:10.15.3-windowsservercore-2016
WORKDIR /app
RUN npm install -g @angular/cli@6.2.4
COPY . ./
RUN ng build
#
# Fatal error in , line 0
# API fatal error handler returned after process out of memory on the background thread
#
#
#
#FailureMessage Object: 000000E37E3FA6D0
I tried increasing the memory available to the build process with --max-old-space-size, up to 16 GB (the entire RAM of my laptop), but that didn't help. I also contacted the author of the base image to find out if that's the issue, but as this doesn't seem to be reproducible with a smaller example application, that wasn't very fruitful either. I've been working on this issue for a week now and I'm seriously out of ideas about what could be the reason. So I hope to get a new impulse from here, or at least a direction I could investigate.
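For reference, I passed the flag roughly like this (the path to the ng binary is an assumption):

node --max-old-space-size=16384 node_modules/@angular/cli/bin/ng build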
What I also tried was getting Node.js and Angular installed on a Windowsservercore base image. If someone has an idea how to do that, it could be the solution.
EDIT: I noticed that the error message is the only output I get from the build process; it doesn't even get as far as building the modules. Maybe that means something...
Alright, I figured it out. Although the official Docker documentation states that Docker has unlimited access to resources, it seems that you need to use the -m option when your build process exceeds a certain amount of memory.
Edit: This question seems to be getting some views so maybe I should clarify this answer a bit. The root of the problem seems to be that under Windows, Docker runs inside a Hyper-V VM. So when the documentation talks about "unlimited access to resources", it doesn't mean your PC's resources, but instead the resources of that VM.
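For example, when building (the memory value and tag are assumptions; -m sets the limit for the intermediate build containers):

docker build -m 8g -t frontend .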

Docker : How to run grunt-open?

I have been using the grunt-open package to open my browser when I build my project. Recently I began to use Docker, and that works perfectly, but the grunt-open task doesn't work anymore.
Is there some way to create a bridge between my docker container and my local machine so grunt-open can open my browser?
There is no way to open an external browser if you are running or building your project inside a docker container. The idea of using docker is to have all the tools you need inside the container.
You can use a headless browser like PhantomJS and run the grunt-open task inside the docker container.
There is no "automatic" way - you would need to have some kind of listener on your local machine. So you can't really use grunt-open from the container but there are any number of ways you could have the grunt task in the container send a call to your local machine which could use grunt-open (or npm-open which it's a wrapper for, or opn which npm-open is a wrapper for) -- or a simple shell script.
