Docker image fails to run on Raspberry Pi with strange Node error

I've been working on a project which I deploy via Docker to a Raspberry Pi in my house. At this point, I'm probably ~10 updates into the process, so I have already successfully run my project on Docker on my RBP.
The pipeline is that I push my code to GitHub and a GitHub Actions workflow builds and pushes the image to Docker Hub. Then I SSH into my Raspberry Pi manually, pull the image from Docker Hub, and run it.
Everything was working fine until I just made a few changes to the Node app running inside the image. When I pull and run the image on the Raspberry Pi, I get a weird Node error... something about getting the time in microseconds?
Node.js[1]: ../src/util.cc:188:double node::GetCurrentTimeInMicroseconds(): Assertion `(0) == (uv_gettimeofday(&tv))' failed.
Note that I have made no changes to the deployment pipeline or process, nor have I changed anything in the Dockerfile. The "breaking change" was essentially just re-arranging some Express routes in the Node app, which I have since undone and redeployed, but I still get the above error.
What's even more strange is that the image runs completely fine on my MacBook. See the image of two terminals, one SSH'd into the RBP and one on my MacBook. You can see I'm pulling the same image from Docker Hub and running it on each machine, with very different results. The MacBook terminal even shows an error because I've compiled the image with buildx to run on ARM architecture... but it runs my code anyway.
I've searched for the Node error a few different ways but I'm not finding anything. I basically have no idea what is going on and it's completely stopped my progress. I've tried updating the Pi itself, turning it off and on, uninstalling/reinstalling Docker, removing all Docker images (you can see docker image ls as a command in the RBP terminal), and re-pushing my code to trigger another image build.
Any thoughts would be greatly appreciated! Even just how to get more verbose logs when the Docker image is booting up. As you can see in the RBP terminal below, it shows the one error and exits.

Have you tried running the docker container with the argument --security-opt seccomp:unconfined?
I got this same error message on my Raspberry Pi. It triggered every time I ran either node or npm on any Node image I could find. When I tried to dig deeper to investigate why that uv_gettimeofday(&tv) would fail inside the container, I noticed that apt update was broken as well, as described here:
https://askubuntu.com/questions/1263284/apt-update-throws-signature-error-in-ubuntu-20-04-container-on-arm
The solution to that issue, applying --security-opt seccomp:unconfined when running the container, solved not just my apt problem but my node and npm issue as well.
As for the underlying root cause to why seccomp settings would affect uv_gettimeofday, I have no idea.
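In case it helps, this is roughly how I applied it (the image name here is just a placeholder):
# the flag disables the default seccomp profile for this one container
docker run --security-opt seccomp:unconfined your-dockerhub-user/your-app:latest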

I ran into this problem with the Docker base image node:16.15.1-bullseye-slim; after falling back to node:16.15.1-buster-slim, it worked fine.
Check updates at https://github.com/nodejs/docker-node/issues/1746
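In practice the workaround was just a one-line change to the FROM line of the Dockerfile (the tags shown are the ones from my case):
# was: FROM node:16.15.1-bullseye-slim
FROM node:16.15.1-buster-slim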

Related

My containers in Portainer have disappeared and cannot be redeployed

I'm having issues now with my Portainer. I run Ubuntu with Docker and Portainer, and I ran the apt-get upgrade and install commands through the terminal to update some things. Now when I go to Portainer, all my containers are gone, and when I go to deploy them again I get this:
failed to deploy a stack: Creating Container xxxxx Starting Error response from daemon: error while creating mount source path '/docker/ghost/mysql': mkdir /docker: read-only file system
At the time, the only thing I could think of that might have created this issue was when Ubuntu said there were updates to be installed, so I let it install them and then also ran the apt-get commands in the terminal:
apt-get upgrade
apt-get install
My Ubuntu storage has 80 GB free (someone suggested it might be a storage issue).
I was on the Portainer Slack channel trying to get help from a staff member. He had me try docker ps, which didn't work; I had to use sudo docker ps, which gave me the permission error listed below (see https://docs.docker.com/engine/install/linux-postinstall/).
permission denied while trying to connect to the docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
I also went and tried the docker group setup from the Docker docs, adding $USER to the docker group. With that done, it lets me run hello-world from the terminal, but it still doesn't let me deploy from Portainer.
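For reference, the commands I ran from those post-install docs were roughly these:
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker   # or log out and back in
docker run hello-world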
I still get the same deploy error from Portainer:
failed to deploy a stack: Creating Container xxxxx Starting Error response from daemon: error while creating mount source path '/docker/ghost/mysql': mkdir /docker: read-only file system
This was from one of the stacks I tried to redeploy, since it uses the /docker/ghost/mysql path. It also doesn't let me redeploy any other stacks; I tried that too.
Really unsure what to do or how to fix it, since it basically now doesn't let me use any of those containers. Any help will be really appreciated, I'm quite on edge right now! Thanks
I was kinda expecting not to have any of these issues. I'm not even entirely sure how it happened; I'm assuming maybe when I was using the apt-get commands, but I don't really know myself. I would just like this fixed so I can get my data and containers back up and running on Portainer. Yes, I also know Portainer and Docker are different and Portainer is only a utility for Docker.
Edit: to add to this, I have also re-installed Portainer and Docker. Not a full Docker refresh, but the standard one where it keeps some of the files, since I don't want to remove the directories where some containers keep their configs and data files.

Yarn is slow and freezes when run via docker exec

I recently started using Docker (Docker Desktop for Windows) for my Node project development. I have a docker-compose file with a volume configuration to share the project source files between my host machine and the Docker container.
When I need to install a new node module, I can't do that on my host machine, of course, because it's Windows and Docker is Linux or something, so I run docker exec -it my-service bash to "get into" the Docker container and then run yarn add something from inside it. The problem is that yarn runs extremely slowly and freezes almost every time. The Docker container then becomes unresponsive: I cannot cancel the yarn command or stop the container using docker-compose stop. The only way I've found to recover is to restart the whole Docker engine. So then, to finally install the new module after the Docker engine restarts, I delete the node_modules folder and do the same steps again. This time it's still extremely slow, but somehow it doesn't freeze and actually installs the new module. But after some time, when I need to do that again, it freezes again and I have to delete node_modules again...
I would like to find the reasons why the yarn command is so slow and why it freezes.
I'm new to docker, so maybe my workflow is not correct.
I tried increasing the RAM limit for the Docker engine from 2 GB to 8 GB and the CPU limit from 1 to 8, but it had absolutely no effect on the yarn command behavior.
My project was using file watching with chokidar, so I thought maybe that could cause the problem, but disabling it had no effect either.
I also thought the problem could be the file-sharing mechanism between the host machine (Windows) and the Docker container, but if that is the case, I do not know how to fix it. I suppose I should then somehow separate node_modules from the source directory and make it private to the Docker container, so that it is not shared with the host machine.
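If it helps to illustrate what I mean, I imagine it would be something like masking node_modules with an extra anonymous volume, so yarn writes to the container's own filesystem instead of the shared Windows folder (image name and paths below are just placeholders, and docker-compose would take an equivalent extra entry under volumes):
# placeholder image and paths; the second -v mounts an anonymous volume over
# the bind-mounted node_modules so it never crosses the Windows file-sharing layer
docker run -it --rm -v /path/to/project:/usr/src/app -v /usr/src/app/node_modules my-image bash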
This is quite a severe problem, as it slows the development down a lot. Please share any of your ideas about what could be wrong. I would even consider changing my development environment to Linux if the problem was caused by the file sharing mechanism between Windows and docker container.

Why do tutorials that dockerize Node.js require you to also install Node.js on the host?

What's the point of having Node.js and Vue.js installed on my host and then also getting a Node/Vue image for Docker? Every Vue.js tutorial says to install Node and Vue on the host first and then get the Docker image; is this not redundant?
Examples:
https://morioh.com/p/3021edac7ef1
https://jonathanmh.com/deploying-a-vue-js-single-page-app-including-router-with-docker/
https://mherman.org/blog/dockerizing-a-vue-app/
I'm using a Windows 10 host and was trying to avoid installing Node and Vue to Windows if possible, unless there are particular advantages to doing so, which hopefully someone can enumerate. Otherwise, maybe someone can confirm that it's redundant to also install Node/Vue on the host and state why it's silly and redundant.
Like you say, it is redundant but easier. A container is a running instance of an image, an image that was created (probably) using a Dockerfile with the instructions, so how would you go about doing everything from the container?
Would you add the creation of the app to the Dockerfile, or would you connect to the container using bash and run the commands from there? If you connect with bash, you'll lose everything once you remove the container. Once your app is created inside your container, how would you get it out? I mean, you need to write your app's code. You could store your data using Docker volumes, but that gets complicated depending on where you are running Docker. For example, on Mac a virtual machine is created for Docker, so to find that data you'd need to connect to the virtual machine...
It is just easier to do all of that from your local machine and use docker to host your app.
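That said, if you really want to keep Node off the host, a multi-stage Dockerfile can run the whole build inside containers. A rough sketch, assuming a standard Vue CLI project that outputs to dist (image tags and paths are assumptions):
# build stage: install dependencies and build the Vue app inside the container
FROM node:lts AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# serve stage: copy only the built static files into a small web server image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
You still write the source code on the host, but node, npm, and the Vue CLI only ever exist inside the build container.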

ng build returns fatal out of memory exception in Docker

I'm trying to build the frontend of a web application in a Node.js Docker container. As I'm on a Windows PC, I'm very limited in my base images. I chose this one, as it's the only one on Docker Hub with a decent number of downloads. As the application is meant to run in Azure, I'm also limited to Windows Server Core 2016. When I run the following Dockerfile, I get the error message below (on my host system the build runs fine, by the way):
FROM stefanscherer/node-windows:10.15.3-windowsservercore-2016
WORKDIR /app
RUN npm install -g @angular/cli@6.2.4
COPY . ./
RUN ng build
#
# Fatal error in , line 0
# API fatal error handler returned after process out of memory on the background thread
#
#
#
#FailureMessage Object: 000000E37E3FA6D0
I tried increasing the memory available to the build process with --max_old_space_size up to 16 GB (the entire RAM of my laptop), but that didn't help. I also contacted the author of the base image to find out if that's the issue, but as this doesn't seem to be reproducible with a smaller example application, that wasn't very fruitful either. I've been working on this issue for a week now and I'm seriously out of ideas about what the reason could be. So I hope to get a new impulse from here, or at least a direction I could investigate.
What I also tried was getting Node.js and Angular installed on a Windows Server Core base image. If someone has an idea of how to do that, that could also be a solution.
EDIT: I noticed that the error message is the only output I get from the build process; it doesn't even get to building the modules. Maybe that means something...
Alright, I figured it out. Although the official Docker documentation states that Docker has unlimited access to resources, it seems that you need to use the -m option when your build process exceeds a certain amount of memory.
Edit: This question seems to be getting some views so maybe I should clarify this answer a bit. The root of the problem seems to be that under Windows, Docker runs inside a Hyper-V VM. So when the documentation talks about "unlimited access to resources", it doesn't mean your PC's resources, but instead the resources of that VM.
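Concretely, that means passing -m to docker build, something along these lines (the tag is a placeholder, and the right size depends on your build):
# -m raises the memory available to the containers used during the build
docker build -m 8GB -t my-frontend .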

Deploying Node.js and a Node.js application to Raspberry Pi

I have a Node.js application that I want to run on a Raspberry Pi.
And I'd like to be able to deploy new versions of my application, as well as new versions of Node.js, to that Raspberry Pi remotely.
Basically, something such as:
$ pi-update 192.168.0.37 node@0.11.4
$ pi-update 192.168.0.37 my-app@latest
I don't have any preferences on how to transfer my app to the Pi, be it pushing or pulling. I don't care (although I should add that the code for the application is available from a private GitHub repository).
Additionally, once Node.js and/or my app have been deployed, I want the potentially running Node.js app to restart.
How could I do this? Which software should I look into? Is this something that can easily be done using tools from Raspbian, or should I look for 3rd party software (devops tools, such as Chef & co.), or ...?
Any help is greatly appreciated :-)
a) For running the script continuously, you can use tools like forever or pm2; otherwise, you can also make the app a Debian daemon on Raspbian that you can run with sudo service <servicename> start (if you're running Arch Linux, this is handled differently, I guess).
b) If your Raspberry Pi is reachable from the internet, you can use a GitHub webhook (see the API documentation) to run a script every time you push a change to your repository. This hook is basically a URL endpoint on your Pi that runs a little shell script locally.
This script should shut down your app gracefully, do a git pull for your repository, and start the app/service again. You could also trigger this shell script over SSH from your local machine, e.g. ssh pi@192.168.0.37 /path/to/your/script
An update script could look like this:
# change the 'service' command to your script runner of choice
service <yourapp> stop
cd /path/to/your/app
git pull
service <yourapp> start
c) The problem with remotely updating Node itself is that the official binary builds for Raspberry Pi appear only very irregularly; otherwise it would be easy to just download/update the binaries with wget or curl. So most of the time you either need to cross-compile Node on your own machine or spend about two hours recompiling it on your Pi. If you want to go with the unofficial builds on GitHub, you can install them with curl -# -L https://gist.github.com/raw/3245130/v0.10.17/node-v0.10.17-linux-arm-armv6j-vfp-hard.tar.gz | tar xzvf - --strip-components=1 -C /usr/local, but you need to check the file name for every release.
Look no further than resin.io. All you need to do is flash your RPi with their image and then git push your project. resin.io will compile your code and dependencies for your device's architecture and send the result to your device(s) (in a Docker container).
You can create a very simple continuous integration scheme using supervisor, which does two things:
keeps your process running even if it fails,
and restarts your process if any of the files changes.
It becomes a simple matter to keep your app updated: you just have to run the commands git pull; npm install. When code is downloaded (or even the node modules change), supervisor will restart the app automatically for you.
If the Raspberry Pi is visible from the internet, you can use a GitHub webhook pointing to a very simple page that runs the commands git pull; npm install using child_process.exec(). (One important note: use a non-trivial URL, with a code or something, so that people don't run into it by mistake.) Otherwise, just run those commands from the crontab every hour or so.
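A rough sketch of how that could look (the paths and the entry file are just placeholders):
# run the app under supervisor so it restarts on crashes and on file changes
npm install -g supervisor
supervisor --watch /home/pi/my-app /home/pi/my-app/app.js
# crontab entry: pull and reinstall dependencies every hour; supervisor picks up the changes
0 * * * * cd /home/pi/my-app && git pull && npm install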
As for updating Node.js itself, I would use the official Debian package, either from testing or from unstable. Otherwise you would have to create a private repo to host your own packages, which probably is not worth the hassle, but is doable.
