I'm having issues with my Portainer setup. I run Ubuntu with Docker and Portainer, and I ran apt-get upgrade and apt-get install in the terminal to update some things. Now when I open Portainer all my containers are gone, and when I try to deploy them again I get:
failed to deploy a stack: Creating Container xxxxx Starting Error response from daemon: error while creating mount source path '/docker/ghost/mysql': mkdir /docker: read-only file system
At the time, the only thing I could think of that might have caused this was when Ubuntu said there were updates to be installed: I let it install them and also ran these apt-get commands in the terminal:
apt-get upgrade
apt-get install
My Ubuntu install has 80 GB of storage free (someone suggested it might be a storage issue).
I was on the Portainer Slack channel trying to get help from a staff member. He had me try docker ps, which didn't work; I had to use sudo docker ps, which gave me the permission error quoted below. He also pointed me at https://docs.docker.com/engine/install/linux-postinstall/
permission denied while trying to connect to the docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
I also went and tried the docker group fix from the Docker docs and added $USER to the docker group. That lets me run hello-world from the terminal, but it still doesn't let me deploy from Portainer.
I still get the same deploy error from portainer:
failed to deploy a stack: Creating Container xxxxx Starting Error response from daemon: error while creating mount source path '/docker/ghost/mysql': mkdir /docker: read-only file system
That was from one of the stacks I tried to redeploy, since it mounts /docker/ghost/mysql. It also doesn't let me redeploy any of the other stacks; I tried that too.
I'm really unsure what to do and how to fix it, since it basically doesn't let me use any of those containers now. Any help would be really appreciated; I'm quite on edge right now! Thanks.
I wasn't expecting any of these issues and I'm not even entirely sure how it happened; I'm assuming it was maybe when I used the apt-get commands, but I don't really know. I would just like this fixed so I can get my data and containers back up and running in Portainer. Yes, I also know Portainer and Docker are different things and that Portainer is only a utility on top of Docker.
Edit: to add to this, I have also re-installed Portainer and Docker. Not a full Docker wipe, just the standard re-install that keeps some of the files, since I don't want to remove the directories where some containers keep their configs and data files.
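If it helps, this is roughly how I've been checking whether the root filesystem really is mounted read-only (just standard commands; the mkdir simply reproduces the operation the daemon reports as failing):

findmnt -o TARGET,SOURCE,OPTIONS /                    # look for "ro" in the OPTIONS column
sudo dmesg | grep -iE 'read-only|EXT4-fs|I/O error'   # any filesystem errors since boot?
sudo mkdir -p /docker                                 # does the same mkdir fail outside of Docker too?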
Related
I've been working on a project which I deploy via docker to a raspberry pi in my house. At this point, I'm probably ~10 updates into the process so I have already successfully run my project on docker on my RBP.
The pipeline is that I push my code to Github and a github action/workflow builds and pushes the image to Docker Hub. Then I SSH into my Raspberry Pi manually, pull the image from Docker Hub, and then run it.
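For context, the manual step on the Pi looks roughly like this (the image name, tag, and port are placeholders, not the real ones):

docker pull mydockerhubuser/myproject:latest
docker run -d --name myproject -p 3000:3000 mydockerhubuser/myproject:latest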
Everything was working fine until I just made a few changes to the node app running inside the image. When I pull and run the image on the Raspberry Pi, I get a weird Node error... something about getting the time in microseconds?
Node.js[1]: ../src/util.cc:188:double node::GetCurrentTimeInMicroseconds(): Assertion `(0) == (uv_gettimeofday(&tv))' failed.
Note that I have made no changes to the deployment pipeline or process, nor have I changed anything in the Dockerfile. The "breaking change" was essentially just re-arranging some Express routes in the Node app, which I have since undone and re-deployed, but I still get the above error.
What's even stranger is that the image runs completely fine on my MacBook. See the image of two terminals, one SSH'd into the Raspberry Pi and one on my MacBook: I'm pulling the same image from Docker Hub and running it on each machine, with very different results. The MacBook terminal even shows an error because I've built the image with buildx for ARM architecture... but it runs my code anyway.
I've searched for the Node error a few different ways but I'm not finding anything. I basically have no idea what is going on and it's completely stopped my progress. I've tried updating the Pi itself, turning it off and on, uninstalling and reinstalling Docker, removing all Docker images (you can see docker image ls as a command in the RBP terminal), and re-pushing my code to trigger another image build.
Any thoughts would be greatly appreciated! Even just a way to get more verbose logs while the container is starting up would help. As you can see in the RBP terminal below, it shows the one error and exits.
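So far the most output I've managed to get is just from running the container in the foreground and reading its logs, roughly like this (names are placeholders again):

docker run --rm -it mydockerhubuser/myproject:latest   # foreground run so stdout/stderr are visible
docker logs --timestamps myproject                      # logs of an already-created container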
Have you tried running the docker container with the argument --security-opt seccomp:unconfined?
I got this same error message on my Raspberry Pi. It triggered every time I ran either node or npm on any Node image I could find. When I tried to dig deeper to investigate why that uv_gettimeofday(&tv) would fail inside the container, I noticed that apt update was broken as well, as described here:
https://askubuntu.com/questions/1263284/apt-update-throws-signature-error-in-ubuntu-20-04-container-on-arm
The solution to that issue, applying --security-opt seccomp:unconfined when running the container, solved not just my apt problem but also my node and npm issues.
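For example (the image name and ports are placeholders; newer Docker releases document the flag with an equals sign rather than a colon):

docker run --security-opt seccomp=unconfined -ti -p 3000:3000 mynodeimage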
As for the underlying root cause to why seccomp settings would affect uv_gettimeofday, I have no idea.
I ran into this problem with the Docker base image node:16.15.1-bullseye-slim; when I fell back to node:16.15.1-buster-slim, it worked fine.
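In other words, the only change was the base image line in the Dockerfile:

# failed on the Pi
FROM node:16.15.1-bullseye-slim
# worked
FROM node:16.15.1-buster-slim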
Check updates at https://github.com/nodejs/docker-node/issues/1746
I'm a fresh beginner in bioinformatics. Recently, I started learning it with the book “Bioinformatics with Python Cookbook” (by Antao, Tiago). I ran into some issues while setting up Docker for Linux. Please see below for the issues:
I was trying to set up the Docker files following the author's instructions, but some files “failed to download”.
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
Then I still went ahead and set up the container following the instruction:
“Now, you are ready to run the container, as follows: docker run -ti -p 9875:9875 -v YOUR_DIRECTORY:/data bio”
I typed: docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
However, it gave me an error saying “Unable to find image “bio:latest” locally”.
Can anyone give me any suggestions on this? My thought is that in the first step I missed downloading some files needed to set up Docker, but I am not sure whether I can still fetch these files.
Thank you so much for any comments!
Best regards
Johnny
I tried downloading the Docker files a few times, but the error still appears:
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
In the first issue, some files “failed to download”.
In the second issue, an error saying “Unable to find image “bio:latest” locally” appears.
Here you have a couple of problems:
1) It looks like you did not download that Dockerfile and build the required Docker image locally.
2) You are getting that error about not finding the image locally because of the previous problem.
So, you should do like this:
1) Download that Dockerfile (https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile). If you can't download that file for some reason, just open it on GitHub, select all the content, copy it, then make a new file in some folder on your computer, name it "Dockerfile" and paste the content.
2) Build the image locally - go to the folder where you downloaded that Dockerfile and execute the following command:
docker build -t bio .
3) Run your container with the docker run ... command, e.g. as in the full sequence below.
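Putting those steps together, something like this should work (the curl line assumes a Unix-style shell - on Windows you can save the Dockerfile manually as in step 1 - and the Windows path is just the one from the question; note the :/data mapping from the book's instruction):

curl -O https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
docker build -t bio .
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data:/data bio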
I have an Azure VM on which I am trying to install Docker. The installation proceeds smoothly, but when I try to run Docker's hello-world example, I get this error: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
This is the procedure I followed. I have run docker with sudo. I can't figure out what is causing the problem; any help figuring this out would be much appreciated. I have scoured the internet for fixes and nothing has worked. I have uninstalled Docker completely and reinstalled it, and nothing seems to work.
EDIT: I have narrowed down the problem to the fact that the daemon has to be started manually. How do I ensure the daemon starts running as soon as the machine is up, or as soon as Docker is started? Running sudo dockerd and then running docker run hello-world seems to work.
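I'm assuming the systemd route is the right one here, something along the lines of the commands below, but I'd like to confirm that's the proper approach on an Azure Ubuntu VM:

sudo systemctl enable docker   # start the daemon automatically at boot
sudo systemctl start docker    # start it now without rebooting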
It looks like you are trying to run docker commands as a non-root user.
To achieve that you have to add your user to the docker group, but bear in mind that this can be a security risk, as this group grants root equivalent privileges.
You can find the detailed configuration steps in the post-installation steps for Linux, and information about the risks in the Docker daemon attack surface description.
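The relevant commands from those post-installation steps are roughly:

sudo groupadd docker              # the group usually exists already
sudo usermod -aG docker $USER     # add your user to it
newgrp docker                     # or log out and back in for the membership to take effect
docker run hello-world            # verify it now works without sudo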
Seems like your daemon isn't running. Which VM did you create - Linux-based? If so, there are a few things regarding the daemon you must do in order to make Docker work: you need to configure your "daemon.json", or create one if you don't have it. Here's the Docker documentation that might help you with it:
https://docs.docker.com/config/daemon/
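If you do end up creating one, a minimal daemon.json can be written like this (the logging options here are only an example of valid contents, not something specific to this problem):

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m" }
}
EOF
sudo systemctl restart docker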
Best of luck!
I launched an Ubuntu 18.04 VM with Azure. I installed a bunch of stuff that I need. Then, I used the dashboard to create a custom image from this machine. After that, I checked that the image was okay by launching some machines with that image. Everything seemed to be working fine.
Today, I launched a new instance with my custom image. Then I tried to install a few things with apt-get install and I get the following error (e.g. for unzip):
sudo: unable to resolve host ABCDEFG: Resource temporarily unavailable
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package unzip is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'unzip' has no installation candidate
This same thing happens for any package I try to install. After testing some basic things with my repositories, I checked the internet connection with ping, e.g. ping www.google.com, which is also not working. I launched a vanilla Ubuntu 18.04 instance and I am not having these problems on that machine.
I have also tried sudo reboot but no luck with that. I did notice that when the system booted it shows the following error, also indicating that something is wrong with the internet:
Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings
Any help is greatly appreciated.
So, after some digging around, I found this answer to something similar: https://askubuntu.com/questions/1045278/ubuntu-server-18-04-temporary-failure-in-name-resolution.
I used the following command and the internet started working again:
sudo ln -s ../run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
This is a little different from the answer on askubuntu because this is on an Azure image. First, I noticed that my image was missing resolv.conf in /etc. Using ls -la /etc/resolv.conf on a different Azure image, I saw that it was a symbolic link to ../run/systemd/resolve/stub-resolv.conf, so I created a link that matched this format on my machine and that fixed things.
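In other words, the check on a healthy image and the fix on the broken one looked roughly like this:

ls -la /etc/resolv.conf
# on a healthy image this shows: /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf
sudo ln -s ../run/systemd/resolve/stub-resolv.conf /etc/resolv.conf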
** EDIT **
It's worth noting that when you deprovision the VM to create the custom image, it does say:
WARNING! The waagent service will be stopped.
WARNING! Cached DHCP leases will be deleted.
WARNING! root password will be disabled. You will not be able to login as root.
WARNING! /etc/resolv.conf will be deleted.
WARNING! xxxx account and entire home directory will be deleted.
While I am running this command from my terminal:
sudo ./byfn.sh -m up
I am getting the error below:
Starting with channel 'my-channel' and CLI timeout of '10' seconds
and CLI delay of '3' seconds Continue (y/n)? y proceeding ...
Pulling orderer.example.com (hyper-ledger/fabric-orderer:latest)...
ERROR: manifest for hyper-ledger/fabric-orderer:latest not found
ERROR !!!! Unable to start network Error response from daemon: No
such container: cli
How do I resolve this please?
You need to download the platform-specific binaries; please see how to do it in the following tutorial. Please also make sure you have all the prerequisites; you can find more about what is needed here.
Ideally, you should download the platform binaries and images as given in the Fabric documentation - Install Binaries and Docker Images
Or
You should make sure that your terminal has internet access and is not behind any corporate proxy. Whatever is needed will be pulled by Docker anyway. I am guessing that the hyperledger/fabric-baseos image was not pulled by the script above.
If you don't find hyperledger/fabric-baseos:latest, then either docker pull hyperledger/fabric-baseos:<tag> yourself (the tag depends on your Fabric version),
or the chaincode instantiate process in the byfn end-to-end CLI will do it for you.
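For example (the <tag> is a placeholder - use whatever matches your Fabric release):

docker images | grep hyperledger              # check which Fabric images are already present locally
docker pull hyperledger/fabric-baseos:<tag>
docker pull hyperledger/fabric-orderer:<tag>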
I had the same issue. It turned out to be just a broken docker-compose installation. I figured it out simply by typing docker-compose in my terminal, and I ran into: ImportError: No module named ssl_match_hostname
With a clean docker-compose install, I got it to work.
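For reference, a clean standalone install can be done along these lines (the version is pinned only as an example, and this is the classic single-binary docker-compose, not the newer plugin):

sudo rm -f /usr/local/bin/docker-compose   # remove the broken binary (adjust the path if yours lives elsewhere)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version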