dapr uninstall does not clean up all the containers that it created

On my Windows machine, I successfully initialized dapr as explained here.
But upon uninstalling it, two docker containers continue to run.
Am I missing something? What should I do to completely clean that up? Do I have to manually stop those two containers?

This is mentioned on the dapr CLI GitHub page:
https://github.com/dapr/cli
By default, dapr uninstall leaves those containers running in case you use Redis or anything else for other purposes.
To remove everything, use:
dapr uninstall --all
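You can then verify the cleanup; assuming the default container names that dapr init creates (dapr_redis, dapr_zipkin, dapr_placement), the following should list nothing afterwards:
docker ps --filter "name=dapr"   # no dapr_* containers should remain after --all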

You can manually stop these two containers by running docker stop the-container-id. In this case, it would be docker stop 492fd54a93ca and docker stop c852e328d489. Given that the two running containers are the ones you want to remove, you could also use docker stop $(docker ps -aq). Hope it helps.
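Note that stopping doesn't remove the containers; if you also want them gone, docker rm works the same way:
docker stop $(docker ps -aq)   # stop all running containers
docker rm $(docker ps -aq)     # then remove the stopped containers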

Related

My containers in Portainer have disappeared and cannot be redeployed

I'm having issues with my Portainer. I run Ubuntu with Docker and Portainer, and I ran the apt-get upgrade and install commands through the terminal to update some things. Now when I go to Portainer, all my containers are gone, and when I try to deploy them again I get:
failed to deploy a stack: Creating Container xxxxx Starting Error response from daemon: error while creating mount source path '/docker/ghost/mysql': mkdir /docker: read-only file system
At the time, the only thing I could think of that might have caused this issue was when Ubuntu said there were updates to be installed, so I let it install them and then also ran the apt-get commands in the terminal:
apt-get upgrade
apt-get install
My Ubuntu storage has 80 GB free (someone said it may be a storage issue).
I was on the Portainer Slack channel trying to get help from a staff member. He had me try "docker ps", which didn't work; I had to try "sudo docker ps", which gave me the second error message, listed below.
https://docs.docker.com/engine/install/linux-postinstall/
permission denied while trying to connect to the docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
I also went and tried the docker group setup from the Docker docs, adding $USER to the docker group. I have done that, and it lets me run hello-world from the terminal, but it still doesn't let me deploy on Portainer.
I still get the same deploy error from Portainer:
failed to deploy a stack: Creating Container xxxxx Starting Error response from daemon: error while creating mount source path '/docker/ghost/mysql': mkdir /docker: read-only file system
This was from one of the stacks I tried to redeploy, since it uses the /docker/"ghost"/mysql path. It also doesn't let me redeploy any other stacks; I tried that too.
I'm really unsure what to do and how to fix it, since it basically doesn't let me use any of those containers now. Any help will be really appreciated; I'm quite on edge right now! Thanks.
I wasn't expecting any of these issues, and I'm not even entirely sure how it happened; I'm assuming it was maybe when I was using the "apt-get" commands, but I don't really know. I would just like this fixed so I can get my data and containers back up and going on Portainer. Yes, I also know Portainer and Docker are different and Portainer is only a utility for Docker.
Edit: to add to this, I have also reinstalled Portainer and Docker. It was not a full Docker refresh, but the standard one that keeps some of the files, since I don't want to remove the directories where some containers keep their configs and data files.
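A first thing to check for the "read-only file system" error is whether the mount really is read-only, and whether Docker was switched to a snap install during the upgrade; a snap-confined daemon cannot create top-level host paths like /docker. A hedged diagnostic sketch:
findmnt -o TARGET,OPTIONS /                 # is / flagged ro in the mount options?
snap list docker 2>/dev/null                # was Docker reinstalled as a snap during the upgrade?
docker info --format '{{.DockerRootDir}}'   # where the currently running daemon stores its data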

How to make multiple independent attachments to the same docker container?

Maybe this is a trivial question, but here's my problem:
I attached to a running Docker container, and after some use I needed to run a unit test and gdb at "the same time".
So I opened another shell tab (Konsole tab) and attached again to the same container with $ docker attach container_name, but everything I did echoed in both attachments. If I execute cd /home/user/folder_foo, the other tab "does the same", and both Konsole tabs end up in the same folder, as if the same command were echoed to both tabs. Maybe it's a single-session structure and what I want isn't even possible.
I really need to do two things in parallel in the same Docker container; how can it be done?
$ docker --version
Docker version 20.10.9, build c2ea9bc
I am using Ubuntu 21.04
Run multiple services in a container
It is generally recommended that you separate areas of concern by using one service per container.
But for development purposes, you can follow the guide mentioned above.
Additionally, there is a similar answer already provided:
You can run docker exec -it <container> bash from multiple terminals to launch several sessions connected to the same container.
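The difference is that docker attach connects every client to the stdio of the container's main process, which is why both tabs echoed each other, while docker exec starts a new process on each invocation. For the unit-test-plus-gdb case from the question, that would look like:
docker exec -it container_name bash   # terminal 1: run the unit test here
docker exec -it container_name bash   # terminal 2: an independent shell for gdb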

Yarn is slow and freezes when run via docker exec

I recently started using docker (desktop version for Windows) for my node project development. I have a docker-compose file with volume configuration to share the project source files between my host machine and docker container.
When I need to install a new node module, I can't do that on my host machine, of course, because it's Windows and docker is Linux or something, so I run docker exec -it my-service bash to "get into" the docker container and then run yarn add something from inside it. The problem is - yarn runs extremely slow and freezes almost all of the time. The docker container then becomes unresponsive, and I cannot cancel the yarn command or stop the container using docker-compose stop. The only way I've found to recover is to restart the whole docker engine. So then, to finally install the new module after the docker engine restarts, I delete the node_modules folder and do the same steps again. This time it's still extremely slow, but somehow it doesn't freeze and actually installs the new module. But after some time, when I need to do that again, it freezes again and I have to delete node_modules again...
I would like to find the reasons why the yarn command is so slow and why it freezes.
I'm new to docker, so maybe my workflow is not correct.
I tried increasing RAM limit for docker engine from 2 GB to 8 GB and CPUs limit from 1 to 8, but it had absolutely no effect on the yarn command behavior.
My project was using file watching with chokidar, so I thought maybe that could cause the problem, but disabling it had no effect either.
I also thought the problem could be the file sharing mechanism between host machine (Windows) and docker container, but if it is the case, I do not know how to fix it. I suppose I then should somehow separate node_modules from the source directory and make them private to docker container, so that they are not shared with host machine.
This is quite a severe problem, as it slows the development down a lot. Please share any of your ideas about what could be wrong. I would even consider changing my development environment to Linux if the problem was caused by the file sharing mechanism between Windows and docker container.
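The last idea in the question (separating node_modules from the shared source directory) is the usual fix for slow bind-mount I/O on Docker Desktop for Windows. A minimal docker-compose sketch, assuming a service named my-service with its code in /app (both names hypothetical):
services:
  my-service:
    build: .
    volumes:
      - ./:/app                          # source shared with the Windows host (slow I/O)
      - node_modules:/app/node_modules   # named volume masks node_modules (Linux-native, fast)
volumes:
  node_modules:
With this layout, yarn add runs against the Linux-native volume instead of the Windows filesystem, which is typically what makes it so slow.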

Is there a way to enable kubernetes on docker via command line on linux?

I am using Docker and trying to enable Kubernetes and set CPU and memory via the command line.
I have looked at this answer but unfortunately cannot find this file.
Is there any way to enable Kubernetes on Docker for Mac via terminal?
Docker does not have an app-ified version for Linux that I know of, so there is no Linux equivalent of the Docker for Mac/Windows app. There are many tools to install Kubernetes locally on Linux, so they probably didn't see much reason to make something new. Minikube is the traditional one, but you can also check out microk8s, k3s, KinD, and many others.
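For example, minikube (one of the options above) lets you set CPU and memory from the command line when creating the cluster:
minikube start --cpus=4 --memory=8192   # 4 CPUs and 8192 MB for the cluster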

Unable to install Docker on Azure VM

I have an Azure VM on which I am trying to install docker. The installation proceeds smoothly. When I try to run the hello world example of docker, I get this error docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
This is the procedure I followed. I have run docker with sudo. I can't figure out what is causing the problem; any help in figuring this out would be much appreciated. I have scoured the internet trying to fix this issue, but nothing has worked. I have uninstalled Docker completely and reinstalled it again, and nothing seems to work.
EDIT: I have narrowed down the problem to the fact that the daemon has to be started manually. How do I ensure the daemon starts running as soon as the machine is up or docker is started? Running sudo dockerd and then running docker run hello-world seems to work.
It looks like you are trying to run docker commands as a non-root user.
To achieve that you have to add your user to the docker group, but bear in mind that this can be a security risk, as this group grants root equivalent privileges.
You can find the detailed configuration steps in the post-installation steps for Linux, and information about the risks in the Docker daemon attack surface description.
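Those post-installation steps boil down to the following (from the Docker docs):
sudo groupadd docker            # create the docker group if it doesn't already exist
sudo usermod -aG docker $USER   # add your user to the group
newgrp docker                   # re-evaluate group membership in the current shell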
Seems like your daemon isn't running. Which VM did you create? Linux based? If so, there are a few things regarding the daemon you must do in order to make Docker work. You need to configure your "daemon.json", or create one if you don't have it. Here's the Docker documentation that might help you with it:
https://docs.docker.com/config/daemon/
Best of luck!
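Regarding the EDIT in the question: on a systemd-based distribution (which most Azure Linux VM images are), you can make the daemon start at boot instead of running sudo dockerd by hand:
sudo systemctl enable --now docker   # start the daemon now and on every boot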
