"Docker context ls" and "sudo docker context ls" don't have same setting options - linux

I am a Docker newbie. I just installed Docker and Docker Desktop per the official instructions. Soon after, I started having a problem: Docker Desktop does not show my containers. I think it's because I haven't set the contexts to be the same with and without sudo privileges, according to this post.
But I don't understand why I only have the "default" context for "sudo docker context ls". Please help me with this. Many thanks!
OS: Ubuntu 20.04.5 LTS

The docker context data is stored in the user's home directory, and sudo changes the user and therefore the home directory. Without sudo, Docker looks in /home/yourname/.docker/contexts; with sudo you are effectively root, so it looks in /root/.docker/contexts instead.
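You can see the two locations directly (assuming a default install and that each user has used docker at least once, so the directories exist; the exact layout can vary by Docker version):
ls ~/.docker/contexts/meta          # contexts for your own user
sudo ls /root/.docker/contexts/meta # contexts root sees, i.e. what "sudo docker context ls" reads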
You do not need Docker Desktop on native Linux. Installing Docker (what the Docker documentation now calls "Docker Engine") through your OS's package manager is sufficient. If you are on a single-user system, you can grant your ordinary user access to the Docker socket, but be aware that this access makes it trivial to root the entire host.
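If you do decide to grant that access, the usual approach (assuming the Engine packages created a docker group, which they normally do) is:
sudo usermod -aG docker $USER
# then log out and back in (or run "newgrp docker") for the group change to take effect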
When you do uninstall Docker Desktop, there are additional files in your home directory you need to remove:
rm -rf $HOME/.docker/desktop
$EDITOR $HOME/.docker/config.json
# and remove `credsStore` and `currentContext`
Once you've done this cleanup, you'll revert to Docker's default behavior of using the $DOCKER_HOST environment variable or, if that's unset, the system-wide /var/run/docker.sock socket. That socket file is the same regardless of which user you are, and it isn't affected by sudo.
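A quick way to check that both views line up again (a sketch; it assumes the cleanup above is done):
unset DOCKER_HOST
docker context ls        # should list only "default", pointing at unix:///var/run/docker.sock
sudo docker context ls   # should now show the same thing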

Related

Docker - accessing files inside container from host

I am new to Docker.
I ran a node:10 image, and inside the running container I cloned a repository and ran the app, which started a server with a file watcher. I need to access the codebase inside the container and open it in an IDE running on the Windows host. I also want changes I make in the IDE to trigger the file watcher in the container.
Any help is appreciated. Thanks.
The concept you are looking for is called volumes. You need to start a container and mount a host directory inside it. For the container it will be a regular folder, and it will create files in it. For you it will also be a regular folder. Changes made by either side will be visible to the other.
docker run -v /a/local/dir:/a/dir/in/your/container <image>
Note though that you can run into permission issues that you will need to figure out separately.
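One common workaround for those permission issues on a Linux host (a sketch, not the only option) is to run the container as your own UID/GID, so files it creates in the mounted directory are owned by you:
docker run -v /a/local/dir:/a/dir/in/your/container -u "$(id -u):$(id -g)" node:10
# node:10 matches the question; any image works the same way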
It depends on what you want to do with the files.
There is the docker cp command that you can use to copy files to/from a container.
However, it sounds to me like you are using Docker for development, so you should mount a volume instead: you mount a directory on the host into the container, so anything written to that directory shows up in the container, and vice versa.
For instance, if the code base you develop against is in C:\src on your Windows machine, you run something like docker run -v c:\src:/app, where /app is the location node is looking in. However, there are a few things to consider on Windows, since Docker is not native there, so have a look at the documentation first.
Hi, I think you should mount a volume for the source code and edit your code from your IDE normally:
docker run -it -v "$PWD":/app -w /app -u node node:10 yarn dev
Here Docker will start a container from the node:10 image, set the working dir to /app, mount the current dir to /app, and run yarn dev at startup as the node user (a non-root user).
Hope this is helpful.

How to deploy a Docker image to make changes in the local environment?

EDIT +2: Just FYI, I am the root user, which means I do not have to type out superuser do (sudo) every time I run a privileged command.
Alright, so after about 24 hours of researching Docker, I am a little upset, if I have my facts straight.
As a quick recap, Docker serves as a way to write code or configuration changes for a specific web service, runtime environment, or virtual machine, all from the cozy confines of a Linux terminal/text file. This is beyond a doubt an amazing feature: having code or builds you made on one computer work on an unlimited number of other machines is truly a breakthrough. I am annoyed, though, that the terminology around containers and images is confusing (images are save points of layers of code that come from Docker's servers or can be created from containers, which themselves require a base image to start from; Dockerfiles automate the build process by running all the desired layers and rolling them into one image so it can be accessed easily).
The catch with Docker is that, sure, it can be deployed on a variety of different operating systems and use their respective commands, but those commands do not actually take effect on, say, the local environment. While running some tests on a Docker build working with CentOS, the basic structure goes
FROM centos
RUN yum search epel
RUN yum install -y epel-release.noarch
RUN echo epel installed!
So this works within the docker build and says it successfully installs it.
The same can be said for Ubuntu by running apt-cache instead of yum. But going back to the CentOS VM, it does NOT show that epel has been installed, because when attempting to run the command
yum remove epel-release.noarch
it says "no packages were to be removed yet there is a package named ...". So then, if docker is able to be multi-platform why can it not actually create those changes on the local platform/image we are targeting? The docker builds run a simulation of what is going to happen on that particular environment but i can not seem to make it come to pass. This just defeats one of my intended purposes of the docker if it can not change anything local to the system one is using, unless i am missing something.
Please let me know if anyone has a solution to this dilemma.
EDIT +1: OK, so I figured out yesterday that what I was trying to do was view and modify the container, which can be done with either docker logs containerID or docker run -t -i img /bin/sh, which puts me into an interactive shell to make container changes there. Still, I want to know if there's a way to make Docker communicate with the local environment from within a container.
So, I think you may have largely missed the point behind Docker, which is the management of containers that are intentionally isolated from your local environment. The idea is that you create containerized applications that can be run on any Docker host without needing to worry about the particular OS installed or configuration of the host machine.
That said, there are a variety of ways to break this isolation if that's really what you want to do.
You can start a container with --net=host (and probably --privileged) if you want to be able to modify the host network configuration (including interface addresses, routing tables, iptables rules, etc).
You can expose parts of (or all of) the host filesystem as volumes inside the container using the -v command-line option. For example, docker run -v /:/host ... would expose the root of your host filesystem as /host inside the container.
Normally, Docker containers have their own PID namespace, which means that processes on the host are not visible inside the container. You can run a container in the host PID namespace by using --pid=host.
You can combine these various options to provide as much or as little access to the host as you need to accomplish your particular task.
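As a concrete illustration (a sketch only; this combination effectively hands the container root access to the host):
docker run --rm -it --privileged --net=host --pid=host -v /:/host centos /bin/bash
# inside this shell the host's filesystem is under /host, host processes show up
# in ps, and the host's network interfaces can be configured directly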
If all you're trying to do is install packages on the host, a container is probably the wrong tool for the job.

docker: installing a node.js application has issues, since docker runs as root

I set up a Docker instance via pull ubuntu and then via base-image/docker, and then successfully installed node.js on top of this.
However, when I attempt to pull the repo of a node.js app that I'm working on, I get to an npm install step and then run into trouble, because that step expects NOT to be run as root, and I have started the container via
docker run -name="{name}" -t -i {my custom docker container mirroring base-image) /bin/bash
which has logged me in as root. Is there a way to run docker not as root?
Yes -- you'll need to create the other user account inside the container according to whatever your container's Linux distro expects (here is an Ubuntu example).
Once you've got the user account set up, you can use the Dockerfile USER parameter to run the remaining commands in the Dockerfile as that user. Please see the PostgreSQL example for a full use case.
Where did the postgres user come from in that example? Debian packages create any users they need when they are installed. If you would like to create your own user, you could add a RUN useradd line to your Dockerfile. For a full example, you could look at the Jira Dockerfile in this Atlassian blog.
As the operator you can also decide the user account to use at docker runtime, using the -u parameter. This would override the USER chosen in the Dockerfile.
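A minimal sketch of both approaches (the user name, image name, and base image here are just examples):
# Dockerfile (sketch)
FROM ubuntu
RUN useradd --create-home appuser
USER appuser
# ...any remaining RUN/CMD instructions now execute as appuser instead of root
And to override the user at run time instead:
docker run -u root my-image whoami   # overrides the Dockerfile USER for this run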

Within lxc/docker container - what happens if apt-get upgrade includes kernel update?

I am reading a lot of Docker guides which often use some Ubuntu base image and, either directly in the Dockerfile or in a bash script that gets copied to the container and run on start, include things like apt-get upgrade.
As I understand it, the container still uses the host's kernel. So what happens when the apt-get upgrade includes a kernel upgrade? Does it create a /boot and install the files as usual, but the underlying LXC has some pass-through/whitelist mechanism for specific directories that always come from the host, so it ignores those files in the guest container?
Thanks
fLo
The host's /boot is not visible to a Docker container, and the kernel image package should not be installed in such a container, since it's not needed. (Even if it is, though, it's entirely inert.)
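You can see this for yourself (assuming a local Docker Engine install; the ubuntu image is just an example):
docker run --rm ubuntu ls /boot   # typically empty: the image ships no kernel
docker run --rm ubuntu uname -r   # prints the host's kernel version
uname -r                          # the same value when run directly on the host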

XAMPP or any other service tool in /opt? Security

I am developing with XAMPP for Linux and Tomcat (similar to XAMPP on Windows). Many programs like IDEA, Tomcat and XAMPP are recommended to be installed under /opt. Now I have heard that it is not recommended to run services as root, but on Ubuntu (which I am using) unpacking anything to /opt implies that it belongs to the root owner and root group. This may be specific to XAMPP, as per the instructions on their Linux page:
Step 2: Installation. After downloading, simply type in the following commands:
Go to a Linux shell and login as the system administrator root:
su
Extract the downloaded archive file to /opt:
tar xvfz xampp-linux-1.8.1.tar.gz -C /opt
Warning: Please use only this command to install XAMPP. DON'T use any Microsoft Windows tools to extract the archive, it won't work.
Warning 2: already installed XAMPP versions get overwritten by this command.
That's all. XAMPP is now installed below the /opt/lampp directory.
Step 3: Start. To start XAMPP simply call this command:
/opt/lampp/lampp start
Placing it here implies that Apache must be run as root, as one is only able to start it with sudo on Ubuntu.
This may be an issue specific to Ubuntu. Is it? Because XAMPP is a development tool, I posted this here, as I am more likely to find an appropriate answer from developers who use it on Ubuntu (and other Linux systems). I would appreciate any information on whether the same problem occurs on other systems. I notice my production environment has Tomcat installed in /opt too, but it belongs to tomcat:tomcat.
The question here is how to get around this for all tools under /opt, because even though XAMPP may not be the tool for my needs, I still want to place Tomcat under /opt to replicate my production environment, and the same thing will surely happen there too, unless this is just an Ubuntu issue?
Ubuntu and some other distributions differ from the general Linux setup in that the account you create when installing the OS is added to specific groups, which can be viewed with the following command:
groups username
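On a stock Ubuntu install the output typically looks something like this (the exact group names vary by release):
username : username adm cdrom sudo dip plugdev lpadmin sambashare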
You will notice that root is not amongst these. By default it is also not possible to log in as, or su to, the root account, because the root password is locked. Instead, sudo is the mechanism that grants other accounts permission to run commands as root; which accounts may do so is configured in /etc/sudoers.
Thus launching services from /opt does not run them as root.
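To run something like Tomcat from /opt without root, a common pattern (a sketch; the user name and paths are examples, mirroring the tomcat:tomcat ownership mentioned in the question) is to give the installation a dedicated system user:
sudo useradd --system --home /opt/tomcat --shell /usr/sbin/nologin tomcat
sudo chown -R tomcat:tomcat /opt/tomcat
sudo -u tomcat /opt/tomcat/bin/startup.sh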
