How to find CSS/JS files on a server for a specified application that uses Docker (Linux)

I am using https://github.com/MLstate/PEPS for mail by installing it on our Ubuntu server. It uses Docker containers, and I have tried to figure out how to access application files like CSS/JS inside those containers, but without success. The furthest I got was going to /var/lib/docker/containers/CONTAINERID, but the contents of those directories all look the same and the CSS/JS files are nowhere to be seen.

The easiest way to access those files is to run an interactive shell in the container. For a container that is already running you can do that with docker exec -it <CONTAINER_ID> /bin/bash (docker run -it <IMAGE> /bin/bash would start a new container from an image instead).
Regarding the files: Docker images and containers are composed of layers and volumes. The files you are looking for are probably located under /var/lib/docker/aufs/layers (the exact path depends on your storage driver), but you can avoid digging through them directly and instead inspect them from an interactive session.
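For example, a minimal sketch (the container name peps and the asset path are hypothetical; adjust them to the actual PEPS setup):
# open a shell in the running container
docker exec -it peps /bin/bash
# inside the container, locate the assets
find / \( -name "*.css" -o -name "*.js" \) 2>/dev/null | head
# back on the host, copy a file out of the container to edit it (path is hypothetical)
docker cp peps:/path/to/style.css ./style.css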

Related

How to develop node.js apps with docker on Windows?

I am developing a nodejs app using Windows 10 WSL with a remote container in Visual Studio Code.
What are the best practices for the Dockerfile and docker-compose.yml at this time?
Since we are in the development phase, we don't want to COPY or ADD the program source code in the Dockerfile (it's not practical to recreate the image every time we change one line).
I use docker-compose to bind the folder with the source code on the Windows side as a volume, but in that case the source code folder and the files created from the Docker container all end up owned by root.
In the Docker container, node.js runs as the unprivileged node user.
For the above reasons, node.js does not have write permission to the folders you bind.
Please let me know how to solve this problem.
I found a way to specify a UID or GID, but I could not do so because I am binding from Windows.
You can optionally mount the Node code using NFS in docker-compose; a sketch follows.
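A minimal sketch using the Docker CLI (the NFS server address 192.168.1.10 and export path /export/code are assumptions; the same type, o, and device options map to driver_opts in a docker-compose volumes: entry):
# create a named volume backed by an NFS export
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/export/code \
  node_code
# mount it into the container; ownership is then governed by NFS rather than the Windows bind
docker run -it -v node_code:/app -w /app -u node node:10 yarn dev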

Docker - accessing files inside container from host

I am new to Docker.
I ran a node-10 image, and inside the running container I cloned a repository and ran the app, which started a server with a file watcher. I need to access the codebase inside the container and open it in an IDE running on the Windows host. Once that works, I also want changes I make to the files in the IDE to trigger the file watcher in the container.
Any help is appreciated. Thanks.
The concept you are looking for is called volumes. You need to start a container and mount a host directory inside it. For the container it will be a regular folder, and it will create files in it. For you it will also be a regular folder. Changes made by either side will be visible to the other.
docker run -v /a/local/dir:/a/dir/in/your/container <image>
Note though that you can run into permission issues that you will need to figure out separately; one common workaround is sketched below.
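A hedged sketch of that workaround on a Linux host (the node:10 image is an assumption; on Windows/WSL the UID mapping works differently): run the container as your own UID/GID so the files it creates are owned by you.
docker run -v /a/local/dir:/a/dir/in/your/container -u "$(id -u):$(id -g)" node:10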
It depends on what you want to do with the files.
There is the docker cp command that you can use to copy files to/from a container (see the sketch below).
However, it sounds to me like you are using Docker for development, so you should mount a volume instead: you mount a directory on the host as a volume in the container, so anything written to that directory shows up in the container, and vice versa.
For instance, if the code base you develop against is in C:\src on your Windows machine, you run Docker like docker run -v c:\src:/app <image>, where /app is the location that node is looking in. However, for Windows there are a few things to consider, since Docker is not native on Windows, so have a look at the documentation first.
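For the docker cp route, a minimal sketch (the container name mynode and the paths are hypothetical):
# copy the codebase out of the running container to the host
docker cp mynode:/usr/src/app ./app
# after editing on the host, copy a changed file back in
docker cp ./app/server.js mynode:/usr/src/app/server.js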
I think you should mount a volume for the source code and edit your code from your IDE normally:
docker run -it -v "$PWD":/app -w /app -u node node:10 yarn dev
Here Docker will start a container from the node:10 image, set the working directory to /app, mount the current directory to /app, and run yarn dev at startup as the node user (a non-root user).
Hope this is helpful.

How do I install Nexus 3 in a Docker container with no internet connection?

I want to install Nexus 3 in a Docker container on CentOS. But my CentOS server with Docker installed on it has no access to the internet. I want to use this command:
docker pull sonatype/nexus3
Is there a standalone, offline file or group of files to give me what I need?
I have only Windows machines with no Docker installed that can access the internet.
You could try to set up your own Docker registry server on your Windows machine and then have your CentOS server talk to that server to get the files it needs. This seems like overkill, though.
Here is the link to set that up: https://docs.docker.com/registry/deploying/
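A minimal sketch of that setup, assuming Docker can be installed on the Windows machine and the hostname winhost is reachable from the CentOS server (both names are assumptions; the CentOS daemon may also need winhost:5000 listed under insecure-registries, since this registry speaks plain HTTP):
# on the Windows machine: run a local registry and push the image into it
docker run -d -p 5000:5000 --name registry registry:2
docker pull sonatype/nexus3
docker tag sonatype/nexus3 winhost:5000/sonatype/nexus3
docker push winhost:5000/sonatype/nexus3
# on the CentOS server: pull from that registry
docker pull winhost:5000/sonatype/nexus3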
You could also use something like VirtualBox to create a CentOS VM on the Windows machine and then set up Docker inside it. This would give you CentOS + Docker + internet.
Yes, you can save the image to a file and then load it on the server:
Download the image to your workstation with docker pull sonatype/nexus3
Save the image to a tar file with docker save sonatype/nexus3 > nexus3.tar (see the docker save docs)
Transfer the image to the server via USB/LAN/etc.
Import the image on the CentOS server with docker load --input nexus3.tar (see the docker load docs)
docker save: produces a tarred repository to the standard output stream. It contains all parent layers, and all tags + versions, or the specified repo:tag, for each argument provided.
docker load: loads a tarred repository from a file or the standard input stream. It restores both images and tags.
You will now have the image loaded on your machine. There are probably other ways, but this is the simplest I can think of and involves no 3rd-party tools. You can also gzip the file, per the documentation; a sketch follows.
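A minimal sketch of the gzip variant (docker load detects the compression automatically):
docker save sonatype/nexus3 | gzip > nexus3.tar.gz
docker load --input nexus3.tar.gz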

What is the file-system of a Docker container? On which file system does an application running inside this container runs on?

Basically, I am running Docker on my Windows 10 machine. I have mounted a Windows directory inside this container to access the files on my Windows machine, on which a couple of tasks are to be performed.
Which file system runs inside a Docker container?
Is it the same as that of the OS the container is based on? For instance, if I run a container with Ubuntu as the base OS, will it be the file system of the version of Ubuntu running inside the container?
Or is it the one of the host that the Docker daemon is running on?
Also, I am running an application inside this container which accesses files in my Windows directory as well as creating a couple of files. Now, these files are being written to Windows, hence they follow the file system of Windows (NTFS).
So, how does it work? (A different file system inside the Docker container and the Windows file system, both in conjunction?)
Which file system runs inside a docker container?
For a mounted volume, the one from the Docker host (Windows NTFS or the Linux host's filesystem); the container's own layers sit on a union filesystem provided by Docker's storage driver (aufs, overlay2, etc.) on the host.
$ docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /opt/webapp.
If the path /opt/webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content.
Once the mount is removed, the content is accessible again.
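A quick way to see this from inside a container, assuming a Linux host using the overlay2 storage driver:
docker run --rm -v /src/webapp:/opt/webapp ubuntu df -T / /opt/webapp
# / shows type overlay (the container's union filesystem), while /opt/webapp
# shows the host filesystem backing the mount (e.g. ext4)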
Now, these files are being written to Windows, hence follow the file system of Windows(NTFS).
Yes, and that filesystem is case-insensitive, which can have side effects (as illustrated in issue 18756).

How to deploy a Docker image to make changes in the local environment?

EDIT +2: Just FYI, I am the root user, which means I do not have to type sudo (superuser do) every time I run a privileged command.
Alright, so after about 24 hours of researching Docker, I am a little upset, if I have my facts straight.
As a quick recap, Docker serves as a way to write code or configuration file changes for a specific web service, run environment, or virtual machine, all from the cozy confines of a Linux terminal/text file. This is beyond a doubt an amazing feature: to have code or builds you made on one computer work on an unlimited number of other machines is truly a breakthrough. I am annoyed, though, that the terminology is muddled with respect to what containers are and what images are (images are save points of layers of code, pulled from Docker's servers or created from containers, which in turn require a base image to build on; Dockerfiles serve as a way to automate the build process by running all the desired layers and rolling them into one image so it can be accessed easily).
The catch with Docker is that, sure, it can be deployed on a variety of different operating systems and use their respective commands. But those commands do not really come to pass on, say, the local environment. While running some tests on a Docker build working with CentOS, the basic command structure goes:
FROM centos
RUN yum search epel
RUN yum install -y epel-release.noarch
RUN echo epel installed!
So this works within the docker build and says it successfully installs it.
The same can be said with Ubuntu by running apt-cache instead of yum. But going back to the CentOS VM, it DOES NOT show that epel has been installed, because when attempting to run the command
yum remove epel-release.noarch
it says "no packages were to be removed", yet there is a package with that name. So then, if Docker is able to be multi-platform, why can it not actually create those changes on the local platform/image we are targeting? The Docker build runs a simulation of what is going to happen in that particular environment, but I cannot seem to make it come to pass. This defeats one of my intended purposes for Docker if it cannot change anything local to the system one is using, unless I am missing something.
Please let me know if anyone has a solution to this dilemma.
EDIT +1: OK, so I figured out yesterday that what I was trying to do was view and modify the container, which can be done with either docker logs containerID or docker run -t -i img /bin/sh, the latter of which puts me into an interactive shell where I can make changes to the container. Still, I want to know if there's a way to make Docker communicate with the local environment from within a container.
So, I think you may have largely missed the point behind Docker, which is the management of containers that are intentionally isolated from your local environment. The idea is that you create containerized applications that can be run on any Docker host without needing to worry about the particular OS installed or configuration of the host machine.
That said, there are a variety of ways to break this isolation if that's really what you want to do.
You can start a container with --net=host (and probably --privileged) if you want to be able to modify the host network configuration (including interface addresses, routing tables, iptables rules, etc).
You can expose parts of (or all of) the host filesystem as volumes inside the container using the -v command line option. For example, docker run -v /:/host ... would expose the root of your host filesystem as /host inside the container.
Normally, Docker containers have their own PID namespace, which means that processes on the host are not visible inside the container. You can run a container in the host PID namespace by using --pid=host.
You can combine these various options to provide as much or as little access to the host as you need to accomplish your particular task; a sketch follows.
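For example, a hedged sketch combining the options above (use with care; the ubuntu image is an assumption):
docker run --rm -it --net=host --pid=host --privileged -v /:/host ubuntu bash
# inside: host processes are visible to ps, the host network stack is in use,
# and the host's root filesystem is available under /host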
If all you're trying to do is install packages on the host, a container is probably the wrong tool for the job.
