I have a Docker environment that contains Linux containers.
On one of those pods I am running a program that tries to access a Windows server under a secure domain (my own).
How can I tell the container to run as a specific user, or, conversely, how can I allow my Docker environment access to my server?
I prefer the first option if possible.
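As a sketch of the first option (assuming the goal is simply to control which user the process inside the container runs as; authenticating against the Windows domain itself is a separate concern not covered here), Docker lets you override the user at run time, or bake it in with a USER instruction in the Dockerfile:

    # run the container's main process as a specific non-root user,
    # here by numeric UID:GID; "myimage" is a placeholder image name
    docker run --user 1000:1000 myimage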
Related
I am currently prototyping Docker hosted on WSL and Ubuntu, which will be located on a compliant workstation. Being an early prototype, we want it set up heavily restricted to sidestep compliance issues.
Now, a piece of the puzzle is being able to restrict users to only the few commands that will allow them to accomplish their job. For example, can I use Unix permissions to restrict Docker commands such as docker network create, and flags such as --privileged, --mount, etc.? The goal here is to deploy a specific configuration and ensure that it cannot be changed by non-admin users. Thank you.
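One commonly suggested approach, sketched here as an assumption rather than something the prototype necessarily allows: do not expose the Docker socket or binary to the users at all, and instead grant them sudo access only to a wrapper script that whitelists the subcommands and rejects the flags you want to forbid.

    #!/bin/sh
    # docker-restricted: hypothetical wrapper; users get sudo rights to this
    # script only, not to the docker binary or /var/run/docker.sock
    case "$1" in
      ps|logs|start|stop|restart)
        ;;                               # allowed subcommands
      *)
        echo "docker $1 is not permitted" >&2
        exit 1
        ;;
    esac
    for arg in "$@"; do
      case "$arg" in
        --privileged|--mount|--mount=*|-v|--volume|--volume=*)
          echo "flag $arg is not permitted" >&2
          exit 1
          ;;
      esac
    done
    exec docker "$@"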
We are developing an application that uses 3 Docker containers. The development is done in a Windows environment, where we are hitting issues that prevent the containers from running.
Those issues seem to be related to running the containers under Windows.
As a workaround, we are thinking of configuring a remote server in IntelliJ that accesses a remote Linux virtual machine.
Please note that when we access the VM using PuTTY and run the application there, the containers run successfully.
So my question is: is it possible to continue development using that workaround? All we need is to have all containers running and to launch and view the application interfaces from Windows.
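A sketch of one way such a workaround is often wired up (assuming a reasonably recent Docker CLI on the Windows side and SSH access to the Linux VM, neither of which the question confirms): point the local Docker client at the daemon running on the VM, so the containers build and run on Linux while you keep working from Windows.

    # create a context that points the local Docker CLI at the VM's daemon
    # ("user@linux-vm" is a placeholder for the VM's SSH login)
    docker context create linux-vm --docker "host=ssh://user@linux-vm"
    docker context use linux-vm

    # subsequent commands now run against the remote daemon
    docker ps

Published container ports are then exposed on the VM, so the application interfaces can be reached from Windows via the VM's address (or an SSH tunnel).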
I am a newbie with Docker and I need some clarification here; let me try to explain.
Let's say I have a Windows machine with Docker Desktop installed on it.
What should the structure be? Do I first run some Linux distro container and install a LAMP server inside that container, or do I create an Apache container, a MySQL container, and a Linux container in parallel?
Secondly, I noticed that there are some WordPress containers, which is totally confusing, because to run WordPress I definitely need LAMP, so how would this architecture work?
Will it be like:
1 Linux Container, on which I install LAMP and then WordPress?
But in that case, what would be the purpose of the WordPress container?
Or
1 Linux Container
1 Apache Container
1 MySQL Container
1 WordPress Container
and all of them interlinked?
I am really confused, please help me.
In general you will try to have 1 container = 1 service / 1 purpose and keep the containers very small.
That means you will have your MySQL in one container and your Apache server in another container. Both containers are built from Linux-based images (here you can go and read about Docker and its image layering technique).
Coming back to your architecture, you need to put WordPress somewhere a web server is running, because without a server the software has no power to do anything; that means you will put it in the Apache container. Eventually you will also want a volume (check the Docker docs) to persist your static data.
Lastly, you will want to connect this container with the MySQL container to be able to persist the important data there. You can do that with docker-compose (see the docs) and start both containers with one command; a minimal sketch follows below.
Now the cool part: this is already done for you in the bitnami/wordpress image, and I am sure you can find a lot more on Docker Hub.
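A minimal sketch of that docker-compose setup, assuming the official wordpress image (which bundles Apache and PHP) and a mysql image; the tags, passwords, and volume names below are placeholders, not something taken from the answer above:

    # write a minimal docker-compose.yml and start both containers together
    cat > docker-compose.yml <<'EOF'
    services:
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: example-root-pw
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: example-pw
        volumes:
          - db_data:/var/lib/mysql
      wordpress:
        image: wordpress:latest
        ports:
          - "8080:80"
        environment:
          WORDPRESS_DB_HOST: db
          WORDPRESS_DB_NAME: wordpress
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: example-pw
        volumes:
          - wp_data:/var/www/html
    volumes:
      db_data:
      wp_data:
    EOF

    # one command starts (and wires together) both containers
    docker compose up -d    # or "docker-compose up -d" with the standalone binary

WordPress is then reachable on port 8080 of the Docker host, and the two named volumes keep the database and uploaded files across container restarts.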
I want to provide a minimal CentOS/RedHat VM to a staff member to log into using a non-root user account. I made the Docker socket available to the user to run Docker 1.12 CLI commands by chgrp'ing the socket and adding the account to the docker group.
Assuming we leave the TCP API and all CaaS/PaaS products out of this question: on a VM, is it possible, using SELinux, seccomp and/or Linux capabilities, or anything else (including GRSec/PaX), to prevent Docker containers from being used to gain access to the root user on the Docker host?
This post appears not to turn up a definitive answer.
If you are exposing your host's Docker socket to the container, then you've essentially given them root privileges on the host.
If you are trying to provide an isolated Docker environment within a container, you should use Docker-in-Docker. See the dind-tagged variants of the official docker image. This is how Play with Docker works.
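A minimal sketch of that Docker-in-Docker setup (the container name is arbitrary; note that dind itself needs --privileged, and TLS is disabled here only for brevity):

    # start an isolated Docker daemon inside a privileged container
    docker run -d --privileged --name docker-sandbox \
      -e DOCKER_TLS_CERTDIR="" docker:dind

    # once the inner daemon is up, drive it through its own CLI;
    # containers created here never appear in the outer daemon's "docker ps"
    docker exec docker-sandbox docker run --rm alpine echo "hello from the inner daemon"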
I am working on an embedded platform where I have an important application that handles sensitive data. I want to protect this application from other applications. For that I came up with containers.
I have set up a container on my Linux PC using LXC and then run an application in the container. From the container, I can't access or see any application running on the host, but the reverse is possible (I can access the application in the container from the host). Is there any way to isolate the container from the host machine? Are there any alternatives?
Is there any way to isolate the container from the host machine?
No, sorry. If you want to prevent other applications from accessing the data in the contained application, those other applications must be the ones that are contained. The hypervisor (here, the host system) will always have full access to all contained applications, as it needs that access to do its job.
If someone has access to the host machine, it will be possible to access the containers running on it.
What you could do is have a minimal host installation, with no services running other than Docker, and run all your other services in containers, keeping your app container isolated from the other services.
There are two things you could do. The better way would be to just run your app as a different user and not give your main account any access to that user's folders and files. The second way would be to copy your entire system into a subfolder and use chroot, but that is pretty difficult to set up and probably overkill.
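A minimal sketch of the first option, assuming a typical Linux userland; the account and path names are placeholders:

    # create a dedicated account for the sensitive application
    sudo useradd --create-home appowner

    # make its home directory (and the data inside) unreadable to other accounts
    sudo chmod 700 /home/appowner

    # run the application as that account
    sudo -u appowner /home/appowner/run-app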