Docker Image Protection - Linux

I have created a simple Docker image of some Python code and it is working fine.
But I am able to copy the contents of the files from the Docker container to my local machine. Is there any way to stop this, and how can I secure the Docker image?

To protect the Docker image created for the Python project, I converted the .py files to .pyd for Windows and .so for Linux, which makes it difficult to reverse engineer the source code. Users can still export the project files but cannot see the actual code.
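The answer does not name the tool used for the conversion; a minimal sketch, assuming Cython is used to compile the modules and that myapp.py is a hypothetical module name (a C compiler and the Python headers must be available in the build environment):

pip install cython
cythonize -i -3 myapp.py      # builds myapp.*.so on Linux (.pyd on Windows) next to the source; Cython here is an assumption
rm myapp.py myapp.c           # ship only the compiled extension inside the image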

Related

How to develop node.js apps with docker on Windows?

I am developing a Node.js app using Windows 10 WSL with a remote container in Visual Studio Code.
What are the best practices for Dockerfile and docker-compose.yml at this time?
Since we are in the development phase, we don't want to COPY or ADD the program source code in the Dockerfile (it's not practical to recreate the image every time we change one line).
I use Docker Compose to bind-mount the folder with the source code on the Windows side as a volume, but in that case the source code folder and the files from the Docker container all end up with root permission.
In the Docker container, Node.js runs as the regular node user.
For the above reasons, Node.js does not have write permission to the bound folders.
Please let me know how to solve this problem.
I found a way to specify a UID or GID, but I could not use it because I am binding from Windows.
You can optionally mount the Node code using NFS in Docker Compose.
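A minimal docker-compose.yml sketch of that idea, assuming an NFS export is reachable from the Docker host (nfs-server.local, /export/app and the node:18 tag are hypothetical placeholders):

version: "3.8"
services:
  app:
    image: node:18
    user: node
    working_dir: /usr/src/app
    volumes:
      - source:/usr/src/app
    command: ["npm", "run", "dev"]

volumes:
  source:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs-server.local,rw,nolock,soft"
      device: ":/export/app"

With this approach the ownership of the files is governed by the NFS export options rather than by the Windows bind mount.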

How does Docker work if the OS environment changes?

I am very new to Docker and have some very basic doubts. Suppose I am cloning a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of the application, and from that image I create a container. Now my container is created. Now suppose a teammate wants to deploy that Docker image on another system running Windows or macOS. What is the procedure? Do I need to write the Dockerfile again, or do I need to pull the app from GitHub again and follow the same steps I mentioned above? I ask because the commands in my Dockerfile are written according to my OS, but somebody wants to deploy the image on Windows/macOS.
And secondly, where is the image file located? I know it will not simply be sitting in my local file system. How can I see the files/folders of the Docker image?
I know this is a very simple question, but any help is highly appreciated. Thanks.
Suppose I am cloning a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of the application, and from that image I create a container. Now my container is created.
Just to be sure that this is clear: think of the "Docker image" as "a recipe" and a "Docker container" as "a cake". You can make as many cakes as you like with a given recipe. The recipe is what you share if you want to be able to re-bake the cakes.
Now suppose a teammate wants to deploy that Docker image on another system running Windows or macOS. What is the procedure? Do I need to write the Dockerfile again, or do I need to pull the app from GitHub again and follow the same steps I mentioned above?
And thus it's the "image" that you will "share" with other developers and not the container. This can be done either by "pushing" the image to an online repository (e.g. https://hub.docker.com/) or by recreating the image every time from a Dockerfile.
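For illustration, a minimal sketch of that push/pull flow (the image and account names are hypothetical):

docker tag myapp:latest myaccount/myapp:1.0     # name the local image for the registry
docker push myaccount/myapp:1.0                 # publish it, e.g. to Docker Hub
docker pull myaccount/myapp:1.0                 # on the teammate's Windows/macOS machine
docker run --rm myaccount/myapp:1.0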
I ask because the commands in my Dockerfile are written according to my OS, but somebody wants to deploy the image on Windows/macOS.
I would have to see what exactly you are doing, but it is good practice to make Docker images independent of the host, or at least to make them configurable during the creation of the image or the execution of the container.
To give a concrete example, in our company we have a private REST API written in PHP. Everything runs on Docker, whether in development or in production. Our production images can run on any OS; however, our dev image is built slightly differently depending on the OS. Why? Because we need to configure the debugger.
If the image is built on Linux, the PHP setting xdebug.remote_host needs to point to localhost; however, when using Docker for Mac, it needs to be docker.for.mac.localhost.
The Dockerfile looks partially like this:
FROM adsdaq/print-engine-fpm:7.3
ARG DOCKER_HOST_ADDR
ENV PHP_XDEBUG_REMOTE_HOST ${DOCKER_HOST_ADDR:-localhost}
COPY etc/dev/php/adsdaq.ini $PHP_INI_DIR/conf.d/
And in the adsdaq.ini we have
xdebug.remote_host = ${PHP_XDEBUG_REMOTE_HOST}
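The dev image can then be built with the host address passed as a build argument, for example (the image tag is hypothetical):

docker build -t print-engine-dev .                                                          # on Linux the default (localhost) is used
docker build -t print-engine-dev --build-arg DOCKER_HOST_ADDR=docker.for.mac.localhost .    # on Docker for Mac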
And to simplify the life of our devs, we have a Makefile which handles OS detection:
DOCKER_HOST ?= localhost
OPEN_BROWSER ?= open
UNAME_S := $(shell uname -s)
USERID=$(shell id -u)
GROUPID=$(shell id -g)
## Define variable depending on OS used, use xdg-open command
ifeq ($(UNAME_S),Linux)
OPEN_BROWSER = xdg-open
else ifeq ($(UNAME_S),Darwin)
ifneq (,$(wildcard /var/run/docker.sock))
DOCKER_HOST = docker.for.mac.localhost
endif
else
$(warning Your OS "$(UNAME_S)" is not supported and could not work as expected!)
endif
As shown here, the image will be built differently on Linux than on macOS for dev purposes, and that is fine since we don't need to push those images to any repository manager.
If you need to share the image on a repository, then I would make sure that the configuration can be changed dynamically through parameters and/or the use of an entrypoint script.
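A minimal sketch of such an entrypoint script, assuming the host-specific setting is written at container start from an environment variable (the file name and the extra ini file are hypothetical):

#!/bin/sh
# docker-entrypoint.sh: resolve host-specific config at runtime instead of build time
: "${PHP_XDEBUG_REMOTE_HOST:=localhost}"
echo "xdebug.remote_host = ${PHP_XDEBUG_REMOTE_HOST}" >> "$PHP_INI_DIR/conf.d/zz-xdebug.ini"
exec "$@"

The same image can then be run anywhere and configured per host, e.g. with docker run -e PHP_XDEBUG_REMOTE_HOST=docker.for.mac.localhost ... on a Mac.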
And secondly, where is the image file located? I know it will not simply be sitting in my local file system. How can I see the files/folders of the Docker image?
You cannot browse the files/folders of a Docker image directly. To see what's in the image you need to run a container, as it gives you an instance of it! Remember the cake/recipe analogy: you cannot see the content of the cake until you have baked it using the recipe.
You can, however, list all images stored on your machine by running docker images.
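For instance (the image name is hypothetical, and the image must contain a shell for the second command to work):

docker images                                  # list the images stored locally
docker run --rm -it myaccount/myapp:1.0 sh     # start a throwaway container and look around its filesystem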
Hope this helps figuring things out. Don't hesitate to share your Dockerfile if you need more assistance.
You don't need to change anything. Let's say your current host is Linux, on which you are running Docker, and you wrote a Dockerfile. Now let's say you move to another PC running Windows. If Docker is running on that Windows machine and you want to build/run or do anything else with your image, you can do that without changing anything.
For a more detailed/specific answer you will have to post the Dockerfile.

How to connect two containers when one of the containers is mounted to the filesystem?

Container 1: I have a Docker container spawned from a Dockerfile image that I built to serve Python dependencies for a specific use case. My project directory on the host file system is mounted to the /source directory in this container.
Container 2: Another container contains Jupyter Notebook and is configured only to run Jupyter Notebook.
I don't use Jupyter Notebook all the time for Python development; I use my code editor instead. I just want an easier way to bring up Jupyter Notebook when I want to and edit my project files.
Now, can I mount Container 2 to Container 1 such that the contents of the /source directory in Container 1 are persisted to my project directory, while Container 1 and Container 2 share that space? In my case I want Container 1 to be mounted from my file system to /source, and I want the Jupyter Notebook running in Container 2 to be able to edit files inside the /source directory of Container 1.
If I'm understanding correctly, what you are trying to do is use the libraries installed in one container from another container.
I think this approach is problematic, since there might be OS-level dependencies that are missing. The first possible solution is to install the dependencies in both containers, or at least to use the first Docker image as the base of the second.
If you would still rather do it your way, you can try mounting a volume that shares the first container's virtual env / site-packages with the second.
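A minimal sketch of the simpler variant, where both containers bind-mount the same host project directory so the Jupyter container edits exactly the files that Container 1 sees (the image names and port are illustrative; jupyter/base-notebook is just an example notebook image):

docker run -d --name deps -v "$(pwd)":/source my-python-deps:latest
docker run -d --name notebook -p 8888:8888 -v "$(pwd)":/source jupyter/base-notebook

Changes made through the notebook land directly in the host project directory, so they are persisted and immediately visible in /source of the first container.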

How do I install Nexus 3 in a Docker container with no internet connection?

I want to install Nexus 3 in a Docker container on CentOS. But my CentOS server with Docker installed on it has no access to the internet. I want to use this command:
docker pull sonatype/nexus3
Is there a standalone, offline file or group of files to give me what I need?
I have only Windows machines with no Docker installed that can access the internet.
You could try to set up your own Docker registry server on your Windows machine and then have your CentOS server talk to that server to get the files it needs. This seems like overkill though.
Here is the link to set that up: https://docs.docker.com/registry/deploying/
You could also use something like VirtualBox on the Windows machine to create a CentOS VM and set up Docker in there. This would give you CentOS + Docker + internet.
Yes, you can save the image to a file and then load it on the server:
Download the image to your workstation with docker pull sonatype/nexus3
Save the image to a tar file with docker save sonatype/nexus3 > nexus3.tar (see the Docker Save docs)
Transfer the image to the server via USB/LAN/etc
Import the image on the CentOS server with docker load --input nexus3.tar (see the Docker Load docs)
Docker Save
Produces a tarred repository to the standard output stream. Contains all parent layers, and all tags + versions, or specified repo:tag, for each argument provided.
Docker Load
Loads a tarred repository from a file or the standard input stream. Restores both images and tags.
You will now have the image loaded on your machine. There are probably other ways, but this is the simplest I can think of and involves no 3rd party tools. You can also gzip the file, per the documentation.
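Putting the steps together, a sketch of the full workflow, assuming Docker (e.g. Docker Desktop) can be installed on the internet-connected Windows machine (the gzip variant and the port mapping are illustrative):

docker pull sonatype/nexus3
docker save sonatype/nexus3 | gzip > nexus3.tar.gz       # on the workstation with internet access
# transfer nexus3.tar.gz to the CentOS server via USB/LAN/etc.
docker load --input nexus3.tar.gz                        # docker load accepts gzipped archives
docker run -d -p 8081:8081 --name nexus sonatype/nexus3  # on the offline CentOS server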

Creating a Docker container with runnable android tool

I have a Linux machine where I successfully built the Android Open Source Project (AOSP).
One of the products of this build is grxmlcompile (in /aosp/out/host/Linux-x86/bin), which is the tool I need.
I would like to create a Docker container that has a runnable version of grxmlcompile.
I'm a beginner Docker user, and I can't seem to make it work.
What is the right way to do it? Do I need to compile it in the container?
UPDATE:
I have created a container from the ubuntu:14.04 image and struggled to get the file into it. I finally used the advice here, but couldn't run the file. I'm not sure if it's a permission issue or if the file is simply not runnable.
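A minimal sketch of one way to approach this without rebuilding AOSP inside the container, assuming grxmlcompile is a self-contained host binary copied into the Docker build context (if ldd shows it depends on shared libraries from the AOSP build, those would have to be copied in as well):

FROM ubuntu:14.04
# copy the prebuilt binary from the build context into the image
COPY grxmlcompile /usr/local/bin/grxmlcompile
RUN chmod +x /usr/local/bin/grxmlcompile
ENTRYPOINT ["grxmlcompile"]

Building with docker build -t grxmlcompile . and running docker run --rm grxmlcompile <args> would then invoke the tool; the chmod rules out the execute-permission issue mentioned above.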
