Creating a local directory to edit code after pulling new code - linux

I have been stuck trying to figure out how to edit a Python Flask app after pulling its image from a Docker Hub repository on a different computer. I want to create a folder on my Linux desktop that contains all of the files the image has when running as a container (Dockerfile, requirements.txt, app.py). That way I can edit app.py regardless of which computer I am on, and if my classmates want to edit it they can simply pull my image, run the container, and have a copy of the code saved on their local machine to open in Visual Studio Code (or any other IDE) and edit. This is what I tried.
I first pulled from the Docker hub:
sudo docker pull woonx/dockertester1
Then I used this command to run the image as a container and create a directory:
sudo docker run --name=test1 -v ~/testfile:/var/lib/docker -p 4000:80 woonx/dockertester1
It created a local directory called testfile, but the folder was empty when I opened it: no app.py, no Dockerfile, nothing.
The example code I am using to test is from following the example guide on the Docker website: https://docs.docker.com/get-started/part2/
Dockerfile:
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
requirements.txt:
Flask
Redis
app.py:
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)

Here is what I do. First, I issue the docker run command:
sudo docker run --name=test1 -v ~/testfile:/var/lib/docker -p 4000:80 woonx/dockertester1
At this stage, the files are created inside the container. Then I stop the container (let's say the container ID is 0101010101):
docker container stop 0101010101
Then I simply copy those files from the container to the appropriate directory on my machine using:
docker cp <container_name>:/path/in/container /path/of/host
or
cd ~/testfile
docker cp <container_name>:/path/in/container .
Now you have the files created by docker run on your local host, and you can use them with the -v option:
sudo docker run --name=test1 -v ~/testfile:/var/lib/docker -p 4000:80 woonx/dockertester1
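Since the Dockerfile above sets WORKDIR /app and copies the code there, the application files for this particular image live in /app inside the container, so the concrete commands would look roughly like this (a sketch; adjust the container path if your image keeps the code elsewhere):
# copy the app files out of the stopped container into the local folder
sudo docker cp test1:/app/. ~/testfile
# remove the old container so the name test1 can be reused
sudo docker rm test1
# run again with the local copy mounted over /app
sudo docker run --name=test1 -v ~/testfile:/app -p 4000:80 woonx/dockertester1
With that bind mount in place, edits made to ~/testfile/app.py on the host are what the container sees the next time it is restarted.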
Normally, when you change a setting in your configuration, stopping and starting the container should be enough for the change to take effect.
I hope this approach solves your problem.
Regards

Related

Permission denied while running a script file (.sh) with a docker file

Here is what I'm trying to do. I created a Dockerfile using Ubuntu as the base image:
ARG VERSION
FROM ubuntu:18.04
COPY configs/base.properties /root/base.properties
This properties file contains some configs, and the Dockerfile just copies it into the container (let's assume I tag this image as configimage:1.0).
Then I created a second Dockerfile which uses the above image as its base (let's assume I tag this one as midbase:1.0):
FROM configimage:1.0
COPY resource/bootstrap.sh .
RUN ["chmod", "a+x", "bootstrap.sh"]
CMD ["ls"]
All the script does is copy the configs from the previous image to a separate location in the container:
#!/bin/bash
mkdir -p ~/configs
cp --archive ~/root/base.properties ~/configs/base.properties
echo "Configs copied"
I added the ls command to check whether the folder gets created in the container, but I noticed that the configs folder is not created in the midbase:1.0 container.
Do you have any tips to solve this issue?
Try the following:
RUN ["sh", "bootstrap.sh"]

How to use python file path inside docker file and write python program's logs to /var/log file on ubuntu?

I am new to Docker. This is my Dockerfile's content:
WORKDIR /home/uadmin/queuemgr/mongo_db_cache
COPY . /home/uadmin/queuemgr/mongo_db_cache
#CMD ["pwd"]
#CMD ["ls"]
ENTRYPOINT ["/usr/bin/python3"]
CMD ["/home/uadmin/queuemgr/mongo_db_cache/publisher.py"]
The command used to build the Docker image (keeping the path from where I run it):
uadmin@br0ubmsmqtest:~/msw/queuemgr/mongo_db_cache/docker_inputs$ sudo docker build -t queuepublisher:latest . -f publisher_docker_file
The command used to run the Docker container (keeping the path from where I run it):
uadmin@br0ubmsmqtest:~/msw/queuemgr/mongo_db_cache/docker_inputs$ sudo docker container run -v /var/log:/log --rm --name publisher queuepublisher
Even though CMD ["pwd"] prints "/home/uadmin/msw/queuemgr/mongo_db_cache" as the current directory, CMD ["ls"] lists files from /msw/queuemgr/mongo_db_cache/docker_inputs instead of files from /home/uadmin/queuemgr/mongo_db_cache. Because of this, the last line CMD ["/home/uadmin/queuemgr/mongo_db_cache/publisher.py"] results in this error:
/usr/bin/python3: can't open file '/home/uadmin/queuemgr/mongo_db_cache/publisher.py': [Errno 2] No such file or directory
I am creating a log file inside the /var/log/ folder from publisher.py using the Python logging module. Here is the code snippet from publisher.py:
import logging
logging.basicConfig(filename='/var/log/example.log', level=logging.DEBUG)
logging.info('This is publisher.py')
When I run it with Docker, it should write the log messages from publisher.py to the /var/log/ folder; that is why I am passing "-v /var/log:/log" when running the container.
But it is not creating the log file under the /var/log folder at all when run in the container.
I learned that we can pass multiple volumes. The command below did the magic for me:
sudo docker container run -v /home/uadmin/msw/queuemgr:/queuemgr -v /var/log:/var/log --rm --name publisher queuepublisher
I could run the container with publisher.py successfully, and logging to the /var/log folder from publisher.py works as well!
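For context, -v maps host_path:container_path. The original -v /var/log:/log mounted the host's /var/log at /log inside the container, while publisher.py writes to /var/log inside the container, which is why nothing appeared on the host. Mapping the same path on both sides, as in the working command above, makes the container's writes land in the host's /var/log; a quick check on the host (file name taken from the logging.basicConfig call above):
# after the container has run, the log written inside it is visible on the host
tail -n 5 /var/log/example.log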

How to read a file from a cloud storage bucket via a python app in a local Docker container

Let me preface this with the fact that I am fairly new to Docker, Jenkins, GCP/Cloud Storage and Python.
Basically, I would like to write a Python app, that runs locally in a Docker container (alpine3.7 image) and reads chunks, line by line, from a very large text file that is dropped into a GCP cloud storage bucket. Each line should just be output to the console for now.
I learn best by looking at working code, but I am spinning my wheels trying to put all the pieces together with these technologies, which are new to me.
I already have the key file for that cloud storage bucket on my local machine.
I am also aware of these posts:
How to Read .json file in python code from google cloud storage bucket.
Lazy Method for Reading Big File in Python?
I just need some help putting all these pieces together into a working app.
I understand that I need to set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the key file in the container. However, I don't know how to do that in a way that works well for multiple developers and multiple environments (Local, Dev, Stage and Prod).
This is just a simple quickstart (I am sure it can be done better) to read a file from a Google Cloud Storage bucket via a Python app running in a Docker container deployed to Google Cloud Run:
You can find more information here link
Create a directory with the following files:
a. app.py
import os
from flask import Flask
from google.cloud import storage

app = Flask(__name__)

@app.route('/')
def hello_world():
    storage_client = storage.Client()
    file_data = 'file_data'
    bucket_name = 'bucket'
    temp_file_name = 'temp_file_name'
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.get_blob(file_data)
    blob.download_to_filename(temp_file_name)
    temp_str = ''
    with open(temp_file_name, "r") as myfile:
        temp_str = myfile.read().replace('\n', '')
    return temp_str

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
b. Dockerfile
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
RUN pip install google-cloud-storage
# Make port 80 available to the world outside the container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
c. requirements.txt
Flask==1.1.1
gunicorn==19.9.0
google-cloud-storage==1.19.1
Create a service account to access the storage from Cloud Run:
gcloud iam service-accounts create cloudrun --description 'cloudrun'
Set the permission of the service account:
gcloud projects add-iam-policy-binding wave25-vladoi --member serviceAccount:cloud-run@project.iam.gserviceaccount.com --role roles/storage.admin
Build the container image:
gcloud builds submit --tag gcr.io/project/hello
Deploy the application to Cloud Run:
gcloud run deploy --image gcr.io/project/hello --platform managed --service-account cloud-run@project.iam.gserviceaccount.com
EDIT:
One way to develop locally is:
Your DevOps team will get the service account key.json:
gcloud iam service-accounts keys create ~/key.json --iam-account cloudrun@project.iam.gserviceaccount.com
Store the key.json file in the same working directory
The Dockerfile command COPY . /app will copy the file into the Docker container.
Change app.py to create the client from the key file:
storage.Client.from_service_account_json('key.json')
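Alternatively, for local development you can keep the key out of the image entirely and point the client library at it through the GOOGLE_APPLICATION_CREDENTIALS environment variable mentioned in the question; storage.Client() then picks the credentials up automatically. A sketch (the local key path and mount point are illustrative):
# mount the key read-only and tell the client library where to find it
docker run --rm -p 8080:8080 \
  -v "$PWD/key.json:/app/key.json:ro" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/app/key.json \
  gcr.io/project/hello
Because each developer or environment supplies its own key this way, the same image can be used unchanged across Local, Dev, Stage and Prod.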

Subprocess can't find file when executed from a Python file in Docker container

I have created a simple Flask app which I am trying to deploy to Docker.
The basic user interface will load on localhost, but when I execute a command which calls a specific function, it keeps showing:
"Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application."
Looking at the Docker logs I can see the problem is that the file cannot be found by the subprocess.Popen command:
"FileNotFoundError: [Errno 2] No such file or directory: 'test_2.bat': 'test_2.bat'
172.17.0.1 - - [31/Oct/2019 17:01:55] "POST /login HTTP/1.1" 500"
The file certainly exists in the Docker environment, within the container I can see it listed in the root directory.
I have also tried changing:
item = subprocess.Popen(["test_2.bat", i], shell=False,stdout=subprocess.PIPE)
to:
item = subprocess.Popen(["./test_2.bat", i], shell=False,stdout=subprocess.PIPE)
which generated the alternative error:
"OSError: [Errno 8] Exec format error: './test_2.bat'
172.17.0.1 - - [31/Oct/2019 16:58:54] "POST /login HTTP/1.1" 500"
I have added a shebang to the top of both .py files involved in the Flask app (although I may have done this wrong):
#!/usr/bin/env python3
and this is the Dockerfile:
FROM python:3.6
RUN adduser lighthouse
WORKDIR /home/lighthouse
COPY requirements.txt requirements.txt
# RUN python -m venv venv
RUN pip install -r requirements.txt
RUN pip install gunicorn
COPY templates templates
COPY json_logs_nl json_logs_nl
COPY app.py full_script_manual_with_list.py schema_all.json ./
COPY bq_load_indv_jsons_v3.bat test_2.bat ./
RUN chmod 644 app.py
RUN pip install flask
ENV FLASK_APP app.py
RUN chown -R lighthouse:lighthouse ./
USER lighthouse
# EXPOSE 5000
CMD ["flask", "run", "--host=0.0.0.0"]
I am using Ubuntu on WSL2 to run Docker on a Windows machine, without VirtualBox. I have no trouble navigating my Windows file system or building Docker images, so I think this configuration is not the problem, but just in case.
If anyone has any ideas to help subprocess locate test_2.bat I would be very grateful!
Edit: the app works exactly as expected when executed locally via the command line with "flask run"
If anyone is facing a similar problem, the solution was to put the command directly into the Python script rather than calling it in a separate file. It is split into separate strings to allow the "url" variable to be iteratively updated, as this all occurs within a for loop:
url = str(i)
var_command = "lighthouse " + url + " --quiet --chrome-flags=\" --headless\" --output=json output-path=/home/lighthouse/result.json"
item = subprocess.Popen([var_command], stdout=subprocess.PIPE, shell=True)
item.communicate()
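For background on the original errors: "Exec format error" for ./test_2.bat means the kernel did not know how to execute the file, because a .bat script copied into a Linux image has neither a recognized binary format nor a shebang line, and the bare "test_2.bat" form additionally relies on the file being found on PATH. If you wanted to keep the separate script file rather than inlining the command, something along these lines should also work (a sketch, assuming the file contains ordinary shell commands):
# either invoke the script through a shell explicitly ...
sh ./test_2.bat
# ... or give it a shebang (#!/bin/sh as its first line) plus the execute bit
chmod +x ./test_2.bat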
As a side note, if you would like to run Lighthouse within a container you need to install it just as you would to run it on the command line, in a Node container. This container can then communicate with my Python container if both deployed in the same pod via Kubernetes and share a namespace. Here is a Lighthouse container Dockerfile I've used: https://github.com/GoogleChromeLabs/lighthousebot/blob/master/builder/Dockerfile

rsync files from inside a docker container?

We are using Docker for the build/deploy of a NodeJS app. We have a test container that is built by Jenkins, and executes our unit tests. The Dockerfile looks like this:
FROM node:boron
# <snip> some misc unimportant config here
# Run the tests
ENTRYPOINT npm test
I would like to modify this step so that we run npm run test:cov, which runs the unit tests + generates a coverage report HTML file. I've modified the Dockerfile to say:
# Run the tests + generate coverage
ENTRYPOINT npm run test:cov
... which works. Yay!
...But now I'm unsure how to rsync the coverage report (generated by the above command inside the Dockerfile) to a remote server.
In Jenkins, the above config is invoked this way:
docker run -t test --rm
which, again, runs the above test and exits the container.
How can I add some additional steps after the entrypoint command executes, to (for example) rsync some results out to a remote server?
I am not a "node" expert, so bear with me on the details.
First of all, you may consider if you need a separate Dockerfile for running the tests. Ideally, you'd want your image to be built, then tested, without modifying the actual image.
Building a test-image that uses your NodeJS app as a base image (FROM my-nodejs-image) could do the trick, but may not be needed if all you have to do is run a different command / entrypoint on the image.
Secondly; stateful data (the coverage report falls into that category) should not be stored inside the container (i.e., not stored on the container's filesystem). You want your containers to be ephemeral, and anything that should live beyond the container's lifecycle (anything that should be preserved after the container itself is gone), should be stored outside of the container; either in a "volume", or in a bind-mounted directory.
Let me start with the "separate Dockerfile" point. Let's say, your NodeJS application Dockerfile looks like this;
FROM node:boron
COPY package.json /usr/src/app/
RUN npm install && npm cache clean
COPY . /usr/src/app
CMD [ "npm", "start" ]
You build your image, and tag it, for example, with the commit it was built from;
docker build -t myapp:$GIT_COMMIT .
Once the image is built successfully, you want to test it. Probably a quick test to verify it actually "runs". There are many ways to do that, perhaps something like;
docker run \
-d \
--rm \
--network=test-network \
--name test-{$GIT_COMMIT} \
myapp:$GIT_COMMIT
And a container to test it actually does something;
docker run --rm --network=test-network my-test-image curl test-{$GIT_COMMIT}
Once tested (and the temporary container removed), you can run your coverage tests, however, instead of writing the coverage report inside the container, write it to a volume or bind-mount. You can override the command to run in the container with docker run;
mkdir -p /coverage-reports/{$GIT_COMMIT}
docker run \
--rm \
--name test-{$GIT_COMMIT}\
-v /coverage-reports/{$GIT_COMMIT}:/usr/src/app/coverage \
myapp:$GIT_COMMIT npm run test:cov
The commands above;
Creates a unique local directory to store the test artifacts (coverage report)
Runs the image you built (and tagged myapp:$GIT_COMMIT)
Bind-mounts the /coverage-reports/{$GIT_COMMIT} directory into the container at /usr/src/app/coverage
Runs the coverage tests (which will write to /usr/src/app/coverage if I'm not mistaken - again, not a Node expert)
Removes the container once it exits
After the container exits, the coverage report is stored in /coverage-reports/{$GIT_COMMIT} on the host. You can use your regular tools to rsync those where you want.
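For that last step, a plain rsync of the per-commit directory from the build host is enough; a sketch (the destination host and path are illustrative):
# push the coverage report for this build to the report server
rsync -az /coverage-reports/{$GIT_COMMIT}/ reports@reports.example.com:/var/www/coverage/{$GIT_COMMIT}/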
As an alternative, you can use a volume plugin to write the results to (e.g.) an s3 bucket, which saves you from having to rsync the results.
Once tests are successful, you can docker tag the image to bump your application's version (e.g. docker tag myapp:$GIT_COMMIT myapp:1.0.12345), docker push to your registry, and deploy the new version.
Make a script to execute as the entrypoint and put the commands in the script. You pass in args when calling docker run and they get passed to the script.
The docs have an example of the postgres image's script. You can build off that.
Docker Entrypoint Docs
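For this particular use case, a minimal entrypoint script along those lines might look like the following (a sketch; the rsync destination is illustrative and not part of the original answer):
#!/bin/sh
# entrypoint.sh: run the coverage tests, then ship the report off the container
set -e
npm run test:cov
rsync -az ./coverage/ reports@reports.example.com:/var/www/coverage/
The Dockerfile would then use ENTRYPOINT ["./entrypoint.sh"] instead of ENTRYPOINT npm run test:cov, and the image would need rsync (and credentials for the destination) available, which is one reason the bind-mount approach described above is often simpler.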
