$(pwd) - one level up - linux

I'm pretty new to the bash/shell scripting world. I'm trying to do the below and it could be pretty simple, but I wasn't able to figure out the command. It would be great if someone could help me out here and also point me to some documentation on shell scripting topics. Thank you in advance.
My build.sh and Dockerfile reside under a folder called .settings, and this folder lives directly under the app root. Now, inside my build.sh and Dockerfile, when I refer to something like $(pwd) or COPY . /apps/ it might not work, since my build.sh and Dockerfile do not live directly under the app root.
What command can I use in this scenario inside the files I referenced above? Hope I made it clear. Once again, this could be very simple, but since I'm a newbie in this arena I find it a little difficult.
Inside build.sh, the reference to $(pwd) looks like this:
docker run \
-u root \
--rm \
-v $(pwd):/app \
<MYIMAGE NAME FROM LOCAL REPO>
Here $(pwd) references the application root, but if I move build.sh inside a folder called .settings then the $(pwd) context would change, and I still want it to refer to the root.

The last argument to docker build, often something like the . in docker build ., is the build context in docker. This directory is sent to the server where the build runs, and all COPY and ADD commands are performed using this context. These commands do not run on the client, and docker is a client/server application, so anything not in that context simply doesn't exist for the purpose of building an image.
So in the above example, docker build ., the current directory is the build context, and if that's run while you're inside the .settings directory, only those files are part of the build context. Therefore your build.sh script needs to pass a different directory, and also reference where the Dockerfile is inside of that build context. That would look like:
docker build -f .settings/Dockerfile ..
When you do this, all of the COPY and ADD commands will now be relative to the parent directory, so you may need to adjust your Dockerfile to compensate.
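For example, assuming the app root contains a src/ directory (just an illustration) and the image keeps the /apps/ target from your question, the COPY lines are now written against the app root:
# Build context is the app root, so paths are relative to it
COPY . /apps/
# or copy only a specific subdirectory (src/ is a hypothetical name)
COPY src/ /apps/src/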
For your $(pwd) reference, you can either cd .. before running your docker run command or update the command to look like:
docker run \
-u root \
--rm \
-v $(pwd)/..:/app \
<your image>
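If you'd rather not depend on the directory the script is called from at all, another option is to resolve the app root from the script's own location. A minimal sketch, assuming build.sh stays in .settings/ and using myimage as a placeholder tag:
#!/bin/sh
# Resolve the app root relative to this script (which lives in .settings/),
# so the script works no matter where it is invoked from.
APP_ROOT="$(cd "$(dirname "$0")/.." && pwd)"

# Build with the app root as the context and the Dockerfile inside .settings/
docker build -f "$APP_ROOT/.settings/Dockerfile" -t myimage "$APP_ROOT"

docker run \
-u root \
--rm \
-v "$APP_ROOT":/app \
myimage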

Related

create an app in a docker container (confused about tasks order)

I have to build a simple app which reads a text file and processes its content (like removing multiple spaces, processing words, etc.), but I am confused about the first part of my homework.
"Initialize a git repository in a docker container then implement an app...."
I use Debian, I installed Docker and Git, and I studied them. From what I read, I have to create a Dockerfile which will contain some instructions, then I build the image and then run the container, right?
But I am still confused: what is the order of these things? Can I first write the app in IntelliJ and then create that Dockerfile? Or do I have to create the container first and then code the app? But how do I build the container? I read a lot about this, can you give me some advice? I mention that after every app "task" (read text file, process text, etc.) I have to execute git add, git commit and git push (if that helps for the answer).
If the instruction says to "Initialize a Git repository in a docker container" then you are expected to:
run e.g. a Debian container
if Git is not present install it
initialize the repo
write your app
submit homework
You could:
docker run \
--interactive --tty --rm \
--name=homework \
--volume=${PWD}/homework:/homework \
--workdir=/homework \
debian:buster-slim
This will run a Debian "buster" image as a container and should (!) give you a shell prompt in the container.
A directory /homework in the container will be mapped to your host machine's ${PWD}/homework and you will be in the /homework directory when the container starts. This means that you won't lose your work if you exit the container.
From within the container's prompt:
# pwd
/homework
# git
bash: git: command not found
# apt update && apt install -y git
...
done.
# git
usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
<command> [<args>]
# git init
Initialized empty Git repository in /homework/.git/
Notes
If you exit the container, you can rerun the docker run ... command to return to it.
When you exit the container, you can ls -la ${PWD}/homework to see the .git directory (at least) persisted on your host.
Ensure you run it from the same directory where it created ${PWD}/homework. Or revise the --volume=...
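Since your homework also asks for a git add, git commit and git push after every task, that part (run inside the container, in /homework) could look like the sketch below; the commit message, remote and branch names are placeholders, and pushing assumes you have configured a remote and your Git identity:
git add .
git commit -m "Process the text file: remove multiple spaces"
git push origin main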
I'd recommend an overall workflow of
Build the application, without Docker; then
Package it in a Docker image (if it makes sense to).
You should be able to build the application totally normally. Whatever language you're using to build the application, make sure to use its normal packaging tools. For example, your package.json/Gemfile/requirements.txt/go.mod should list out all of the library dependencies your application needs to run. Run it locally, write appropriate unit tests for it, and generally build something that works.
Once it works, then push it into Docker. You'll need to write a Dockerfile that builds the image. A generic recipe for this is
FROM language-base-image # python:3.9, node:14, ...
WORKDIR /app
COPY dependencies-file . # requirements.txt, package.json, ...
RUN install the dependencies # pip install, npm install, ...
COPY . .
RUN build the application # npm run build, ...
CMD ./the_application # npm run start, ...
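Filled in for a hypothetical Node.js project (the file and script names are assumptions, not taken from your homework), that recipe might look like:
FROM node:14
WORKDIR /app
# install dependencies first so this layer is cached when only source changes
COPY package.json package-lock.json ./
RUN npm install
# copy the rest of the application source
COPY . .
# build step, if your project has one
RUN npm run build
CMD ["npm", "start"]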
You should then be able to docker build an image, and docker run a container from the resulting image. The Docker documentation includes a sample application that runs through this sequence.
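For example, with my-image as a placeholder tag:
docker build -t my-image .
docker run --rm my-image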
Note in particular that the problem task of "read a text file" is substantially harder in Docker than without. You need to use a bind mount to give access to the host filesystem to the container, and then refer to the container-side path. For example,
docker run --rm -v $PWD/data:/data my-image \
./the_application --input /data/file.txt
I would not bother trying to use Docker as my primary development environment, especially for an introductory project. Docker is designed as an isolation system, and it's intentionally tricky to work with host files from a container, and vice versa. Especially if you can use a fairly routine programming language that you can easily install with apt-get or brew, and you don't have tricky host-library dependencies, it's substantially easier to do most of your development in an ordinary host build environment and use Docker only at a late stage.

How to run a dockerfile?

Found a Dockerfile that I want to build into an image and run:
https://gist.github.com/matsuu/d5b4e83b3d591441f01b7be2ede774e2
Stored it in a new folder as centos-redhat-8-beta.dockerfile on my computer and tried:
docker build -t centos-redhat-8-beta .
unable to prepare context: unable to evaluate symlinks in Dockerfile path:
lstat /Users/dnk306/docker/centos-redhat-8-beta/Dockerfile: no such file or directory
What is the exact command that I need to run?
Dockerfile is not an extension; by default the file should be called Dockerfile for the build command to pick it up.
If you want to use a different name, though, the -f (--file) option lets you achieve this.
docker build -t centos-redhat-8-beta -f centos-redhat-8-beta.dockerfile .
From the documentation:
By default the docker build command will look for a Dockerfile at the root of the build context. The -f, --file, option lets you specify the path to an alternative file to use instead. This is useful in cases where the same set of files are used for multiple builds. The path must be to a file within the build context. If a relative path is specified then it is interpreted as relative to the root of the context.
Source: https://docs.docker.com/engine/reference/commandline/build/#text-files
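Alternatively, you can rename the file to the default name and drop the -f option entirely (run from the directory that contains the file):
mv centos-redhat-8-beta.dockerfile Dockerfile
docker build -t centos-redhat-8-beta .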

Docker - accessing files inside container from host

I am new to docker.
I ran a node-10 image, and inside the running container I cloned a repository and ran the app, which started a server with a file watcher. I need to access the codebase inside the container and open it up in an IDE running on the Windows host. Once that is done, I also want my changes to the files in the IDE to trigger the file watcher in the container.
Any help is appreciated. Thanks.
The concept you are looking for is called volumes. You need to start a container and mount a host directory inside it. For the container, it will be a regular folder, and it will create files in it. For you, it will also be a regular folder. Changes made by either side will be visible to the other.
docker run -v /a/local/dir:/a/dir/in/your/container <your image>
Note though that you can run into permission issues that you will need to figure out separately.
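On Linux, one common way around that (a sketch, not specific to your setup) is to run the container with your own user and group IDs so files created in the mounted directory are owned by you:
# run as the calling user so files created under the mount are owned by you
docker run -u "$(id -u):$(id -g)" -v /a/local/dir:/a/dir/in/your/container node:10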
It depends on what you want to do with the files.
There is the docker cp command that you can use to copy files to/from a container.
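For example, a one-off copy with docker cp could look like this (the container name and paths are placeholders):
# container -> host: copy the cloned codebase out of the running container
docker cp my-node-container:/usr/src/app C:\src\app-copy
# host -> container: push an edited file back in
docker cp C:\src\app-copy\index.js my-node-container:/usr/src/app/index.js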
However, it sounds to me like you are using docker for development, so you should mount a volume instead, that is, you mount a directory on the host as a volume in docker, so anything written to that directory will show up in the container, and vice versa.
For instance if you have the code base that you develop against in C:\src on your windows machine, then you run docker like docker run -v c:\src:/app where /app is the location that node is looking in. However, for Windows there are a few things to consider since Docker is not native in Windows, so have a look at the documentation first.
I think you should mount a volume for the source code and edit your code from your IDE normally:
docker run -it -v "$PWD":/app -w /app -u node node:10 yarn dev
Here docker will run a container from the node:10 image, set the working dir to "/app", mount the current dir to "/app", and run "yarn dev" at startup as the "node" user (a non-root user).
Hope this is helpful.

docker image directory does not exist during build

I'm building a simple image from a Dockerfile: (note, pm3 is the name of the folder this Dockerfile lives in)
FROM continuumio/miniconda3
MAINTAINER Jordan Miller
ENV PORT=5000
COPY . /opt/repos/
WORKDIR /opt/repos/pm3/
RUN ls -la
RUN python /opt/repos/pm3/lib/acquire_requirements.py
EXPOSE $PORT
ENTRYPOINT ["python","/opt/repos/pm3/src/web/api.py"]
I use docker build -f Dockerfile -t jm/pm3 . to build it. Now I thought this was working great last week, but I made some changes and it broke, so I ran docker system prune to clean everything out. But that didn't fix it, so I think something is wrong with the code.
At any rate, here's the error I get:
Step 7/9 : RUN python /opt/repos/pm3/lib/acquire_requirements.py
---> Running in f842a282a6a0
Invalid requirement: '/opt/repos/pm3/lib/acquire_requirements.py'
File '/opt/repos/pm3/lib/acquire_requirements.py' does not exist.
But it really is there: on my Windows machine there's a lib folder in the pm3 folder, and there is an acquire_requirements.py in the lib folder. Should I not include the entire path to it on the Linux box, or something?
I included that RUN ls -la line after it gave me the error because I wanted to see if it copied the folder over correctly, but the output didn't show that anything had been copied; it showed an empty directory. So I don't really understand what's going on. If the working directory really is /opt/repos/pm3, then shouldn't I see src when I run ls?
I'm hoping there's something obvious about Linux or Docker that I'm missing here. Any ideas?
So I discovered, by building an image without the RUN python /opt/repos/pm3/lib/acquire_requirements.py line, that COPY . /opt/repos/ doesn't create a folder called pm3 under /opt/repos/ matching the directory the Dockerfile lives in.
So I had to add that in manually to the command:
COPY . /opt/repos/ -> COPY . /opt/repos/pm3/
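An equivalent fix (just a sketch) is to set WORKDIR first and let the COPY destination be relative to it, since a relative destination is resolved against the current WORKDIR:
FROM continuumio/miniconda3
ENV PORT=5000
WORKDIR /opt/repos/pm3/
# "." on the right-hand side is relative to the WORKDIR set above
COPY . .
RUN python lib/acquire_requirements.py
EXPOSE $PORT
ENTRYPOINT ["python","src/web/api.py"]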

'docker build' gives error that 'docker run' doesn't. How are they different?

My project is set up like this:
./
    Dockerfile
    package.json
    build/
        (compiled files from the frontend and backend directories get put here)
    backend/
        app.js
    frontend/
        (frontend files...)
    scripts/
        startServer.sh
        build.sh
startServer.sh:
docker build ../ --tag myImage
# The build script compiles all my assets and places
# them in the top level 'build' directory which i am
# trying to link to my docker image so I can recompile
# on each file change and have the changes show in the docker image.
./build.sh
docker run --volume /path/to/build/dir:/src/app myImage
Dockerfile:
FROM node:4.4.7
RUN ls src/app
The RUN command in the Dockerfile gives me this error when the build command from the startServer script is called:
ls: cannot access src/app: No such file or directory
If I change RUN to CMD it gives no error. Also, even after the build gives that error, it finishes the build and the docker run command gives no error.
Is the 'docker build' command actually trying to add the 'build' folder to the image from which containers are launched? Or is it just compiling some commands for the images to use when they are made?
If it is the latter, how do you make one Dockerfile that is used for both building and running and works in both cases?
I feel like I might be missing a crucial concept with Docker, but I've gone through the tutorials and docs and couldn't solve this.
There is no src/app folder in the node image, so this is an expected error. The node image expects you to add your own /usr/src/app, either with a COPY step in your build, or with a volume mapping after the build is finished.
The RUN gives a step to run to add a layer to the resulting built image, so an ls makes little sense there since you didn't modify the image with new content.
The CMD gives a default command to run if one is not passed at the end of the docker run, so if you do a docker run node /bin/bash, the ls src/app CMD will never be run. This also runs after other steps in your build, and after any volume mounts you may be running on your container, which would create this folder.
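If you do want the files baked into the image at build time, a sketch of the COPY approach (using the build/ directory from your project layout and the /usr/src/app path the node image expects) would be:
FROM node:4.4.7
WORKDIR /usr/src/app
# copy the compiled output into the image at build time
COPY build/ ./
RUN ls /usr/src/app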
When you run the Docker image, you mount the data volume to /src/app, but in the Dockerfile you tried to access src/app (a relative path). Because the default working directory is not the root, you cannot access the src directory that way.
So, edit your Dockerfile to:
FROM node:4.4.7
RUN ls /src/app
