Two Git Repos, One Docker Image - node.js

I am still relatively new to Docker. I have two Git repos: one is a Next.js application and the other a Node.js app. I need to create a Docker container, but during the build I need to build the Next.js code and move the build folder into the Node app before creating the image. I am not sure if this is possible.
I am also not sure if this is the best route to take. The end goal is to push the Docker containers to AWS ECS.
For background: Next.js is a server-rendered React framework, so in QA and PROD the Node app serves the content.

By issuing RUN directives, the executed commands are committed as new layers on top of the current image. The concept behind Docker is to keep your convergence steps under source control so that containers can be created from any point in an image's history. In this scenario, a set of RUN instructions will commit each step as an individual layer.
# assumes the base image provides git and node
WORKDIR /project
RUN git clone https://github.com/foo/next-js.git
RUN git clone https://github.com/baz/nodejs.git
WORKDIR /project/next-js
RUN npm install
RUN npm run build
RUN cp -r ./build ../nodejs/
You could instead incorporate the above into a bash script and run it in a single step, sidestepping Docker's per-instruction layering.
COPY ./setup.sh /
RUN chmod u+x /setup.sh
RUN /setup.sh
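For illustration, setup.sh might look something like the sketch below; the repo URLs are the same hypothetical ones as above, and the script assumes git and npm are available in the base image:
#!/bin/bash
set -e
# Clone both repos, build the Next.js app, and copy the build output into the Node app
mkdir -p /project && cd /project
git clone https://github.com/foo/next-js.git
git clone https://github.com/baz/nodejs.git
cd next-js
npm install
npm run build
cp -r ./build ../nodejs/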
However, doing so would defeat the purpose of using Docker to begin with, unless for some reason you need to ensure those steps are committed to the image as a single layer.

Related

GitHub Actions: How to use Docker image from GitHub registry in a Docker action?

I recently created my first Docker container action, and it works perfectly as intended. The Dockerfile for this action uses python:3.9-slim. It looks like this:
FROM python:3.9-slim
COPY repo ./repo
COPY scripts ./scripts
COPY requirements.txt setup.py ./
COPY entrypoint.sh /entrypoint.sh
RUN python setup.py develop
ENTRYPOINT ["/entrypoint.sh"]
I'm now thinking of publishing the Docker image from the above Dockerfile on Docker/GitHub registry and using it in the Docker action. Essentially the idea is to simplify the image build process, so the action can run faster.
I have a couple of questions related to doing this; #1 is the question in the title:
I found this page which explains how to set up a workflow to publish the Docker image. If I publish to the GitHub registry, how can I go about using it? For images on Docker Hub, that seems straightforward.
With this Docker container action, I want to make sure that it uses the version specified in uses: username/action-repo#v???. Thus, I think it would make sense to have the FROM in this new Dockerfile configured for the specific image tag that will be used. Is that the best way to go about it?
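A rough sketch of what this could look like, with a hypothetical ghcr.io path and tag: images published to the GitHub Container Registry are pulled by their ghcr.io name, and private images need a login first:
docker login ghcr.io -u USERNAME
The new Dockerfile would then pin its FROM line to the published image at a specific tag:
# Pull the pre-built image from GitHub Container Registry at a pinned tag
FROM ghcr.io/username/action-repo:v1.0.0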

create an app in a docker container (confused about tasks order)

I have to build a simple app which reads a text file and processes its content (like removing multiple spaces, processing words, etc.), but I am confused about the first part of my homework.
"Initialize a git repository in a docker container then implement an app...."
I use Debian, I have installed Docker and Git, and I have studied them. From what I read, I have to create a Dockerfile which will contain some instructions, then I build the image and then run the container, right?
But I am still confused about the order of these things. Can I first write the app in IntelliJ and then create that Dockerfile? Or do I have to create the container first and then code the app? And how do I build the container? I have read a lot about this; can you give me some advice? I should mention that after every app "task" (read text file, process text, etc.) I have to execute git add, git commit and git push (if that helps with the answer).
If the instruction says to "Initialize a Git repository in a docker container" then you are expected to:
run e.g. a Debian container
if Git is not present install it
initialize the repo
write your app
submit homework
You could:
docker run \
--interactive --tty --rm \
--name=homework \
--volume=${PWD}/homework:/homework \
--workdir=/homework \
debian:buster-slim
This will run a Debian "buster" image as a container and should (!) give you a shell prompt in the container.
A directory /homework in the container will be mapped to your host machine's ${PWD}/homework and you will be in the /homework directory when the container starts. This means that you won't lose your work if you exit the container.
From within the container's prompt:
# pwd
/homework
# git
bash: git: command not found
# apt update && apt install -y git
...
done.
# git
usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
<command> [<args>]
# git init
Initialized empty Git repository in /homework/.git/
Notes
If you exit the container, you can rerun the docker run ... command to return to it.
When you exit the container, you can ls -la ${PWD}/homework to see the .git directory (at least) persisted on your host.
Ensure you rerun the command from the same directory where ${PWD}/homework was created, or revise the --volume=... flag accordingly.
I'd recommend an overall workflow of
Build the application, without Docker; then
Package it in a Docker image (if it makes sense to).
You should be able to build the application totally normally. Whatever language you're using to build the application, make sure to use its normal packaging tools. For example, your package.json/Gemfile/requirements.txt/go.mod should list out all of the library dependencies your application needs to run. Run it locally, write appropriate unit tests for it, and generally build something that works.
Once it works, then push it into Docker. You'll need to write a Dockerfile that builds the image. A generic recipe for this is
FROM language-base-image # python:3.9, node:14, ...
WORKDIR /app
COPY dependencies-file . # requirements.txt, package.json, ...
RUN install the dependencies # pip install, npm install, ...
COPY . .
RUN build the application # npm run build, ...
CMD ./the_application # npm run start, ...
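As one concrete (and entirely hypothetical) instance of that recipe for a Node.js app, assuming package.json defines build and start scripts:
FROM node:14
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY package.json package-lock.json ./
RUN npm ci
# Copy the rest of the source and build it
COPY . .
RUN npm run build
CMD ["npm", "run", "start"]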
You should then be able to docker build an image, and docker run a container from the resulting image. The Docker documentation includes a sample application that runs through this sequence.
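With a hypothetical image name, that sequence is roughly:
# Build an image from the Dockerfile in the current directory
docker build -t my-image .
# Run a container from the resulting image
docker run --rm my-image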
Note in particular that the problem task of "read a text file" is substantially harder in Docker than without. You need to use a bind mount to give access to the host filesystem to the container, and then refer to the container-side path. For example,
docker run --rm -v $PWD/data:/data my-image \
./the_application --input /data/file.txt
I would not bother trying to use Docker as my primary development environment, especially for an introductory project. Docker is designed as an isolation system, and it's intentionally tricky to work with host files from a container, and vice versa. Especially if you can use a fairly routine programming language that you can easily install with apt-get or brew, and you don't have tricky host-library dependencies, it's substantially easier to do most of your development in an ordinary host build environment and use Docker only at a late stage.

Docker - accessing files inside container from host

I am new to docker.
I ran a node:10 image, and inside the running container I cloned a repository and ran the app, which started a server with a file watcher. I need to access the codebase inside the container and open it up in an IDE running on the Windows host. Once that is done, I also want changes I make to the files in the IDE to trigger the file watcher in the container.
Any help is appreciated. Thanks,
The concept you are looking for is called volumes. You need to start a container and mount a host directory inside it. For the container, it will be a regular folder, and it will create files in it. For you, it will also be a regular folder. Changes made by either side will be visible to the other.
docker run -v /a/local/dir:/a/dir/in/your/container <image>
Note though that you can run into permission issues that you will need to figure out separately.
It depends on what you want to do with the files.
There is the docker cp command that you can use to copy files to/from a container.
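For example (the container name and paths here are hypothetical), copying the code out of a running container looks like:
docker cp my-container:/usr/src/app ./app-src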
However, it sounds to me like you are using docker for development, so you should mount a volume instead, that is, you mount a directory on the host as a volume in docker, so anything written to that directory will show up in the container, and vice versa.
For instance if you have the code base that you develop against in C:\src on your windows machine, then you run docker like docker run -v c:\src:/app where /app is the location that node is looking in. However, for Windows there are a few things to consider since Docker is not native in Windows, so have a look at the documentation first.
I think you should mount a volume for the source code and edit your code from your IDE as usual:
docker run -it -v "$PWD":/app -w /app -u node node:10 yarn dev
Here Docker will start a container from the node:10 image, set the working directory to /app, mount the current directory to /app, and run yarn dev at startup as the node user (a non-root user).
Hope this is helpful.

Cannot install inside docker container

I'm quite new to Docker, and I'm facing a problem I have no idea how to solve.
I have a jenkins (Docker) image running and everything was fine. A few days ago I created a job so I can run my nodejs tests every time a pull request is made. One of the job's build steps is to run npm install, and the job is constantly failing with this error:
tar (child): bzip2: Cannot exec: No such file or directory
So, I know that I have to install bzip2 inside the jenkins container, but how do I do that? I've already tried to run docker run jenkins bash -c "sudo apt-get bzip2" but I got: bash: sudo: command not found.
With that said, how can I do that?
Thanks in advance.
The answer to this lies in the philosophy of Docker containers. Docker containers are/should be immutable. So, this is what you can try to fix this issue:
Treat your base image, i.e. jenkins, as the starting point.
Log in to this base image and install bzip2.
Commit these changes; this should result in a new image.
Now use the image from step 3 to install any other package, like npm.
Commit the above image as well.
Note: to execute commands in a more controlled way, I always prefer to use something like this:
docker exec -it jenkins bash
In a nutshell, the answer to both of your current issues lies in the fact that images are immutable, so the way to make a change that propagates is to commit it and use the newly created image for further changes. I hope this helps.
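Assuming the running container is named jenkins, as in the docker exec example above, those steps could look roughly like this (the jenkins-with-bzip2 tag is just a placeholder):
# Install the missing package as root inside the running container
docker exec -u root jenkins bash -c "apt-get update && apt-get install -y bzip2"
# Commit the modified container as a new image
docker commit jenkins jenkins-with-bzip2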
Lots of issues here, but the biggest one is that you need to build your images with the tools you need rather than installing inside of a running container. As techtrainer mentions, images are immutable and don't change (at least from your running container), and containers are disposable (so any changes you make inside them are lost when you restart them unless your data is stored outside the container in a volume).
I do disagree with techtrainer on making your changes in a container and committing them to an image with docker commit. This will work, but it's a hand-built method that is very error prone and not easily reproduced. Instead, you should leverage a Dockerfile and use docker build. You can either modify the jenkins image you're using by directly modifying its Dockerfile, or you can create a child image that is FROM jenkins:latest.
When modifying this image, note that the Jenkins image is configured to run as the user "jenkins", so you'll need to switch to root to perform your application installs. The "sudo" app is not included in most images, but external to the container you can run docker commands as any user. From the CLI, that's as easy as docker run -u root .... And inside your Dockerfile, you just need a USER root at the top and then USER jenkins at the end.
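A minimal sketch of such a child image, assuming a Debian-based jenkins base image and bzip2 as the package from the question:
FROM jenkins:latest
# Switch to root to install OS packages, then drop back to the jenkins user
USER root
RUN apt-get update && apt-get install -y bzip2 && rm -rf /var/lib/apt/lists/*
USER jenkins
Build it with something like docker build -t my-jenkins . and run that image in place of the stock one.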
One last piece of advice is to not run your builds directly on the jenkins container, but rather run agents with your needed build tools that you can upgrade independently from the jenkins container. It's much more flexible, allows you to have multiple environments with only the tools needed for that environment, and if you scale this up, you can use a plugin to spin up agents on demand so you could have hundreds of possible agents to use and only be running a handful of them concurrently.

docker: installing a node.js application has issues, since docker runs as root

I set up a Docker instance via pull ubuntu and then via base-image/docker, and then successfully installed node.js on top of this.
However, when I attempt to pull the repo of a node.js app that I'm working on, I get to an npm install action and then run into trouble, because that action expects NOT to be run as root, and I have instantiated the container via
docker run -name="{name}" -t -i {my custom docker container mirroring base-image} /bin/bash
which has logged me in as root. Is there a way to run docker not as root?
Yes -- you'll need to create the other user account inside the container according to whatever your container's Linux distro expects (here is an Ubuntu example).
Once you've got the user account set up, you can use the Dockerfile USER parameter to run the remaining commands in the Dockerfile as that user. Please see the PostgreSQL example for a full use case.
Where did the postgres user come from in that example? Debian packages create any users they need when they are installed. If you would like to create your own user, you could add a RUN useradd instruction to your Dockerfile. For a full example, you could look at the Jira Dockerfile in this Atlassian blog.
As the operator you can also decide the user account to use at docker runtime, using the -u parameter. This would override the USER chosen in the Dockerfile.
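A minimal sketch of that Dockerfile pattern, with a hypothetical username and app path, would be something like:
# Create an unprivileged user for the npm install step
RUN useradd --create-home appuser
# Give that user ownership of the app source and switch to it
COPY --chown=appuser:appuser . /home/appuser/app
USER appuser
WORKDIR /home/appuser/app
# npm install now runs as appuser instead of root
RUN npm install
Alternatively, docker run -u appuser ... at runtime has the same effect without changing the Dockerfile, as noted above.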
