I'm trying to create a TeamCity build agent on Docker. I pulled the official image and tried to start it with the default configuration using the command below:
docker run -d --name teamcity-agent -e SERVER_URL="http://teamcity-server-instance:80" -v /opt/docker/teamCity/teamcity_agent/conf:/data/teamcity_agent/conf jetbrains/teamcity-agent
but it exits with code 1 every time I run it; the logs are below.
Can anyone suggest a solution to this?
Thank you in advance.
Did you try changing the mode? sudo chmod -R 777 /opt/docker
This gives the file owner, the group members, and others read and write permission over that tree (plus the execute bit, which directories need in order to be traversed).
You can check the permissions you currently have via: ls -la /opt
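If opening up permissions that far is not desirable, a tighter fix is to hand ownership of the mounted directory to the UID the agent runs as inside the container. A minimal sketch (the UID 1000 is an assumption; check it first):
# Check which UID/GID the agent gets inside the image (assumed here to be 1000:1000)
docker run --rm --entrypoint id jetbrains/teamcity-agent
# Give that UID/GID ownership of the host directory backing the bind mount
sudo chown -R 1000:1000 /opt/docker/teamCity/teamcity_agent/conf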
I am trying to deploy the DB2 Express image to Docker using a non-root user.
The Dockerfile below starts the DB2 engine as the root user, and it works fine:
FROM ibmoms/db2express-c:10.5.0.5-3.10.0
ENV LICENSE=accept \
DB2INST1_PASSWORD=password
RUN su - db2inst1 -c "db2start"
CMD ["db2start"]
The Dockerfile below starts the DB2 engine from the db2inst1 profile, but it throws the exception below during the image build. Please help me resolve this (I am trying to avoid the su - command):
FROM ibmoms/db2express-c:10.5.0.5-3.10.0
ENV LICENSE=accept \
DB2INST1_PASSWORD=password
USER db2inst1
RUN /bin/bash -c ~db2inst1/sqllib/adm/db2start
CMD ["db2start"]
SQL1641N The db2start command failed because one or more DB2 database manager program files was prevented from executing with root privileges by file system mount settings.
Can you show us your Dockerfile please?
It's worth noting that a Dockerfile is used to build an image. You can execute commands while building, but once an image is published, running processes are not maintained in the image definition.
This is the reason that the CMD directive exists, so that you can tell the container which process to start and encapsulate.
If you're using the pre-existing db2 image from IBM on DockerHub (docker pull ibmcom/db2), then you will not need to start the process yourself.
Their quickstart guide demonstrates this with the following example command:
docker run -itd --name mydb2 --privileged=true -p 50000:50000 -e LICENSE=accept -e DB2INST1_PASSWORD=<choose an instance password> -e DBNAME=testdb -v <db storage dir>:/database ibmcom/db2
As you can see, you only specify the image, and leave the default ENTRYPOINT and CMD, resulting in the DB starting.
Their recommendation for building your own container on top of theirs (FROM) is to load all custom scripts into /var/custom, and they will be executed automatically after the main process has started.
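For example, a minimal sketch of that pattern (the script name and its contents are hypothetical):
FROM ibmcom/db2
# Per the image docs referenced above, scripts placed in /var/custom are
# executed automatically after the main DB2 process has started
COPY create-testdb.sh /var/custom/create-testdb.sh
RUN chmod a+x /var/custom/create-testdb.sh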
My company PC runs Win10 and I can't bring my own laptop, but I want to develop on Linux. So I'm planning to install Docker on Windows and run a Linux container in which I do my development.
--- background ---
I've installed Docker Desktop for Windows (19.03.8) on Win10 and pulled an Ubuntu image.
I start the Ubuntu container with -v to mount my Win10 host_dir onto the container's slave_dir.
host_dir was already a git repo, with a .git directory inside it.
Connected over ssh as the root user, I edit files in slave_dir in the container, and when I try to commit the changes the following error appears:
root@5f8d7d02ee70:~/slave_dir# git status
fatal: failed to read object 36fa53e7ecb9d1daa454fc82f7bd7310afa335b7: Operation not permitted
I guess something is wrong with the git permissions between Win10 and my Linux container.
Permissions of the Linux container's slave_dir and of Win10's host_dir (screenshots omitted).
I also found a similar case, in which the blogger said you should run docker with --user, and that the --user parameter should match the user you log in as on the host.
So I tried the following:
docker run -it --name test --user Leo -p 127.0.0.1:5001:5001 -v host_dir:slave_dir image_name /bin/bash
Unfortunately, the slave_dir's uid and gid are still root.
With Cygwin on Win10, I used id to find my login user's uid and gid,
and retried docker run with the uid/gid directly:
docker run -it --name test --user 4344296:1049089 -p 127.0.0.1:5001:5001 -v host_dir:slave_dir image_name /bin/bash
OMG, it still doesn't work! Still root!
I'm wondering whether my procedure is wrong, or whether Docker Desktop for Windows plays some trick with permissions when mounting.
Thanks all!
It looks like a problem with Docker 2.2.0.4. A fix for this problem can be found at this link (it worked for me).
TL;DR: Remove the read-only attribute from the .git folder in Windows.
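If you prefer the command line over the folder-properties dialog, something like this should do it (a sketch; run it from host_dir in a Windows command prompt):
rem Clear the read-only attribute on everything under .git, files and folders alike
attrib -R .git\* /S /D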
I am new to OpenShift.
I have deployed an application in OpenShift. When I checked the logs, there were permission-denied errors for some files.
Now I want to change the permissions on the container that is already deployed in OpenShift, but I am getting an "Operation not permitted" warning. How do I fix this?
This is for Linux running the latest version of MongoDB. I have already tried executing RUN chmod 777 /path/to/directory in my Dockerfile, built the image, and pulled that same image in the YAML file that I deploy to OpenShift. When I check my Docker container, the permissions are changed for that directory, but when I deploy, I still get "permission denied" in the logs.
FROM node:10.16.3
RUN apt update && apt install -y openjdk-8-jdk
RUN useradd -ms /bin/bash admin
# Set the workdir /var/www/myapp
WORKDIR /var/www/myapp
# Copy the package.json to workdir
COPY package.json .
# Run npm install - install the npm dependencies
RUN npm install
RUN npm install sqlite3
# Copy application source
COPY . .
RUN chown -R admin:admin /var/www/myapp
RUN chmod 775 /var/www/myapp
USER admin
# Copy .env.docker to workdir/.env - use the docker env
#COPY .env.docker ./.env
# Expose application ports - (4300 - for API and 4301 - for front end)
# EXPOSE 4300 4301
EXPOSE 52000
CMD [ "npm", "start" ]
Although the permissions are changed when I build from the Dockerfile, when I deploy to OpenShift I get permission denied for some files in that directory.
By default, any container started in OpenShift gets a random user ID. Therefore, images not designed to handle such a random UID will fail with permission errors.
To get your image working, I recommend you read the following article in the latest OpenShift documentation: https://docs.openshift.com/container-platform/4.2/openshift_images/create-images.html
Here is the short version that relates to your issue:
SUPPORT ARBITRARY USER IDS
By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node.
For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image:
RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory
Because the container user is always a member of the root group, the container user can read and write these files. The root group does not have any special permissions (unlike the root user), so there are no security concerns with this arrangement. In addition, the processes running in the container must not listen on privileged ports (ports below 1024), since they are not running as a privileged user.
Adding this to my Dockerfile worked:
RUN chgrp -R 0 /some/directory && \
chmod -R g=u /some/directory
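You can simulate OpenShift's arbitrary-UID behavior locally before deploying; a quick sketch (the UID 12345 is arbitrary and your-image is a placeholder):
# Run as a random non-root UID in the root group, as OpenShift would,
# and check that the app directory is group-writable
docker run --rm --user 12345:0 your-image ls -la /var/www/myapp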
The following changes in the Dockerfile solved the issue for me:
# Add user
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo -u 1001 ubuntu
# Add permissions
RUN chown -R ubuntu:root /home/ubuntu && \
    chgrp -R 0 /home/ubuntu && \
    chmod -R 775 /home/ubuntu
# Specify the user by UID
USER 1001
OpenShift runs pods with an arbitrary UID by default. You can check which UID is used in the pod with oc rsh <pod name> id; your pod might not be running as the admin user you specified.
The container you built requires admin permissions, so you should grant the anyuid SCC to the default service account to avoid the permission error.
Further information about SCCs is here: Enable Container Images that Require Root.
e.g., grant the SCC to the default service account, which is used to run pods by default:
$ oc adm policy add-scc-to-user anyuid -z default
$ oc delete pod <your pod name>
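After the pod is recreated, you can verify which UID the container actually runs as:
$ oc rsh <your pod name> id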
FYI, before building your image, reading General Container Image Guidelines is helpful for building images to be used on OpenShift.
I'm working through a Kubernetes on Azure tutorial (here) and everything has worked perfectly until I run kubectl apply to configure the cluster:
bash-4.4# kubectl apply -f azure-vote-all-in-one-redis.yaml
error: the path "azure-vote-all-in-one-redis.yaml" does not exist
I found this question, which would be right on point if I were trying to use a URL for the file.
I've tried:
kubectl apply -f azure-vote-all-in-one-redis.yaml
and
kubectl apply -f /Users/bill/Documents/GitHub/azure-voting-app-redis/azure-vote-all-in-one-redis.yaml
The kubectl command is run from the AZ CLI (I'm using VSCode, with everything configured for Azure and Docker; no problems seeing anything).
If I run ls from the AZ CLI I get:
bash-4.4# ls
azure-cli dev home media proc run srv tmp var
bin etc lib mnt root sbin sys usr
bash-4.4#
I've also looked through the docs for the AZ CLI and kubectl, and every indication is that it should simply work. I also tried kubectl from the console, which obviously didn't work...
As far as I know, with most commands like this, when you execute the command with a file argument you should either be in the same directory as the file or pass an absolute path to the file.
So you can execute the command in two ways, as sketched below:
First, you can cd into the directory containing azure-vote-all-in-one-redis.yaml and then execute kubectl apply -f azure-vote-all-in-one-redis.yaml.
Second, you can use an absolute path. How do you get the path? Go into the directory and execute pwd, or search for the file with find / -name azure-vote-all-in-one-redis.yaml.
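For example (a sketch; /path/to is a placeholder for wherever you cloned the repo):
# Option 1: run from the directory that contains the manifest
cd /path/to/azure-voting-app-redis
kubectl apply -f azure-vote-all-in-one-redis.yaml
# Option 2: pass the file's absolute path (print it with pwd from that directory)
kubectl apply -f /path/to/azure-voting-app-redis/azure-vote-all-in-one-redis.yaml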
Update: I tested this myself (screenshot omitted).
Hope it will help you!
I am trying to run a Docker image with
docker run -it -v $PWD/examples:/home/user/examples image
which should make $PWD/examples on the host accessible in the container. However, when I ls in the container, it keeps giving me:
ls: cannot access 'examples': Permission denied
I have tried the answers to similar questions (the z/Z volume options, chcon -Rt svirt_sandbox_file_t /host/path/, and running with --privileged), but none of them has any effect in my case.
In fact, the z option appears to work the first time I run ls, but when I run ls a second time access is denied again.
In the comments it turned out that there is probably a USER instruction in the Dockerfile of the image. That user is not allowed to access examples because of the file access permissions on examples.
It is possible to override USER with the docker run option --user.
A quick and dirty solution is to run with --user=root to allow arbitrary access.
Be aware that files written as root in the container to the folder examples will be owned by root.
A better solution is to look up the owner of examples; call him foo. Specify his user ID and group ID to get exactly the same user in the container:
docker run --user $(id -u foo):$(id -g foo) imagename
Another possible solution is to open up access with chmod 777 examples or chmod 755 examples (as a directory, examples needs the execute bit to be entered), but most probably you don't want that.
The best way would be to look at the Dockerfile and check the purpose of the USER instruction.
If it only serves to avoid running as root in the container, the best option is --user=foo, or more precisely --user=$(id -u foo):$(id -g foo).
If something in the Dockerfile/image relies on the specific USER, it may be best to change the access permissions of examples.
If you have access to the Dockerfile, you may adjust it to fit your host user / the owner of examples.
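If you do control the Dockerfile, one common pattern is to make the container user's UID/GID configurable at build time so they can match the owner of examples. A minimal sketch (the ARG names and the user name appuser are assumptions, not taken from the original image):
FROM ubuntu:22.04
# Hypothetical build args: pass the host owner's IDs at build time
ARG UID=1000
ARG GID=1000
# Create a matching user inside the image (the name appuser is arbitrary)
RUN groupadd -g ${GID} appuser && \
    useradd -m -u ${UID} -g ${GID} appuser
USER appuser
Build it with docker build --build-arg UID=$(id -u foo) --build-arg GID=$(id -g foo) -t imagename . so the in-container user lines up with foo on the host.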
Try running the container as privileged:
sudo docker run --privileged=true -itd -v /***/***:/*** ubuntu bash
For example:
sudo docker run --privileged=true -itd -v /home/willie:/wille ubuntu bash