I have installed Jenkins on Linux RHEL7 and made the changes below to run all my Jenkins jobs as the root user rather than the jenkins user.
The changes I made:
Inside file /etc/sysconfig/jenkins:
JENKINS_USER="root" (the default is "jenkins")
Changed the ownership of the Jenkins directories using the following commands:
sudo chown -R root:root /var/lib/jenkins
sudo chown -R root:root /var/cache/jenkins
sudo chown -R root:root /var/log/jenkins
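After these changes, restart the Jenkins service and confirm the daemon actually runs as root (a quick sanity check, assuming systemd on RHEL7):
sudo systemctl restart jenkins
ps -o user= -p "$(pgrep -f jenkins.war | head -1)"   # should print: root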
Now when I create a job named "snjiii", it runs correctly, i.e. as the root user:
Console Output
Started by user FSQA ADMIN
Running as SYSTEM
Building on master in workspace /var/lib/jenkins/workspace/snjiii
[snjiii] $ /bin/sh -xe /tmp/jenkins8170584152885597665.sh
+ whoami
root
Finished: SUCCESS
But when I create a different job named "snjiii_newjob_l1_rhl7", it runs as the jenkins user:
Console Output
Started by user FSQA ADMIN
Running as SYSTEM
Building remotely on Slave_1_14 (Redhat_linux) in workspace /var/lib/jenkins/workspace/snjiii_newjob_l1_rhl7
[snjiii_newjob_l1_rhl7] $ /bin/sh -xe /tmp/jenkins2597327522922634209.sh
+ whoami
jenkins
Finished: SUCCESS
I need to create jobs that run as the root user regardless of the naming convention. How do I do it?
The difference was not the job name: the first job built on the master, where the Jenkins daemon now runs as root, while the second built remotely on Slave_1_14, whose agent still runs as the jenkins user. Even though I was running the job on my master, I resolved this by creating a master-slave configuration for it: I created a node with the master's credentials and used it as my slave, i.e. I restricted my project to run on a "slave" machine that is actually the master itself. Since that node connects using SSH credentials, the agent runs as the user those credentials specify, which resolved my issue.
Related
I had changed the permissions with sudo chmod -R 777 /home on the machine where Jenkins is hosted (/home/centos). After that I was not able to log in to the machine, so I created one more instance, detached the Jenkins disk, attached it to the new machine, changed the permissions back, and then reattached the disk to the Jenkins machine. Since then I am able to log in, but when starting Jenkins the service reports active (exited).
As it is a permissions issue, I tried setting JENKINS_USER="root" and it worked fine.
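For reference, the lockout after chmod -R 777 is expected: sshd with StrictModes enabled (the default) refuses key-based logins when the home directory or the ~/.ssh tree is group- or world-writable. Restoring conservative permissions is usually enough to avoid the disk-swap workaround (a sketch, assuming key-based login for the centos user):
chmod 755 /home/centos                        # home dir must not be group/world-writable
chmod 700 /home/centos/.ssh                   # .ssh accessible by the owner only
chmod 600 /home/centos/.ssh/authorized_keys   # keys readable by the owner only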
I am new to OpenShift.
I have deployed an application in OpenShift. When I checked the logs, there is a permission denied error for some files.
Now I want to change the permissions on the container that is already deployed in OpenShift, but I am getting an "Operation not permitted" warning. How do I fix this?
This is for Linux running the latest version of MongoDB. I have already tried executing RUN chmod 777 /path/to/directory in my Dockerfile, created the image, and pulled the same image in my YAML file, which I am deploying in my OpenShift. However, when I check my Docker container, it shows that the permissions are changed for that directory, but when I deploy, I get the "permission denied" warning in my logs.
FROM node:10.16.3
RUN apt update && apt install -y openjdk-8-jdk
RUN useradd -ms /bin/bash admin
# Set the workdir /var/www/myapp
WORKDIR /var/www/myapp
# Copy the package.json to workdir
COPY package.json .
# Run npm install - install the npm dependencies
RUN npm install
RUN npm install sqlite3
# Copy application source
COPY . .
RUN chown -R admin:admin /var/www/myapp
RUN chmod 775 /var/www/myapp
USER admin
# Copy .env.docker to workdir/.env - use the docker env
#COPY .env.docker ./.env
# Expose application ports - (4300 - for API and 4301 - for front end)
# EXPOSE 4300 4301
EXPOSE 52000
CMD [ "npm", "start" ]
Although when I run my Dockerfile the permissions are changed, when I try to deploy to my OpenShift I get permission denied for some files in that directory.
By default, any container started in OpenShift gets a random user ID, so images not designed to handle such a random UID will fail with permission errors.
To get your image working, I recommend you read the following article from the latest OpenShift documentation: https://docs.openshift.com/container-platform/4.2/openshift_images/create-images.html
Here is the short version that relates to your issue:
SUPPORT ARBITRARY USER IDS
By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node.
For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image:
RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory
Because the container user is always a member of the root group, the container user can read and write these files. The root group does not have any special permissions (unlike the root user) so there are no security concerns with this arrangement. In addition, the processes running in the container must not listen on privileged ports (ports below 1024), since they are not running as a privileged user.
Adding this to my Dockerfile worked:
RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory
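Applied to the Dockerfile from the question, that would look roughly like this (a sketch using the /var/www/myapp path from that Dockerfile; the placement after the final COPY and the switch to a numeric USER are assumptions based on the guidelines above):
COPY . .
# group 0 owns the tree and the group gets the owner's permissions,
# so OpenShift's arbitrary UID (always a member of GID 0) can write here
RUN chgrp -R 0 /var/www/myapp && \
    chmod -R g=u /var/www/myapp
USER 1001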
The following changes in the Dockerfile solved the issue for me
# Add user
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo -u 1001 ubuntu
# Add permissions
RUN chown -R ubuntu:root /home/ubuntu && \
    chgrp -R 0 /home/ubuntu && \
    chmod -R 775 /home/ubuntu
# Specify the user with its UID
USER 1001
OpenShift runs the pod with an arbitrary UID by default. You can check which UID is used in the pod with oc rsh <pod name> id; your pod might not be running as the admin user you specified.
The container you built requires admin permissions, so you should configure the anyuid SCC for the default service account to avoid the permission error.
Further information about SCCs is here: Enable Container Images that Require Root.
For example, when the default service account is used to run the pod (the default behavior):
$ oc adm policy add-scc-to-user anyuid -z default
$ oc delete pod <your pod name>
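Once the pod has been recreated, you can confirm the change took effect (a hedged check; OpenShift records the applied SCC in the openshift.io/scc pod annotation):
oc rsh <pod name> id                                      # shows the UID the pod now runs as
oc get pod <pod name> -o yaml | grep 'openshift.io/scc'   # shows which SCC was applied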
FYI, before building your image, checking the General Container Image Guidelines here is helpful for building images to be used on OpenShift.
I have a Dockerfile with an entrypoint that starts a Java app jar in a shell process. I have set USER nobody in the Dockerfile before the entrypoint to make sure the entrypoint shell is executed by the nobody user and not root. The task definition for the container on AWS ECS takes care of the volume and mount parameters, so ideally at container run time /var/log/applogs on the host is mounted onto the container at the same path. But because the user context is switched from root to nobody, the nobody user running the container (executing the entrypoint) doesn't have write permission to the /var/log/applogs/ directory in the container, so the app is unable to create its log file there.
What I see is that the host EC2 /var/log/applogs gets mounted just fine on the container run by the nobody user, but since nobody is missing write permissions on this path inside the container, it is unable to write the app log there.
What is the correct way to fix this, either in the Dockerfile or elsewhere?
You can try this: remove the USER nobody entry from the Dockerfile and change your entrypoint into the following script:
#!/bin/bash
# change ownership to user=nobody, group=nobody
chown -R nobody:nobody /var/log/applogs
# open with sudo a new shell as user nobody and call your application
sudo -u nobody /bin/bash -c "java -jar /path/to/your/application.jar"
# alternative
# open with su a new shell as user nobody and call your application
# su -s /bin/bash -c "java -jar /path/to/your/application.jar" nobody
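In the Dockerfile, wiring this in would look something like the sketch below (the entrypoint.sh name is hypothetical; note there is deliberately no USER instruction, so the script starts as root and can chown the mounted directory before dropping to nobody):
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# no USER instruction: the container starts as root, the script fixes
# ownership of the mount, then launches the app as nobody via sudo/su
ENTRYPOINT ["/entrypoint.sh"]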
I have a simple Tomcat Dockerfile, shown below.
FROM tomcat:7
MAINTAINER ***
COPY sample.war /opt/tomcat/webapps
USER tomcat
RUN chown -R tomcat:tomcat /tmp/
CMD ["catalina.sh", "run"]
When I create the Docker image, it gives me the file permissions:
tomcat tomcat Dec 11 08:04 tmp
My sample app creates a few directories inside the tmp folder. I am expecting the owner to be tomcat, but it looks like root. Since I am running the container as the user tomcat, how do I make it use the tomcat user to create these?
I tried to build and run the Dockerfile you provided and ran into multiple errors. You are asking about the file permissions your "app" creates, so here is my starting point:
I assume that the "app" is catalina.sh, the process which creates your files in /tmp/. Since we are running the container as the user tomcat, it automatically creates the files with the corresponding permissions. Have a look at the code comments below for more information about what is going on here.
Dockerfile:
FROM httpd
# I switched the image since you would need to configure tomcat to make it run;
# httpd works out of the box without any application-specific config.
# copy your 'app' / entrypoint into the image and make it executable
COPY ./catalina.sh /
RUN chmod +x /catalina.sh
# create a new user and group under which the container runs
RUN groupadd -r tomcat \
    && useradd -r -g tomcat tomcat
# change the initial file permissions so that our new user tomcat
# is allowed to access and fill the directory with files
RUN chown -R tomcat:tomcat /tmp/
# switch the user under which the container runs
USER tomcat
ENTRYPOINT ["/catalina.sh"]
catalina.sh:
#!/bin/bash
mkdir /tmp/test # create a dir
touch /tmp/yolo.txt # and some files
touch /tmp/test/yolo2.txt # to check the file permissions later on
while true; do echo "sleep"; sleep 2; done
# endless loop so that the container doesn't exit
To check the file permissions, exec into the running container:
docker exec -ti <container_name> bash
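Inside the container, a listing such as the following should show the files created by catalina.sh owned by the tomcat user (an expected result given the Dockerfile above, not verified output):
ls -l /tmp   # test/, yolo.txt and test/yolo2.txt should list tomcat tomcat as owner/group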
Docker normally runs with root privileges, so I believe you must create a docker group (if it does not exist) and add the user (tomcat, in your case) to the docker group.
See below how to add a user to the docker group:
Create the docker group.
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker tomcat
Reference: https://docs.docker.com/engine/installation/linux/linux-postinstall/#manage-docker-as-a-non-root-user
With this, I believe your problem can be solved.
I have a cron job that needs to be run under ec2-user on my EC2 instance and it needs to be able to write to the standard log files for my web app. However, the log files are owned by webapp (as per normal).
I've successfully changed the permissions on the log files so that they are accessible by both the owner and the group webapp:webapp. But where I'm running into trouble is when I try to add the ec2-user to the webapp group.
I can do it fine over SSH with sudo usermod -a -G webapp ec2-user, but when I try to add this command via EB container_commands, I get an error saying that you must have a tty to run sudo. Running the command without sudo gives me /bin/sh: usermod: command not found.
Does anybody know of another way to add ec2-user to the webapp group via the Elastic Beanstalk deployment config?
Not sure about the issue with the sudoers file, but generally a cleaner way to add a user to a group (than manually executing a command) is to use the users section of the .ebextensions file. For example, in .ebextensions/something.config:
users:
  ec2-user:
    groups:
      - webapp
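After the next deployment, you can verify the membership over SSH (a simple check, assuming shell access to the instance):
id ec2-user   # should list webapp among the groups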
You should not use sudo; the deploy script is run by root.
Also, this is a server command, so do it in the commands section instead of the container_commands section.
commands:
  01_set_user_role:
    command: "usermod -a -G webapp ec2-user"
You need to run this command from a container_command before executing any commands with sudo:
echo Defaults:root \!requiretty >> /etc/sudoers
In context (in .ebextensions/yourconf.config):
container_commands:
  001-enableroot:
    # disables the error about needing a tty for sudo, allows running without a cli
    command: echo Defaults:root \!requiretty >> /etc/sudoers
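With that in place, the usermod call from the question can follow as a later container_command (a sketch; the 002_add_group key is hypothetical, and container_commands run in lexicographic order, so it executes after the sudoers fix):
container_commands:
  001-enableroot:
    command: echo Defaults:root \!requiretty >> /etc/sudoers
  002_add_group:
    command: sudo usermod -a -G webapp ec2-user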