Executable script gets permission denied on Linux box

I'm trying to run a script on Ubuntu 14.04.
$ bash MirroirHome
runs fine
but
$ ./MirroirHome
bash: ./MirroirHome: Permission denied
$ ls -l
total 32
-rwxr-xr-x 1 gerald gerald 214 nov 14 15:44 MirroirHome
I am the owner of the file and the execute bit is set, so what is going on?
Here is the script in case it matters.
#!/bin/bash
rsync \
    --archive \
    --verbose \
    --compress \
    --update \
    --delete \
    /home/ /media/Data/MirroirHome

This can happen if the partition is mounted with the noexec flag. You can verify this by running mount: find the partition in the output and look for noexec in its list of flags.
To resolve this, remount the partition without the noexec flag, or copy the script to a partition that is already mounted without noexec.
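For concreteness, here is a minimal sketch of the check and the fix; the device (/dev/sdb1) and mount point (/media/Data) are placeholders for whatever mount reports on your system:
# Look for noexec in the flags of the filesystem holding the script
$ mount | grep /media/Data
/dev/sdb1 on /media/Data type ext4 (rw,noexec,relatime)
# Remount in place with exec enabled (needs root)
$ sudo mount -o remount,exec /media/Data
# The script should now run directly
$ ./MirroirHome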

Related

Why do commands like "su" and "passwd" in busybox not work properly?

Background:
I am debugging Linux kernel 6.0 with qemu-system-x86_64. The start command line is as follows:
qemu-system-x86_64 -kernel ./bzImage -initrd ./rootfs.img -serial stdio -append " console=ttyS0 nokaslr"
The initrd rootfs.img is built from busybox-1.35.0 using the following commands:
$ make menuconfig #choose [*] Build static binary (no shared libs)
$ make && make install
$ cd _install
$ ls
bin linuxrc sbin usr
$ mkdir -p dev proc etc sys/kernel/debug sys/dev
$ vim init
The init file is filled with:
#!/bin/sh
echo "{==DBG==} INIT SCRIPT"
mkdir /tmp
mount -t proc none /proc
mount -t sysfs none /sys
mount -t debugfs none /sys/kernel/debug
mount -t tmpfs none /tmp
mdev -s
echo -e "{==DBG==} Boot took $(cut -d' ' -f1 /proc/uptime) seconds"
# normal user
setsid /bin/cttyhack setuidgid 1000 /bin/sh
$ find . | cpio -o --format=newc > ./rootfs.img
================================================================
The problem:
When I run qemu-system-x86_64 -kernel ./bzImage -initrd ./rootfs.img -serial stdio -append " console=ttyS0 nokaslr" to start QEMU, the kernel boots successfully. But when I run "su", the problem occurs:
{==DBG==} INIT SCRIPT
{==DBG==} Boot took 2.63 seconds
/ $ su
su: must be suid to work properly
/ $
================================================================
What I tried:
I tried to google the problem, but I only found advice about escalating privileges.
Then I tried:
/ $ cd bin
/bin $ chmod u+s busybox
/bin $ ls -l busybox
-rwsr-xr-x 1 1000 1000 2408664 Oct 11 12:57 busybox
/bin $ su
su: must be suid to work properly
/bin $
Obviously the 'solution' failed.
================================================================
So what can I do to solve this problem? Or what causes this problem? Any help would be appreciated! Thanks in advance!
The suid bit that you added with chmod u+s busybox makes the program run as the owner of /bin/busybox, which, as you can see, is UID 1000, not root.
So you want to change /bin/busybox to be owned by root:
$ chown root:root /bin/busybox
But you won't be able to do that from within your non-root shell; you must make this change in the initrd image rootfs.img itself.
It probably makes sense to have all files in the image owned by root. You don't need to change the ownership in the host file system, because you can do it while building the image:
$ find . | cpio -o --format=newc --owner=+0:+0 > ./rootfs.img
(The added --owner=+0:+0 option makes cpio record every file in the archive as owned by UID 0, GID 0, i.e. root.)
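Note that the suid bit itself must also be present in the image: the chmod u+s you ran inside the guest lives only in RAM and is lost when the initrd is rebuilt, so set it on the host in _install before packing. A minimal sketch, assuming the build steps from the question:
$ chmod u+s bin/busybox   # on the host, inside _install, before packing
$ find . | cpio -o --format=newc --owner=+0:+0 > ./rootfs.img
After rebooting with the rebuilt image you should see something like the following inside the guest (output illustrative), after which su should no longer complain:
/ $ ls -l /bin/busybox
-rwsr-xr-x 1 root root 2408664 Oct 11 12:57 /bin/busybox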

Where would I find the kernel .config file in Ubuntu Linux?

I'm copying my kernel config file from an existing system to the kernel tree, and I entered this command:
/boot/config-$(uname -r)
Yet I got:
bash: /boot/config-5.15.0-46-generic: Permission denied
Does anyone know why it's saying permission denied and how to fix it? I am using Ubuntu in VirtualBox.
As @Tsyvarev mentioned in the comments, you are trying to execute a file that has no exec permissions:
$ ls -l /boot
...
-rw-r--r-- 1 root root 217414 Aug 20 2021 config-5.4.0-rc1+
...
Try running cat /boot/config-$(uname -r) to read the config file for the currently running kernel.
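If the end goal is to seed a kernel build with the running system's config, a minimal sketch (the ~/linux path is a placeholder for your kernel source tree):
# Copy the running kernel's config into the source tree as .config
$ cp /boot/config-$(uname -r) ~/linux/.config
# Then reconcile it with the (possibly newer) source
$ cd ~/linux
$ make olddefconfig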

How to install wget on LFS system

I am pretty new to Linux and started LFS because I needed it for school. My system is now running perfectly with an internet connection, but I still don't have a package manager or anything like it. The first binary I would like to have is wget, but I really don't know how to go about it.
Could someone explain it to me, please?
I personally used (and would highly recommend) using the existing Linux system (the host) to download the wget package and its dependencies before booting your LFS system for the first time. However, seeing that you're already using your LFS system: if you can still log in using the host, use it to download wget as if it were one of the sources that you fetched when building the LFS system.
For me, I used a Linux Mint host running in VirtualBox to build my LFS. To get wget I just had to re-attach the Linux Mint host storage, download wget, and add it to the LFS sources. I then removed the Linux Mint host storage, logged in to my LFS machine, and followed the steps in BLFS.
Note: this is mainly just from parts of LFS and the wget page of BLFS.
1. Boot into your host OS.
2. Enter the following commands in the command line to get into the chroot (edit depending on your partitions and where you mount LFS):
sudo su -
export LFS=/mnt/lfs
mount -vt ext4 /dev/sda4 $LFS
mount -v --bind /dev $LFS/dev
mount -vt devpts devpts $LFS/dev/pts -o gid=5,mode=620
mount -vt proc proc $LFS/proc
mount -vt sysfs sysfs $LFS/sys
mount -vt tmpfs tmpfs $LFS/run
if [ -h $LFS/dev/shm ]; then
  mkdir -pv $LFS/$(readlink $LFS/dev/shm)
fi
chroot "$LFS" /usr/bin/env -i \
    HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
    PATH=/bin:/usr/bin:/sbin:/usr/sbin \
    /bin/bash --login
3. Download wget from http://ftp.gnu.org/gnu/wget/wget-1.19.1.tar.xz and copy it into /mnt/lfs/sources from your host OS.
4. Unpack and cd into it with:
tar -xf wget-1.19.1.tar.xz
cd wget-1.19.1
5. Configure and install wget with:
./configure --prefix=/usr \
            --sysconfdir=/etc \
            --with-ssl=openssl &&
make
make install
6. Delete the wget-1.19.1 folder if you want, and you're done!
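As a quick sanity check once the install finishes (run inside the chroot or the booted LFS system; the URL is just an example):
# Confirm the binary is on the PATH and reports its version
$ wget --version
# Fetch a page to verify networking and the openssl TLS backend
$ wget https://www.gnu.org/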

Handle permissions with groups in Linux

I can't understand how exactly this works in Linux.
For example, I want only users in some group to be able to execute some file (I hope this is possible without visudo).
I create a system user and system group like:
useradd -K UID_MIN=100 -K UID_MAX=499 -K GID_MIN=100 -K GID_MAX=499 -p \* -s /sbin/nologin -c "testusr daemon,,," -d "/var/testusr" testusr
I add my current user, user, to the group testusr (maybe not cross-platform):
adduser user testusr
I create some test shell file and set permissions:
touch test.sh
chmod ug+x test.sh
sudo chown testusr:testusr test.sh
But I still can't start test.sh as user:
./test.sh
-> Error
Now I look at some system groups like cdrom to check how they work. My user is in the cdrom group and can use the CD-ROM drive on my computer:
$ ls -al /dev/cdrom
lrwxrwxrwx 1 root root 3 апр. 17 12:55 /dev/cdrom -> sr0
$ ls -al /dev/sr0
brw-rw----+ 1 root cdrom 11, 0 апр. 17 12:55 /dev/sr0
Addition:
The ./test.sh command started working as I wanted after a system reboot. Strange...
I'm on Ubuntu Studio 15.10
The group changes are reflected only upon re-login.
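If you don't want to log out (or reboot), you can start a shell that already has the new membership; a minimal sketch:
# Show the groups the current session knows about
$ id -nG
# Start a subshell with testusr as the effective group
$ newgrp testusr
# Or re-initialize the whole login session
$ su - user
newgrp re-reads the group database, so the freshly added membership becomes visible without rebooting.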

Running app inside Docker as non-root user

After yesterday's news of Shocker, it seems like apps inside a Docker container should not be run as root. I tried to update my Dockerfile to create an app user, but changing permissions on app files (while still root) doesn't seem to work. I'm guessing this is because some LXC permission is not being granted to the root user?
Here's my Dockerfile:
# Node.js app Docker file
FROM dockerfile/nodejs
MAINTAINER Thom Nichols "thom@thomnichols.org"
RUN useradd -ms /bin/bash node
ADD . /data
# This next line doesn't seem to have any effect:
RUN chown -R node /data
ENV HOME /home/node
USER node
RUN cd /data && npm install
EXPOSE 8888
WORKDIR /data
CMD ["npm", "start"]
Pretty straightforward, but when I ls -l everything is still owned by root:
[ node@ed7ae33e76e1:/data {docker-nonroot-user} ]$ ls -l /data
total 64K
-rw-r--r-- 1 root root 383 Jun 18 20:32 Dockerfile
-rw-r--r-- 1 root root 862 Jun 18 16:23 Gruntfile.js
-rw-r--r-- 1 root root 1.2K Jun 18 15:48 README.md
drwxr-xr-x 4 root root 4.0K May 30 14:24 assets/
-rw-r--r-- 1 root root 416 Jun 3 14:22 bower.json
-rw-r--r-- 1 root root 930 May 30 01:50 config.js
drwxr-xr-x 4 root root 4.0K Jun 18 16:08 lib/
drwxr-xr-x 42 root root 4.0K Jun 18 16:04 node_modules/
-rw-r--r-- 1 root root 2.0K Jun 18 16:04 package.json
-rw-r--r-- 1 root root 118 May 30 18:35 server.js
drwxr-xr-x 3 root root 4.0K May 30 02:17 static/
drwxr-xr-x 3 root root 4.0K Jun 18 20:13 test/
drwxr-xr-x 3 root root 4.0K Jun 3 17:38 views/
My updated Dockerfile works great, thanks to @creak's clarification of how volumes work. Once the initial files are chowned, npm install is run as the non-root user. And thanks to a postinstall hook, npm runs bower install && grunt assets, which takes care of the remaining install steps and avoids any need to npm install -g node CLI tools like bower, grunt, or coffeescript.
Check this post: http://www.yegor256.com/2014/08/29/docker-non-root.html At rultor.com we run all builds in their own Docker containers, and every time before running the scripts inside the container, we switch to a non-root user. This is how:
adduser --disabled-password --gecos '' r
adduser r sudo
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
su -m r -c /home/r/script.sh
r is the user we're using.
Update 2015-09-28
I have noticed this post getting a bit of attention. A word of advice for anyone who is potentially interested in doing something like this: I would try to use Python or another language as a wrapper for your script executions. With native bash scripts I had problems when trying to pass a variety of arguments through to my containers. Specifically, there were issues with the interpretation/escaping of " and ' characters by the shell.
I needed to change the user for a slightly different reason.
I created a Docker image housing a full-featured install of ImageMagick and FFmpeg, so that I could do transformations on images/videos within my host OS. My problem was that these are command-line tools, so it is slightly trickier to execute them via Docker and then get the results back into the host OS. I managed to allow for this by mounting a Docker volume. This seemed to work okay, except that the image/video output came out owned by root (i.e. the user the Docker container was running as) rather than the user who executed the command.
I looked at the approach that @François Zaninotto mentioned in his answer (you can see the full make script here). It was really cool, but I preferred the option of creating a bash shell script that I would then register on my path. I took some of the concepts from the Makefile approach (specifically the user/group creation) and then I created the shell script.
Here is an example of my dockermagick shell script:
#!/bin/bash
### VARIABLES
DOCKER_IMAGE='acleancoder/imagemagick-full:latest'
CONTAINER_USERNAME='dummy'
CONTAINER_GROUPNAME='dummy'
HOMEDIR='/home/'$CONTAINER_USERNAME
GROUP_ID=$(id -g)
USER_ID=$(id -u)
### FUNCTIONS
create_user_cmd()
{
  echo \
    groupadd -f -g $GROUP_ID $CONTAINER_GROUPNAME '&&' \
    useradd -u $USER_ID -g $CONTAINER_GROUPNAME $CONTAINER_USERNAME '&&' \
    mkdir --parent $HOMEDIR '&&' \
    chown -R $CONTAINER_USERNAME:$CONTAINER_GROUPNAME $HOMEDIR
}
execute_as_cmd()
{
  echo \
    sudo -u $CONTAINER_USERNAME HOME=$HOMEDIR
}
full_container_cmd()
{
  echo "'$(create_user_cmd) && $(execute_as_cmd) $@'"
}
### MAIN
eval docker run \
  --rm=true \
  -a stdout \
  -v $(pwd):$HOMEDIR \
  -w $HOMEDIR \
  $DOCKER_IMAGE \
  /bin/bash -ci $(full_container_cmd $@)
This script is bound to the 'acleancoder/imagemagick-full' image, but that can be changed by editing the variable at the top of the script.
What it basically does is:
Creates a user and group within the container to match the user who executes the script from the host OS.
Mounts the current working directory of the host OS (using Docker volumes) into the home directory of the user created within the executing Docker container.
Sets that home directory as the working directory for the container.
Passes along any arguments given to the script, which are then executed by the '/bin/bash' of the executing Docker container.
Now I am able to run the ImageMagick/Ffmpeg commands against files on my host OS. For example, say I want to convert an image MyImage.jpeg into a PNG file, I could now do the following:
$ cd ~/MyImages
$ ls
MyImage.jpeg
$ dockermagick convert MyImage.jpeg Foo.png
$ ls
Foo.png MyImage.jpeg
I have also attached to 'stdout', so I can run the ImageMagick identify command to get info on an image on my host, e.g.:
$ dockermagick identify MyImage.jpeg
MyImage.jpeg JPEG 640x426 640x426+0+0 8-bit DirectClass 78.6KB 0.000u 0:00.000
There are obvious dangers in mounting the current directory and allowing any arbitrary command definition to be passed along for execution. But there are also many ways to make the script more safe/secure. I am executing this in my own non-production personal environment, so these are not of the highest concern for me. But I would highly recommend you take the dangers into consideration should you choose to expand upon this script. It's also worth mentioning that this script doesn't take an OS X host into consideration. The make file that I stole ideas/concepts from does take this into account, so you could extend this script to do so.
Another limitation to note is that I can only refer to files under the directory from which I am executing the script. This is because of the way I am mounting the volumes, so the following would not work:
$ cd ~/MyImages
$ ls
MyImage.jpeg
$ dockermagick convert ~/DifferentDirectory/AnotherImage.jpeg Foo.png
$ ls
MyImage.jpeg
It's best just to go to the directory containing the image and execute against it directly. Of course I am sure there are ways to get around this limitation too, but for me and my current needs, this will do.
This one is a bit tricky; it is actually due to the image you start from.
If you look at the source, you notice that /data/ is a volume. So everything you do in the Dockerfile will be discarded and overridden at runtime by the volume that gets mounted then.
You can chown at runtime by changing your CMD to something like CMD chown -R node /data && npm start.
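Spelled out against the Dockerfile in the question, a minimal sketch (shell form, so both commands run under /bin/sh -c):
# Runs at container start, after the volume is mounted,
# so the chown is not discarded the way it is at build time
CMD chown -R node /data && npm start
Note that for the runtime chown to succeed the container must still start as root, so the USER node line would have to go in this variant; you can still drop privileges for the app itself, e.g. with su node -c 'npm start'.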
Note: I answer here because, given the generic title, this question pops up in Google when you look for a solution to "Running app inside Docker as non-root user". Hope it helps those who are stranded here.
With Alpine Linux you can create a system user like this:
RUN adduser -D -H -S -s /bin/false -u 1000 myuser
After you switch with USER myuser, everything in the Dockerfile from that point on is executed as myuser.
The myuser user has:
no password assigned
no home dir
no login shell
no root access.
This is from adduser --help:
-h DIR Home directory
-g GECOS GECOS field
-s SHELL Login shell
-G GRP Add user to existing group
-S Create a system user
-D Don't assign a password
-H Don't create home directory
-u UID User id
-k SKEL Skeleton directory (/etc/skel)
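Putting it together, a minimal Dockerfile sketch (the base image tag, the /app path, and the /app/run entrypoint are illustrative):
FROM alpine:3.18
# Create the locked-down system user described above
RUN adduser -D -H -S -s /bin/false -u 1000 myuser
COPY app /app
# Switch to the unprivileged user for everything that follows
USER myuser
CMD ["/app/run"]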
Note: This answer is given because many people looking for non-root usage will end up here. Beware: it does not address the issue that caused the problem; it addresses the title and clarifies the answer given by @yegor256, which uses a non-root user inside the container. This answer explains how to accomplish that for the non-Debian/non-Ubuntu use case. It does not address the issue with volumes.
On Red Hat-based systems, such as Fedora and CentOS, this can be done in the following way:
RUN adduser user && \
echo "user ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/user && \
chmod 0440 /etc/sudoers.d/user
In your Dockerfile you can run commands as this user by doing:
RUN su - user -c "echo Hello $HOME"
And the command can be run as:
CMD ["su","-","user","-c","/bin/bash"]
An example of this can be found here:
https://github.com/gbraad/docker-dev/commit/644c51002f4b8e6fe5bb745638542a4c3d908b16
