Failed opening the RDB file ... Read-only file system - linux

I'm trying to perform a save or bgsave on my Redis instance to run through the backup/restore process. However, I'm getting errors when I try to save:
532:M 28 Jun 23:58:30.396 # Failed opening the RDB file backup.rdb (in server root dir /var/lib/redis) for saving: Read-only file system
Permissions on the /var/lib/redis folder:
/var/lib$ ls -artl | grep redis
drwxrwxrwx 3 redis redis 4096 Jun 28 23:58 redis
Permissions on the /var/lib folder:
/var$ ls -artl | grep lib
drwxrwxrwx 31 root root 4096 Jun 28 23:44 lib
Permissions on the /var folder:
/$ ls -artl | grep var
drwxrwxrwx 11 root root 4096 Jul 18 2016 var
Redis CLI output for config get dir:
1) "dir"
2) "/var/lib/redis"
Redis CLI output for config get dbfilename:
1) "dbfilename"
2) "backup.rdb"
Any help would be much appreciated!

You need to add the following to your redis-server systemd unit file (/etc/systemd/system/redis-server.service):
ReadWriteDirectories=-/var/lib/redis
Note that /var/lib/redis is the default; if your /etc/redis/redis.conf sets a different dir config option, point ReadWriteDirectories at that directory instead.
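A minimal sketch of applying this via a systemd drop-in, assuming the unit is named redis-server.service (the exact unit name may differ by distribution):
sudo systemctl edit redis-server
# add the following in the editor that opens:
[Service]
ReadWriteDirectories=-/var/lib/redis
# then restart the service:
sudo systemctl restart redis-server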

The error says Read-only file system.
So check how the file system (/ or /var) is mounted; if it is mounted read-only, remount it in read-write (rw) mode.
Take a backup of important data before remounting.
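For example, to check the mount options and remount (the mount point is illustrative; substitute the file system the error refers to):
findmnt -n -o OPTIONS /var    # look for "ro" in the options
sudo mount -o remount,rw /var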

Related

Linux Named Pipe Mounted on Docker Volume Showing as Regular File

I am trying to use a named pipe to run certain commands from a dockerised guest application to the host.
I am aware of the risks and this is not public facing, so please no comments about not doing this.
I have a named pipe configured on the host using:
sudo mkfifo -m a+rw /path/to/pipe/file
When I check the created pipe permissions with ls -la file, it shows the pipe has been created and intended permissions are set.
prw-rw-rw- 1 root root 0 Feb 2 11:43 file
When I then test the input by catting a command into the pipe from the host, this runs successfully.
Input
echo "echo test" > file
Output
[!] Starting listening on named pipe: file
test
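For reference, the host-side listener script is not shown in the question; a minimal sketch of one that would produce the output above could look like this (the pipe path is taken from above, and the use of eval is an assumption):
#!/bin/sh
PIPE=/path/to/pipe/file
echo "[!] Starting listening on named pipe: $PIPE"
while true; do
    # read blocks until a writer sends a line into the pipe
    if read -r cmd < "$PIPE"; then
        eval "$cmd"
    fi
done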
The problem appears to be within my Docker container. I have created a volume and mounted the named pipe from the host. However, when I start an sh session and run ls -l, the file named pipe appears to be a normal file, without the p flag and the permissions present on the host.
/hostpipe # ls -la
total 12
drwxr-xr-x 2 root root 4096 Feb 1 16:25 .
drwxr-xr-x 1 root root 4096 Feb 2 11:44 ..
-rw-r--r-- 1 root root 11 Feb 2 11:44 file
Running the same echo "echo test" > file (or similar commands) does not work from within the guest.
The host is a Linux desktop running on bare metal.
Linux desktop 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
And the guest is an Alpine image
FROM python:3.8-alpine
and
Linux b16a4357fcf5 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 Linux
Any idea what is going wrong here?
The issue was how the container was being set up. I was using a regular named volume, which is meant for persisting data, not for mounting host drives and files. I had to change my volume definition to the long form with type: bind.
Using volumes without the bind type does not expose host file system functionality; it only allows data sharing.
Before
volumes:
  - static_data:/vol/static
  - ./web:/web
  - /opt/named_pipes/:/hostpipe
After
volumes:
  - static_data:/vol/static
  - ./web:/web
  - type: bind
    source: /opt/named_pipes/
    target: /hostpipe
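Note that this long (type/source/target) volume syntax requires Compose file format version 3.2 or newer.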

Unable to write to a file with group permissions

We are getting the error "permission denied" when trying to write to a file that is owned by a service user and a shared group. Specifically, the file is owned by www-data:www-data, and the user trying to write to it is in the group www-data.
There is no ACL on any of the parent folders, and the permissions on the file and folders are correct.
Here some details:
$ sudo -u deploy id -Gn
www-data
$ ls -lah /tmp
drwxrwxrwt 17 root root 4.0K Jul 11 11:22 .
drwxr-xr-x 23 root root 4.0K Jul 8 10:08 ..
...
-rw-rw-r-- 1 www-data www-data 0 Jul 11 10:50 test
...
$ echo 'hello world' | sudo -u deploy tee -a /tmp/test
tee: /tmp/test: Permission denied
hello world
We tried that in different folders and made sure there is no ACL on any of the folders, their parents, or the files...
Unfortunately this is not described in the link stark posted in the comments, nor on any other page I found, until I came across an answer here on Stack Overflow that clarified it.
In 2018, two new file system settings were added to sysctl that prevent regular files and FIFOs from being opened with the O_CREAT flag (which append mode uses) in directories with the sticky bit set, unless the user is the owner of the file. This commit added the settings: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=30aba6656f61ed44cba445a3c0d38b296fa9e8f5
To change that behaviour you have to set fs.protected_regular to 0:
sudo sysctl fs.protected_regular=0
Or to persist the change add fs.protected_regular=0 to your sysctl.conf.
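For example, to apply it immediately and persist it across reboots (the drop-in file name under /etc/sysctl.d/ is illustrative):
sudo sysctl fs.protected_regular=0
echo 'fs.protected_regular=0' | sudo tee /etc/sysctl.d/60-protected-regular.conf
sudo sysctl --system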
Side note: since O_CREAT does not delete or rename the file, I wonder why it is connected to the sticky bit. After all, it is still possible to create a new file in directories with the sticky bit set.

file zip/tar in linux at specific location

I want to zip a set of directories and files on my CentOS 8 VM.
There are 3 directories and 1 file which I want to zip in such a way that, after unzipping, only the env.conf file ends up at /etc/env.txt, while the remaining directories are unzipped at the current location.
Is there any way to achieve this?
drwxr-xr-x. 9 root root 114 Feb 25 12:40 config
-rw-r--r--. 1 root root 340 Feb 25 09:01 env.conf
drwxr-xr-x. 9 root root 4096 Feb 28 05:11 platform
drwxr-xr-x. 2 root root 135 Feb 28 07:49 install
I don't think this is possible. In fact, it would be considered a vulnerability if you could do that.
Imagine you download a zip file from some website, and after you unzip it in a temp folder, it registers itself as a service by writing a file somewhere in /etc and gets control over your PC.
Example: zip-slip
You could, however, create a one-liner that extracts the archive and moves the file wherever you want, like this:
unzip <filename> && mv env.conf /etc/env.txt
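If GNU tar is available, an equivalent sketch that renames the file on extraction rather than moving it afterwards (the archive name and the need for sudo are assumptions):
tar -xf bundle.tar --exclude='env.conf'
sudo tar -xf bundle.tar --transform='s|^env\.conf$|env.txt|' -C /etc env.conf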

Programmatically create a btrfs file system whose root directory has a specific owner

Background
I have a test script that creates and destroys file systems on the fly, used in a suite of performance tests.
To avoid running the script as root, I have a disk device /dev/testdisk that is owned by a specific user testuser, along with a suitable entry in /etc/fstab:
$ ls -l /dev/testdisk
crw-rw---- 1 testuser testuser 21, 1 Jun 25 12:34 /dev/testdisk
$ grep testdisk /etc/fstab
/dev/testdisk /mnt/testdisk auto noauto,user,rw 0 0
This allows the disk to be mounted and unmounted by a normal user.
Question
I'd like my script (which runs as testuser) to programmatically create a btrfs file system on /dev/testdisk such that the root directory is owned by testuser:
$ mount /dev/testdisk /mnt/testdisk
$ ls -la /mnt/testdisk
total 24
drwxr-xr-x 3 testuser testuser 4096 Jun 25 15:15 .
drwxr-xr-x 3 root root 4096 Jun 23 17:41 ..
drwx------ 2 root root 16384 Jun 25 15:15 lost+found
Can this be done without running the script as root, and without resorting to privilege escalation (use of sudo) within the script?
Comparison to other file systems
With ext{2,3,4} it's possible to create a filesystem whose root directory is owned by the current user, with the following command:
mkfs.ext{2,3,4} -F -E root_owner /dev/testdisk
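(Per the mke2fs man page, root_owner defaults to the uid/gid of the calling process; an explicit owner can also be given as root_owner=uid:gid.)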
Workarounds I'd like to avoid (if possible)
I'm aware that I can use the btrfs-convert tool to convert an existing (possibly empty) ext{2,3,4} file system to btrfs format. I could use this workaround in my script (by first creating an ext4 filesystem and then immediately converting it to brtfs) but I'd rather avoid it if there's a way to create the btrfs file system directly.
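For completeness, that workaround would look roughly like this (an illustrative sketch; it assumes btrfs-progs is installed and the device is user-writable as shown above):
mkfs.ext4 -F -E root_owner /dev/testdisk
btrfs-convert /dev/testdisk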

anacron script in cron.daily not running via symlink

What can I do to make this script run daily?
If I manually run the script, it works. I can see that it did what it's supposed to do. (backup files) However, it will not run as a cron.daily script. I've let it go for days without touching it -- and it never runs.
The actual script is here /var/www/myapp/backup.sh
There is a symlink to it here /etc/cron.daily/myapp_backup.sh -> /var/www/myapp/backup.sh
The cron log at /var/log/cron shows anacron running this script:
Aug 19 03:09:01 ip-123-456-78-90 anacron[31537]: Job `cron.daily' started
Aug 19 03:09:01 ip-123-456-78-90 run-parts(/etc/cron.daily)[31545]: starting myapp_backup.sh
Aug 19 03:09:01 ip-123-456-78-90 run-parts(/etc/cron.daily)[31559]: finished myapp_backup.sh
Yet there is no evidence that the script actually did anything.
Here is the security info on these files:
ls -la /etc/cron.daily
<snip>
lrwxrwxrwx 1 root root 25 Aug 12 21:18 myapp_backup.sh -> /var/www/myapp/backup.sh
</snip>
ls -la /var/www/myapp
<snip>
drwxr-xr-x 2 root root 4096 Aug 13 13:55 .
drwxr-xr-x 10 root root 4096 Jul 12 01:00 ..
-rwxr-xr-x 1 root root 407 Aug 12 23:37 backup.sh
-rw-r--r-- 1 root root 33 Aug 12 21:13 list.txt
</snip>
The file called list.txt is used by backup.sh.
The script just runs tar to create an archive.
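The original script is not shown in the question; based on the description, it is along these lines (paths and archive name are illustrative):
#!/bin/sh
# Archive everything listed in list.txt into a dated tarball
cd /var/www/myapp
tar -czf "backup-$(date +%F).tar.gz" -T list.txt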
From the cron manpage of a Debian/Ubuntu system:
the files under these directories have to pass some sanity checks including the following: be executable, be owned by root, not be writable by group or other and, if symlinks, point to files owned by root. Additionally, the file names must conform to the filename requirements of run-parts: they must be entirely made up of letters, digits and can only contain the special signs underscores ('_') and hyphens ('-'). Any file that does not conform to these requirements will not be executed by run-parts. For example, any file containing dots will be ignored.
So:
the file needs to be owned by root
if a symlink, the target file needs to be owned by root
if a symlink, the link name should NOT contain dots
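In the setup above, the script and the symlink are already owned by root, so the rule most likely being violated is the dot in the link name myapp_backup.sh. A hedged fix, if the dot rule applies on this system, is to reinstall the script under a dot-free name instead of the symlink:
sudo rm /etc/cron.daily/myapp_backup.sh
sudo cp /var/www/myapp/backup.sh /etc/cron.daily/myapp_backup
sudo chmod 755 /etc/cron.daily/myapp_backup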
I had a similar situation with cron.hourly and awstats processing.
I THINK it is related to SELinux and anacron not having the same powers/permissions as cron.
The ACTUAL solution defeated me (so far).
MY WORKAROUND SOLUTION: Run the job via root's cron entries (crontab -e) and simply schedule it hourly.
