My app uses log4j and writes its logs to directory A, which is in the root directory. I want to move the logs to a mounted drive without making any change in the application.
Can I use a soft (symbolic) link to do this? I have created a symlink like this:
ln -s A mounted_drive_directory
But I still see logs written to directory A.
The syntax is ln [OPTION]... [-T] TARGET LINK_NAME, so your argument order is wrong. You'll also have to delete (or move) A first before creating the link, or there will be a filename conflict.
You could also use a bind mount for that, e.g. mount --rbind /mounted/drive/directory /full/path/to/A, but it has to be done on each system boot (or saved in /etc/fstab to be executed automatically at boot).
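If you go the bind-mount route, a line along these lines in /etc/fstab (reusing the example paths above) should recreate the bind on boot:
/mounted/drive/directory  /full/path/to/A  none  rbind  0  0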
ln works a little differently:
the first argument is the real file or directory, the second is the symlink.
mv /root/A /root/B;
ln -s mounted_drive_directory /root/A;
After setting up my Raspberry Pi, I made an image to make reverting to older software states easier. Recently I wanted to do that, so I saved the contents of my /home/pi folder, formatted the SD card and wrote the image onto it.
So far everything worked fine. Then I tried to simply delete the complete /home/pi folder and replace it with my previously saved folder from the old image. Now it seems like all the files are there, but it doesn't boot correctly.
At some point it just stops booting. I can still use the terminal normally, but the desktop does not start.
So, how can I replace my home directory the right way so that I don't do any damage to the system?
edit:
I just tried to do this again.
sudo cp -a /home/pi/fileserver/backup /home/backup
(I mounted a network drive at fileserver. Since the share is on Windows, I assume all permissions are already lost at this point.)
cp -a /home/pi/. /home/original
sudo umount /home/pi/fileserver
rm -r /home/pi/
mv /home/backup /home/pi
sudo chmod -R 755 /home/pi (So far everything still works)
sudo reboot
After the reboot it doesn't boot correctly anymore. When I wait long enough I see errors from the X server.
That's quite a doubtful approach to archiving the data. First of all, as you mentioned, Windows will remove the permission bits. Running chmod -R 755 afterwards has very bad consequences, because some programs require very specific access bits on certain files in order to work (SSH keys, for example). Not to mention that making everything executable is bad for security.
Considering your scenario, you may either
a) back up everything into tar or zip archives - this way permissions will stay intact, or
b) make a virtual disk image file which is stored on the shared Windows drive and mounted at /home/pi.
How to do scenario A:
cd /home/pi
tar cvpzf backup.tar.gz .
Copy backup.tar.gz to shared drive
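One caveat: since the archive is created inside the directory being packed, depending on the tar version it may warn about or skip its own output file. A minimal sketch that excludes it explicitly (the archive name matches the example above):
cd /home/pi
tar cvpzf backup.tar.gz --exclude=./backup.tar.gz .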
to unpack:
cd /home/pi
tar xpvzf backup.tar.gz
Pros:
One-line backup
Takes small amount of space
Cons:
Packing/unpacking takes time
How to do scenario B:
1) Create a new file to hold the virtual drive volume:
cd /mnt/YourNetworkDriveMountPoint
fallocate -l 500M HomePi.img
or, if fallocate is not available:
dd if=/dev/zero of=HomePi.img bs=1M count=500
mkfs -t ext3 HomePi.img
2) Mount it to home dir
mount -t auto -o loop HomePi.img /home/pi/
500 means the disk will be 500 megabytes in size
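If you want the image attached automatically at boot, a line roughly like this in /etc/fstab should do it (the image path here is an assumption, and if the image lives on the network share, that share has to be mounted first):
/mnt/YourNetworkDriveMountPoint/HomePi.img  /home/pi  ext3  loop  0  0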
This way your whole Pi home directory will be saved as a single file on the Windows shared drive, but all the content will be in ext3, so all permissions are preserved.
I suggest, though, that you keep the current image file on the Pi device itself and the old versions on the shared drive. Just copy the files over if you need to switch, because otherwise, if all images are on the shared drive, read/write performance will be 100% dependent on network speed.
You can then easily make copies of this file and swap them instantly by unmounting the existing image and mounting the new one.
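A minimal sketch of such a swap (image names and locations are assumptions for illustration):
umount /home/pi
cp /mnt/YourNetworkDriveMountPoint/HomePi-old.img /root/HomePi.img
mount -t auto -o loop /root/HomePi.img /home/pi/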
Pros:
Easy swap between backup versions
Completely transparent process
Cons:
If current image file is on shared drive, performance will be reduced
It will consume considerably more space because all 500 megs will be preallocated.
Pi user must be logged off during image swap for obvious reasons
Now, as for the issue with the desktop not being displayed, you need to check /var/log/Xorg.0.log for detailed messages. Likely this is caused by messed-up permissions. I would try to rename/remove your current Xorg settings and cache, which are located somewhere in /home/pi/.config/ (depends on what you're using - XFCE, Gnome, etc.), and let the X server recreate them. But again, before doing this please check Xorg.0.log for the exact messages - maybe there's another error. If you need any further help, please comment on this answer.
First off, I am using Bubblewrap as the sandboxing software, but I feel like this is a general mounting issue rather than a Bubblewrap one. I am trying to add bwrap to a sandbox wrapper called sandboxlib; the details are not important, other than the tests that are run.
One particular test tries to mount the sandbox / from "/foo/bar". This contains two subdirectories, data and bin.
The bin directory simply contains a binary called 'test-file-is-writable'.
If I run:
$ /usr/bin/bwrap --ro-bind /foo/bar / --tmpfs /data test-file-is-writable data/1/canary
Couldn't open data/1/canary for writing.
HOWEVER, mounting / as writable works
$ /usr/bin/bwrap --bind /foo/bar / --tmpfs /data test-file-is-writable data/1/canary
Wrote data to data/1/canary.
However, I only want /data to be writable, and I am assuming the rest of / is read-only.
Adding in a remount as read-only still doesn't fix things:
$ /usr/bin/bwrap --ro-bind /foo/bar / --tmpfs /data --remount-ro / test-file-is-writable data/1/canary
Couldn't open data/1/canary for writing.
Debugging this further, I added the mounts/paths required to drop into an interactive shell inside the sandbox:
$ /usr/bin/bwrap --bind /foo/bar / --tmpfs /data --ro-bind /lib /lib --ro-bind /lib64 /lib64 --ro-bind /bin /usr/bin --remount-ro / bash
Running a simple ls of / shows everything is mounted as expected, and testing read/write works fine. The issue, however, is that the /data directory is totally empty (other than the output of my 'touch /data/testwrite'). Note that the original /data partition I wanted to mount actually contains files.
Q. Am I not understanding the mounting here? Or are the tests wrong?
The only workaround I can see is to copy the files over from the original read-only /data to the newly write-mounted /data.
data/1/canary is a relative path and the current directory is not the root directory, so you are trying to write somewhere else.
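One way to check this (assuming the relative path is meant to resolve against the sandbox root) is to pin the working directory with bwrap's --chdir option; whether the write then succeeds still depends on data/1 existing inside the tmpfs:
$ /usr/bin/bwrap --ro-bind /foo/bar / --tmpfs /data --chdir / test-file-is-writable data/1/canary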
this happens when I have
an executable that is in the /tmp directory (say /tmp/a.out)
it is run by a root shell
Linux
SELinux on (the default for RedHat, CentOS, etc.)
Apparently trying to run an executable that sits in the /tmp directory as root revokes the privileges. Any idea how to get around this issue, other than turning off SELinux? Thanks.
You can set a file context on the binary (or on the directory containing it) in /tmp that you want to run.
sudo semanage fcontext -a -t bin_t /tmp/location
Then restorecon:
sudo restorecon -vR /tmp/location
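To confirm the relabel took effect (reusing the placeholder path above), you can inspect the SELinux context with ls -Z; it should now show bin_t:
ls -Z /tmp/location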
Just have a look at the mount options for the /tmp directory; most probably you have the noexec option on it (there are many security reasons for doing that, the first being that anyone can put a file in the /tmp directory).
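A quick way to check is findmnt, which prints the active mount options of whatever filesystem /tmp lives on:
findmnt -no OPTIONS -T /tmp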
I'm running zsh on a Raspberry Pi 2 (Raspbian Jessie). zsh compinit is complaining about the /tmp directory being insecure. So, I checked the permissions on the directory:
$ compaudit
There are insecure directories:
/tmp
$ ls -ld /tmp
drwxrwxrwt 13 root root 16384 Apr 10 11:17 /tmp
Apparently anyone can do anything in the /tmp directory, which makes sense, given its purpose. So I tried the suggestions in this stackoverflow question. I also tried similar suggestions on other sites. Specifically, they suggest turning off group write permissions on that directory. Because of how the permissions looked according to ls -ld, I had to turn off the 'all' write permission as well. So:
$ sudo su
% chmod g-w /tmp
% chmod a-w /tmp
% exit
$ compaudit
# nothing shows up, zsh is happy
This shut zsh up. However, other programs started to break. For example, gnome-terminal would crash whenever I typed the letter 'l'. Because of this, I had to turn the write permissions back on, and just run compinit -u in my .zshrc.
What I want to know: is there any better way to fix this? I'm not sure that it's a great idea to let compinit use an insecure directory. My dotfiles repo is hosted here, and the file where I now run compinit -u is here.
First, the original permissions on /tmp were correct. Make sure you've restored them correctly: ls -ld /tmp must start with drwxrwxrwt. You can use sudo chmod 1777 /tmp to set the correct permissions. /tmp is supposed to be writable by everyone, and any other permissions are highly likely to break stuff.
compaudit complains about directories in fpath, so one of the directories in your fpath is of the form /tmp/… (not necessarily /tmp itself). Check how fpath is being set. Normally the directories in fpath should be only subdirectories of the zsh installation directory, and places in your home directory. A subdirectory of /tmp wouldn't get in there without something unusual on your part.
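To see what is actually in there, you can print the array from an interactive zsh, one directory per line:
% print -rl -- $fpath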
If you can't find out where the stray directory is added to fpath, run zsh -x 2>zsh-x.log, and look for fpath in the trace file zsh-x.log.
It can be safe to use a directory under /tmp, but only if you created it securely. The permissions on /tmp allow anybody to create files, but users can only remove or rename their own files (that's what the t at the end of the permissions means). So if a directory is created safely (e.g. with mktemp -d), it's safe to use it in fpath. compaudit isn't sophisticated enough to recognize this case, and in any case it wouldn't have enough information since whether the directory is safe depends on how it was created.
On one of our remote systems, mkdir -p $directory fails when the directory exists; it shows
mkdir: cannot create directory '$directory': file exists
This is really puzzling, as I believed the contract of -p was that it always succeeds when the directory already exists. And it works on the other systems I tried.
There is a user test on all of these systems, and directory=/home/test/tmp.
This could be caused by a file with the same name already existing at that location.
Note that a directory cannot contain both a file and a folder with the same name on Linux machines.
Check whether there is a file (not a directory) with the same name as $directory.
mkdir -p won't create the directory if a file with that name already exists; otherwise it will work as expected.
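A quick check, using the same $directory variable as the question:
ls -ld -- "$directory"
If that shows a regular file rather than a directory, that file is what is blocking mkdir -p.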
Was your directory a FUSE-based network mount by any chance?
In addition to a file with that name already existing (other answer), this can happen when a FUSE process that once mounted something at this directory crashed (or was killed, e.g. with kill -9 or via the Linux OOM killer).
Check the output of mount to see whether the FUSE mount is still listed there. If so, you should be able to unmount it and fix the situation using fusermount -uz.
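For example (the /mymount path is the same placeholder used further below):
mount | grep fuse
fusermount -uz /mymount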
To see what is happening in detail, run strace -fy mkdir -p $directory, which shows all syscalls involved and their return values.
I consider the error messages emitted in this case a bug in mkdir -p (in particular the gnulib library):
When you run it on a dir that had a FUSE process mounted but that process crashed, it says
mkdir: cannot create directory ‘/mymount’: File exists
which is highly inaccurate, because the underlying stat() call returns ENOTCONN (Transport endpoint is not connected); but mkdir propagates up the less specific error from the previous mkdir() syscall.
It's extra confusing because the man page says:
-p, --parents
no error if existing, make parent directories as needed
so it shouldn't error if the dir exists, yet ls -l / shows:
d????????? ? ? ? ? ? files
so according to this (d), it is a directory, but it isn't according to test -d.
I believe a better error message (which mkdir -p should emit in this case) would be:
mkdir: cannot create directory ‘/mymount’: Transport endpoint is not connected