I have a folder with the SGID bit on:
lucas@arturito:/home$ ls -l | grep share
drwxrwsr-x 11 share sambashare 4096 May 5 14:54 share
If I move into share and create a folder within it, that folder will have the group 'sambashare'. So far, so good...
lucas@arturito:/home$ cd share/
lucas@arturito:/home/share$ mkdir test
lucas@arturito:/home/share$ ls -l | grep test
drwxrwsr-x 2 lucas sambashare 4096 May 5 15:07 test
Now, if I move under /home/share/test and create a new folder, that new folder inherits the group: SGID is working.
lucas@arturito:/home/share$ cd test
lucas@arturito:/home/share/test$ mkdir test1
lucas@arturito:/home/share/test$ ls -l | grep test1
drwxrwsr-x 2 lucas sambashare 4096 May 5 15:09 test1
However, /home/share contains other folders besides the newly created 'test'. If I move into any of those and create a new folder (say 'test2'), that new folder ignores the SGID and gets my group.
lucas@arturito:/home/share$ ls -l | grep 99
drwxrwxr-x 9 share sambashare 4096 May 5 15:11 99_varios
lucas@arturito:/home/share$ cd 99_varios/
lucas@arturito:/home/share/99_varios$ mkdir test2
lucas@arturito:/home/share/99_varios$ ls -l | grep test2
drwxrwxr-x 2 lucas lucas 4096 May 5 15:11 test2
Why is that happening? Isn't it enough for /home/share to have g+s for any other directory below it (new or old) to inherit /home/share's group?
I'm lost. Any hint or idea will be highly appreciated!
Thanks!
Lucas
New folders will inherit the bit, but existing ones need to have it set explicitly. You can run the command below once to recursively set it on any existing subfolders.
find /home/share -type d -exec chmod g+s '{}' \;
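A quick way to check that this took effect, using a throwaway tree standing in for /home/share (the paths here are illustrative only):

```shell
# Scratch tree with some pre-existing subdirectories.
tmp=$(mktemp -d)
mkdir -p "$tmp/share/old1" "$tmp/share/old2"

# Apply the setgid bit to every existing directory, as above.
find "$tmp/share" -type d -exec chmod g+s '{}' \;

# 'test -g' is true when a file's setgid bit is set.
for d in "$tmp/share" "$tmp/share/old1" "$tmp/share/old2"; do
    [ -g "$d" ] && echo "setgid is set on $d"
done
rm -rf "$tmp"
```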
I have a folder with a .tmux.conf file under source control, and I would like to copy that file over to ~. Here is an ls of that:
ubuntu@ip-172-180:~$ ls -alh .vim/others
total 12K
drwxrwxr-x 2 ubuntu ubuntu 4.0K May 2 19:05 .
drwxrwxr-x 6 ubuntu ubuntu 4.0K May 2 19:05 ..
-rw-rw-r-- 1 ubuntu ubuntu 706 May 2 19:05 .tmux.conf
However, when I do ls on that directory, I get nothing:
ubuntu@ip-172-30-1-180:~$ ls .vim/others/*
ls: cannot access '.vim/others/*': No such file or directory
Same with cp:
ubuntu@ip-172-30-1-180:~$ cp .vim/others/* .
cp: cannot stat '.vim/others/*': No such file or directory
Is there some additional parameter I have to add when copying over dot files?
By default the shell's * glob does not match names that begin with a dot, so .vim/others/* expands to nothing and ls/cp are handed the literal pattern. Use a pattern that matches dot files explicitly:
ls -ld .[!.]*
ls -ld .vim/others/[!.]*
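A self-contained demonstration of why the original glob failed, using throwaway paths and a POSIX shell:

```shell
# '*' does not match names beginning with a dot.
tmp=$(mktemp -d)
touch "$tmp/.tmux.conf" "$tmp/visible"

echo "$tmp"/*        # expands to 'visible' only
echo "$tmp"/.[!.]*   # expands to '.tmux.conf'

# Naming the dot file (or a dot-matching pattern) explicitly works:
cp "$tmp"/.tmux.conf "$tmp"/copy.conf
ls -A "$tmp"
rm -rf "$tmp"
```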
The program I'm running inside the Docker image first creates a directory and writes a file into it.
To transfer the directory onto the host machine, I've mounted a datadir/ and then moved the directory created inside the image into the mounted directory, e.g.:
mkdir datadir
DATADIR=datadir/
docker run -i \
-v $(pwd)/$DATADIR:/$DATADIR/ ubuntu \
bash -c "mkdir /x1 && echo 'abc' > x1/test.txt && mv x1 $DATADIR"
But when I tried to access datadir/x1, it has root as the owner and it comes with read-only permissions:
$ mv datadir/x1/ .
mv: cannot move 'datadir/x1/' to './x1': Permission denied
$ ls -lah datadir/x1/
total 12K
drwxr-xr-x 2 root root 4.0K Jun 28 16:38 .
drwxrwxr-x 3 alvas alvas 4.0K Jun 28 16:38 ..
-rw-r--r-- 1 root root 4 Jun 28 16:38 test.txt
Is mounting the additional volume and copying the created directory inside the image the right approach to move files between the Docker image and the host machine? If not, what's the "canonical" way to perform the same operation?
About the directory permissions, what should be the correct way to assign the host machine permission to any files inside the mounted volume?
I've tried to chmod -R 777 inside the Docker image but I don't think that's the safe approach, i.e.:
$ docker run -i -v $(pwd)/$DATADIR:/$DATADIR/ -i ubuntu bash -c "mkdir /x1 && echo 'abc' > x1/test.txt && mv x1 $DATADIR && chmod -R 777 $DATADIR"
$ mv datadir/x1/ .
$ ls -lah x1
total 12K
drwxrwxrwx 2 root root 4.0K Jun 28 16:47 .
drwxrwxr-x 12 alvas alvas 4.0K Jun 28 16:47 ..
-rwxrwxrwx 1 root root 4 Jun 28 16:47 test.txt
To avoid permission issues, use docker cp.
For example:
# This is the directory you want to save the outputs
mkdir datadir
# We create a directory and a file inside it, inside the container,
# which we are naming "thisinstance"
docker run -i --name thisinstance ubuntu \
bash -c "mkdir /x1 && echo 'abc' > x1/test.txt"
# Copy the new directory out of the container onto the host.
docker cp thisinstance:/x1 datadir/
# Destroy the temporary container
docker rm thisinstance
# Check the ownership of the directory and file
ls -lah datadir/x1/
[out]:
drwxr-xr-x 3 alvas 679754705 102B Jun 29 10:36 ./
drwxr-xr-x 3 alvas 679754705 102B Jun 29 10:36 ../
-rw-r--r-- 1 alvas 679754705 4B Jun 29 10:36 test.txt
I need to combine these two commands in order to get a list of files matching the specified "filename", sorted by date.
I know that sorting files by date can be achieved with:
ls -lrt
and finding a file by name with
find . -name "filename*"
I don't know how to combine these two. I tried with a pipeline but I don't get the right result.
[EDIT]
Not sorted
find . -name "filename" -printf '%TY:%Tm:%Td %TH:%TM %h/%f\n' | sort
Forget xargs. find and sort are all the tools you need.
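A find-plus-sort pipeline can be sketched like this, assuming GNU find (for -printf) and files matching filename*: print each file's modification time as an epoch timestamp, sort numerically, then cut the timestamp back off.

```shell
# Two files with known, different modification times.
tmp=$(mktemp -d)
touch -t 202001010000 "$tmp/filename-old"
touch -t 202101010000 "$tmp/filename-new"

# %T@ prints the mtime as seconds since the epoch; sort -n orders on it,
# and cut strips the timestamp again, leaving the paths oldest-first.
find "$tmp" -name 'filename*' -printf '%T@ %p\n' | sort -n | cut -d' ' -f2-
rm -rf "$tmp"
```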
My best guess would be to use xargs:
find . -name 'filename*' -print0 | xargs -0 /bin/ls -ltr
There's an upper limit on the total size of the arguments, but it shouldn't be a problem unless they occupy more than about 32 kB, in which case you will get blocks of sorted files :)
find . -name "filename" -exec ls --full-time \{\} \; | cut -d' ' -f7- | sort
You might have to adjust the cut command depending on what your version of ls outputs.
1) List files in a directory by last-modified date/time
To list files with the most recently modified ones at the top, use the -lt option of ls.
$ ls -lt /run
output
total 24
-rw-rw-r--. 1 root utmp 2304 Sep 8 14:58 utmp
-rw-r--r--. 1 root root 4 Sep 8 12:41 dhclient-eth0.pid
drwxr-xr-x. 4 root root 100 Sep 8 03:31 lock
drwxr-xr-x. 3 root root 60 Sep 7 23:11 user
drwxr-xr-x. 7 root root 160 Aug 26 14:59 udev
drwxr-xr-x. 2 root root 60 Aug 21 13:18 tuned
https://linoxide.com/linux-how-to/how-sort-files-date-using-ls-command-linux/
I have an archive, created by someone else, and after I download it I want to automatically change permissions on a branch of the file system within the extracted files to gain read access. (I can't change how the archive is created.)
I've looked into this thread: chmod: How to recursively add execute permissions only to files which already have execute permission as into some others, but no joy.
The directories originally come with multiple but all wrong flags, they may appear as:
drwx------
d---r-x---
drwxrwxr-x
dr--r-xr--
Those are just the few I've discovered so far, but could be more.
find errors out when it tries to look into a directory with no x permission, and so doesn't pass it to chmod. What I've been doing so far is manually changing permissions on the parent directory, then going into the child directories and doing the same for them, and so on. But this is a lot of hand labour. Isn't there some way to do this automatically?
I.e. how I am doing it now:
do:
$ chmod -R +x
$ chmod -R +r
until I get no errors, then
$ find -type f -exec chmod -x {} +
But there must be a better way.
You can use chmod with the X mode letter (the capital X) to set the executable flag only for directories (and for files that already have an execute bit set).
In the example below the executable flag is cleared and then set for all directories recursively:
~$ mkdir foo
~$ mkdir foo/bar
~$ mkdir foo/baz
~$ touch foo/x
~$ touch foo/y
~$ chmod -R go-X foo
~$ ls -l foo
total 8
drwxrw-r-- 2 wq wq 4096 Nov 14 15:31 bar
drwxrw-r-- 2 wq wq 4096 Nov 14 15:31 baz
-rw-rw-r-- 1 wq wq 0 Nov 14 15:31 x
-rw-rw-r-- 1 wq wq 0 Nov 14 15:31 y
~$ chmod -R go+X foo
~$ ls -l foo
total 8
drwxrwxr-x 2 wq wq 4096 Nov 14 15:31 bar
drwxrwxr-x 2 wq wq 4096 Nov 14 15:31 baz
-rw-rw-r-- 1 wq wq 0 Nov 14 15:31 x
-rw-rw-r-- 1 wq wq 0 Nov 14 15:31 y
A bit of explanation:
chmod -x foo - clear the eXecutable flag for foo
chmod +x foo - set the eXecutable flag for foo
chmod go+x foo - same as above, but set the flag only for Group and Other users, don't touch the User (owner) permission
chmod go+X foo - same as above, but apply the flag only to directories (and to files that already have an execute bit), leaving other files alone
chmod -R go+X foo - same as above, but do this Recursively for all subdirectories of foo
You need read access, in addition to execute access, to list a directory. If you only have execute access, then you can find out the names of entries in the directory, but no other information (not even types, so you don't know which of the entries are subdirectories). This works for me:
find . -type d -exec chmod +rx {} \;
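This works because find processes each directory before descending into it, so the chmod it runs grants the read/execute access find needs for the next level down. A self-contained demonstration on a deliberately unreadable scratch tree:

```shell
# A tree whose middle directory has no permissions at all.
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b"
touch "$tmp/a/b/file"
chmod 000 "$tmp/a"

# find fixes 'a' first, which lets it descend and fix 'b' as well.
find "$tmp" -type d -exec chmod +rx {} \;
ls "$tmp/a/b"

chmod -R u+w "$tmp" && rm -rf "$tmp"   # clean up
```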
Try changing all the permissions at the same time:
chmod -R +xr
To make everything read/write/execute for the owner and read/execute for the group and everyone else:
chmod -R 0755
To make everything wide open:
chmod -R 0777
Adding execute permission, recursively, to all files (not folders) with the sh extension within the current folder:
find . -name '*.sh' -type f -print0 | xargs -0 chmod +x
* Notice the pipe (|); -print0 and -0 keep file names containing spaces intact
Give 0777 to all files and directories starting from the current path:
chmod -R 0777 ./
I have a folder that contains versions of my application. Each time I upload a new version, a new sub-folder is created for it; the sub-folder name is the current timestamp. Here is a printout of the main folder (ls -l | grep ^d):
drwxrwxr-x 7 root root 4096 2011-03-31 16:18 20110331161649
drwxrwxr-x 7 root root 4096 2011-03-31 16:21 20110331161914
drwxrwxr-x 7 root root 4096 2011-03-31 16:53 20110331165035
drwxrwxr-x 7 root root 4096 2011-03-31 16:59 20110331165712
drwxrwxr-x 7 root root 4096 2011-04-03 20:18 20110403201607
drwxrwxr-x 7 root root 4096 2011-04-03 20:38 20110403203613
drwxrwxr-x 7 root root 4096 2011-04-04 14:39 20110405143725
drwxrwxr-x 7 root root 4096 2011-04-06 15:24 20110406151805
drwxrwxr-x 7 root root 4096 2011-04-06 15:36 20110406153157
drwxrwxr-x 7 root root 4096 2011-04-06 16:02 20110406155913
drwxrwxr-x 7 root root 4096 2011-04-10 21:10 20110410210928
drwxrwxr-x 7 root root 4096 2011-04-10 21:50 20110410214939
drwxrwxr-x 7 root root 4096 2011-04-10 22:15 20110410221414
drwxrwxr-x 7 root root 4096 2011-04-11 22:19 20110411221810
drwxrwxr-x 7 root root 4096 2011-05-01 21:30 20110501212953
drwxrwxr-x 7 root root 4096 2011-05-01 23:02 20110501230121
drwxrwxr-x 7 root root 4096 2011-05-03 21:57 20110503215252
drwxrwxr-x 7 root root 4096 2011-05-06 16:17 20110506161546
drwxrwxr-x 7 root root 4096 2011-05-11 10:00 20110511095709
drwxrwxr-x 7 root root 4096 2011-05-11 10:13 20110511100938
drwxrwxr-x 7 root root 4096 2011-05-12 14:34 20110512143143
drwxrwxr-x 7 root root 4096 2011-05-13 22:13 20110513220824
drwxrwxr-x 7 root root 4096 2011-05-14 22:26 20110514222548
drwxrwxr-x 7 root root 4096 2011-05-14 23:03 20110514230258
I'm looking for a command that will leave the last 10 versions (sub-folders) and deletes the rest.
Any thoughts?
There you go:
ls -dt */ | tail -n +11 | xargs rm -rf
First list the directories with the most recently modified first, then take all of them except the first 10, and send the rest to rm -rf.
ls -dt1 /path/to/folder/*/ | sed -n '11,$p' | xargs rm -r
This assumes those are the only directories and that no others are present in the working directory.
The /*/ glob matches only directories and expands to their full paths; -d lists the directories themselves rather than their contents, -1 ensures one line per entry, and -t sorts by modification time with the newest at the top.
sed -n '11,$p' prints only the 11th line down to the last; those lines are then passed to xargs rm -r.
For testing you may wish to drop the final | xargs rm -r first, to check that the right directories are listed.
If the directories' names contain the date, one can delete all but the last 10 directories with the default alphabetical sort:
ls -d */ | head -n -10 | xargs rm -rf
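Here is a dry run of that pipeline on twelve synthetic version folders, with echo standing in for rm -rf; head -n -10 (a GNU extension) drops the last 10 lines of the listing, i.e. keeps only the entries older than the 10 newest:

```shell
# Twelve fake version directories whose names sort chronologically.
tmp=$(mktemp -d)
cd "$tmp"
for i in 01 02 03 04 05 06 07 08 09 10 11 12; do
    mkdir "202205${i}120000"
done

# Alphabetical order equals chronological order here, so head -n -10
# keeps everything except the 10 newest entries.
ls -d */ | head -n -10 | xargs echo would remove:
cd / && rm -rf "$tmp"
```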
ls -lt | grep ^d | sed -e '1,10d' | awk '{sub(/.* /, ""); print }' | xargs rm -rf
Explanation:
list all contents of current directory in chronological order (most recent files first)
keep only the directories
ignore the 10 first lines / directories
use awk to extract the file names from the remaining 'ls -l' output
remove the files
EDIT:
find . -maxdepth 1 -type d ! -name . | sort | tac | sed -e '1,10d' | xargs rm -rf
I suggest the following sequence. I use a similar approach on my Synology NAS to delete old backups. It doesn't rely on the folder names, instead it uses the last modified time to decide which folders to delete. It also uses zero-termination in order to correctly handle quotes, spaces and newline characters in the folder names:
find /path/to/folder -maxdepth 1 -mindepth 1 -type d -printf '%Ts\t' -print0 \
| sort -rnz \
| tail -n +11 -z \
| cut -f2- -z \
| xargs -0 -r rm -rf
IMPORTANT: This will delete any matching folders! I strongly recommend doing a test run first by replacing the last command xargs -0 -r rm -rf with xargs -0 which will echo the matching folders instead of deleting them.
A short explanation of each step:
find /path/to/folder -maxdepth 1 -mindepth 1 -type d -printf '%Ts\t' -print0
Find all directories (-type d) directly inside the backup folder (-maxdepth 1) except the backup folder itself (-mindepth 1), print (-printf) the Unix time (%Ts) of the last modification followed by a tab character (\t, used in step 4) and the full file name followed by a null character (-print0).
sort -rnz
Sort the zero-terminated items (-z) from the previous step using a numerical comparison (-n) and reverse the order (-r). The result is a list of all folders sorted by their last modification time in descending order.
tail -n +11 -z
Print the last lines (tail) from the previous step starting from line 11 (-n +11) considering each line as zero-terminated (-z). This excludes the newest 10 folders (by modification time) from the remaining steps.
cut -f2- -z
Cut each line from the second field to the end (-f2-), treating each line as zero-terminated (-z), to obtain a list containing the full path of every folder except the 10 newest.
xargs -r -0 rm -rf
Take the zero-terminated (-0) items from the previous step (xargs), and, if there are any (-r avoids running the command passed to xargs if there are no nonblank characters), force delete (rm -rf) them.
Your directory names are sorted in chronological order, which makes this easy. The list of directories in chronological order is just *, or [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] to be more precise. So you want to delete all but the last 10 of them.
set -- [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]/
while [ $# -gt 10 ]; do
rm -rf "$1"
shift
done
(While there are more than 10 directories left, delete the oldest one.)