What does the command chmod -r do? - linux

What does this command do in a Linux terminal?
chmod -r /home/daria/photos/
I'm asking because the command produced no error.

chmod is a utility used to change the permissions of a file or directory. You can use the ls -l /path/to/file command to observe the changes chmod makes.
❯ echo "XYZ" > /tmp/abc # Create a new file named abc
❯ ls -l /tmp/abc # List the permissions of /tmp/abc
-rw-r--r-- 1 abdulkarim wheel 4B Apr 3 13:17 /tmp/abc
❯ cat /tmp/abc # Display the contents of the file
XYZ
❯ chmod -r /tmp/abc # remove read permissions for User, Group and Others
❯ ls -l /tmp/abc # Notice the read perms are gone
--w------- 1 abdulkarim wheel 4B Apr 3 13:17 /tmp/abc
❯ cat /tmp/abc # We can no longer cat the file!
cat: /tmp/abc: Permission denied
So, the command chmod -r /path/to/file revokes read permission for everyone; similarly, chmod +r grants read permission to everyone.
The man page for chmod does not spell this out, which makes it confusing for some users, but once you know this, you cannot un-know it :)
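For the record, here is the equivalent symbolic spelling (a small sketch; per POSIX/GNU chmod semantics, -r without a "who" behaves like a-r except that permission bits set in your umask are left untouched, so naming the "who" explicitly is unambiguous):
❯ chmod a-r /tmp/abc # explicitly remove read for user, group and others
❯ chmod u+rw,go+r /tmp/abc # restore the usual rw-r--r-- permissions symbolically
❯ ls -l /tmp/abc
-rw-r--r-- 1 abdulkarim wheel 4B Apr 3 13:17 /tmp/abc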

Related

Why can't root write to a file that it owns and has write access to?

I need to write to a.txt. The file is owned by root with read-write access, but I still cannot write to it with sudo. Why?
% ls -l
total 8
-rw-r--r-- 1 root staff 6 Mar 24 00:30 a.txt
% sudo echo "hi" >> a.txt
zsh: permission denied: a.txt
The redirection is performed by the shell before the commands are run, i.e. as the original, unprivileged user.
Work-around:
sudo sh -c 'echo "hi" >> a.txt'
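Another common work-around is to let a root process do the writing instead, for example with tee (-a appends; redirecting stdout just silences tee's echo of the input):
echo "hi" | sudo tee -a a.txt > /dev/null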

Why is the file not created as owned by the specific user I designated?

I have a PHP script that collects data and writes a log to a file. The directory belongs to a user called 'ingestion' and a group called 'ingestion'. I was using the command
sudo -u ingestion php [script] &>> /var/log/FOLDER/adapter.log
The owner and group of FOLDER are ingestion. However, the created adapter.log still belongs to the root user and root group. How is this possible?
Your file is created by the shell running as root, not by the process that you run via sudo as ingestion.
That's because the >> foo redirection is part of the command line, not of the process started by sudo.
Here:
#foo.sh
echo foo: `id -u`
Then:
tmp root# sudo -u peter bash foo.sh > foo
tmp root# ls -l foo
-rw------- 1 root staff 9 Mar 2 18:52 foo
tmp root# cat foo
foo: 501
You can see that the file is created as root but the foo.sh script is run as uid 501.
You can fix this by running e.g.:
tmp root# sudo -u peter bash -c "bash foo.sh > foo"
tmp root# ls -l foo
-rw------- 1 peter staff 9 Mar 2 18:54 foo
tmp root# cat foo
foo: 501
In your case, of course, replace "..." with your php command.
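Applied to the question, that would look something like this (a sketch; [script] is the asker's placeholder, and &>> appends both stdout and stderr, as in the original command):
sudo -u ingestion bash -c 'php [script] &>> /var/log/FOLDER/adapter.log'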

Running a cron job every month, with the command's results written to a file

I am trying a cron job for the first time. I want to generate a file listing the user-installed applications on Ubuntu; the same file then needs to be uploaded to a server.
I am unable to generate the text file with that information. Below is the command I am trying.
The script that needs to be run for the cron job, /tmp/aptlist.sh:
#!/bin/bash
comm -23 <(apt-mark showmanual | sort -u) <(gzip -dc /var/log/installer/initial-status.gz | sed -n 's/^Package: //p' | sort -u) &> /tmp/$(hostname)-$(date +%d-%m-%Y-%S)
cron has the following entry, added using crontab -e:
:~$ crontab -l
0 0 1 * * /tmp/aptlist.sh > /dev/null 2>&1
syslog has the following entries; however, no file is generated:
Oct 21 14:09:01 Astrome46 CRON[14592]: (user) CMD (/tmp/aptlist.sh > /dev/null 2>&1)
Oct 21 14:10:01 Astrome46 CRON[14600]: (user) CMD (/tmp/aptlist.sh > /dev/null 2>&1)
Kindly let me know how to fix the issue.
Thank You
Try this:
0 0 1 * * bash /tmp/aptlist.sh > /dev/null 2>&1
If this works then I suspect it is because the file doesn't have executable permissions.
You can find that out by typing in the terminal:
ls -l /tmp/aptlist.sh
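If the execute bits are missing, the listing will look something like this (hypothetical output; owner, size and date will differ on your machine):
-rw-r--r-- 1 user user 215 Oct 21 14:05 /tmp/aptlist.sh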
If that is really the case then you can also edit the file permissions to allow it to run by typing:
chmod u+x /tmp/aptlist.sh
This will enable the file's owner to run it, but not other users. If you need it to be runnable by other users, do:
chmod a+x /tmp/aptlist.sh
Good luck!

"echo "password" | sudo -S <command>" asks for password

I am trying to run a script without becoming the su user, and I use this command for that:
echo "password" | sudo -S <command>
If I use this with commands such as scp, mv or whoami, it works very well, but when I use it with chmod, it asks for my user's password. I don't enter the password and the command still works; my problem is simply that the system asks for the password at all. I don't want it to ask.
The problem looks like this:
[myLocalUser#myServer test-dir]$ ls -lt
total 24
--wx-wx-wx 1 root root 1397 May 26 12:12 file1
--wx-wx-wx 1 root root 867 May 26 12:12 script1
--wx-wx-wx 1 root root 8293 May 26 12:12 file2
--wx-wx-wx 1 root root 2521 May 26 12:12 file3
[myLocalUser#myServer test-dir]$ echo "myPassw0rd" | sudo -S chmod 111 /tmp/test-dir/*
[sudo] password for myLocalUser: I DONT WANT ASK FOR PASSWORD
[myLocalUser#myServer test-dir]$ ls -lt
total 24
---x--x--x 1 root root 1397 May 26 12:12 file1
---x--x--x 1 root root 867 May 26 12:12 script1
---x--x--x 1 root root 8293 May 26 12:12 file2
---x--x--x 1 root root 2521 May 26 12:12 file3
You can use the sudoers file, located at /etc/sudoers, to allow specific users to execute commands as root without a password.
myLocalUser ALL=(ALL) NOPASSWD: /bin/chmod
With this line, the user myLocalUser can execute chmod as root without a password being needed.
But this also weakens part of the system's security, so take care not to allow too much and fence the rule in as much as possible.
sudoers information
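Always edit the file with visudo, which validates the syntax before saving, and consider pinning the allowed arguments to fence the rule in further (the argument pattern below is an assumption based on the question's example; note that wildcards in sudoers arguments have their own pitfalls):
sudo visudo
# then add, for example:
myLocalUser ALL=(ALL) NOPASSWD: /bin/chmod 111 /tmp/test-dir/*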
sudo -S prints its prompt to stderr.
If you don't want to see it, redirect stderr to /dev/null.
The following command redirects stderr at the local host:
echo <password> | ssh <server> sudo -S ls 2>/dev/null
It is equivalent to echo <password> | ssh <server> "sudo -S ls" 2>/dev/null
The following command redirects stderr at the remote server:
echo <password> | ssh <server> "sudo -S ls 2>/dev/null"
If you need to keep stderr but hide the [sudo] password for ... prompt, you can use process substitution on the local or the remote machine. Since the sudo prompt ends without a newline, I use sed to cut the prompt out rather than drop the whole line; this preserves the first real line of the process's stderr.
# local filtering
echo <password> | ssh <server> "sudo -S ls" 2> >(sed -e 's/^.sudo[^:]\+: //')
# remote filtering
echo <password> | ssh <server> "sudo -S ls 2> >(sed -e 's/^.sudo[^:]\+: //')"

rsync prints "skipping non-regular file" for what appears to be a regular directory

I back up my files using rsync. Right after a sync, I ran it expecting to see nothing, but instead it looked like it was skipping directories. I've (obviously) changed names, but I believe I've still captured all the information I could. What's happening here?
$ ls -l /source/backup/myfiles
drwxr-xr-x 2 me me 4096 2010-10-03 14:00 foo
drwxr-xr-x 2 me me 4096 2011-08-03 23:49 bar
drwxr-xr-x 2 me me 4096 2011-08-18 18:58 baz
$ ls -l /destination/backup/myfiles
drwxr-xr-x 2 me me 4096 2010-10-03 14:00 foo
drwxr-xr-x 2 me me 4096 2011-08-03 23:49 bar
drwxr-xr-x 2 me me 4096 2011-08-18 18:58 baz
$ file /source/backup/myfiles/foo
/source/backup/myfiles/foo/: directory
Then I sync (expecting no changes):
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
And here's the weird part:
$ echo 'hi' > /source/backup/myfiles/foo/test
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
backup/myfiles/foo
backup/myfiles/foo/test
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
So it worked:
$ ls -l /source/backup/myfiles/foo
-rw-r--r-- 1 me me 3126091 2010-06-15 22:22 IMGP1856.JPG
-rw-r--r-- 1 me me 3473038 2010-06-15 22:30 P1010615.JPG
-rw-r--r-- 1 me me 3 2011-08-24 13:53 test
$ ls -l /destination/backup/myfiles/foo
-rw-r--r-- 1 me me 3126091 2010-06-15 22:22 IMGP1856.JPG
-rw-r--r-- 1 me me 3473038 2010-06-15 22:30 P1010615.JPG
-rw-r--r-- 1 me me 3 2011-08-24 13:53 test
but still:
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
Other notes:
My actual directories "foo" and "bar" do have spaces in their names, but no other strange characters; other directories with spaces have no problem. I stat-ed the directories and saw no differences between the ones that don't rsync and the ones that do.
If you need more information, just ask.
Are you absolutely sure those individual files are not symbolic links?
Rsync has a few useful flags such as -l which will "copy symlinks as symlinks". Adding -l to your command:
rsync -rtvpl /source/backup /destination
I believe symlinks are skipped by default because they can be a security risk. Check the man page or --help for more info on this:
rsync --help | grep link
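On a typical build the relevant lines look something like this (abbreviated, hypothetical output; the exact wording varies by rsync version):
 -l, --links                 copy symlinks as symlinks
 -L, --copy-links            transform symlink into referent file/dir
 -K, --keep-dirlinks         treat symlinked dir on receiver as dir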
To verify that these are symbolic links, or to proactively find symbolic links, you can use file or find:
$ file /path/to/file
/path/to/file: symbolic link to `/path/file`
$ find /path -type l
/path/to/file
Are you absolutely sure that it's not a symlinked directory?
Try:
file /source/backup/myfiles/foo
to make sure it's a directory.
Also, it could very well be a loopback mount. Try:
mount
and make sure that /source/backup/myfiles/foo is not listed.
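To narrow the output down, grep for the path in question:
mount | grep '/source/backup/myfiles/foo'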
You should try the command below; most probably it will work for you, since -a implies -rlptgoD and therefore includes -l, which copies symlinks as symlinks:
rsync -ravz /source/backup /destination
You can try the following; it should work:
rsync -rtvp /source/backup /destination
I personally always use this syntax in my script, and it works a treat for backing up the entire system (skipping sys/*, proc/* and nfs4/*):
sudo rsync --delete --stats --exclude-from $EXCLUDE -rlptgoDv / $TARGET/ | tee -a $LOG
Here is my script run by root's cron daily:
#!/bin/bash
#
NFS="/nfs4"
HOSTNAME=`hostname`
TIMESTAMP=`date "+%Y%m%d_%H%M%S"`
EXCLUDE="/home/gcclinux/Backups/root-rsync.excludes"
TARGET="${NFS}/${HOSTNAME}/SYS"
LOGDIR="${NFS}/${HOSTNAME}/SYS-LOG"
CMD=`/usr/bin/stat -f -L -c %T ${NFS}`
## CHECK IF NFS IS MOUNTED...
if [[ ! $CMD == "nfs" ]]; then
    echo "NFS NOT MOUNTED"
    exit 1
fi
## CHECK IF LOG DIRECTORY EXISTS
if [ ! -d "$LOGDIR" ]; then
    /bin/mkdir -p "$LOGDIR"
fi
## CREATE LOG HEADER
LOG=$LOGDIR/"rsync_result."$TIMESTAMP".txt"
echo "-------------------------------------------------------" | tee -a $LOG
echo `date` | tee -a $LOG
echo "" | tee -a $LOG
## START RUNNING BACKUP
/usr/bin/rsync --delete --stats --exclude-from $EXCLUDE -rlptgoDv / $TARGET/ | tee -a $LOG
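For completeness, the excludes file passed via $EXCLUDE could look something like this (hypothetical contents, one pattern per line, matching the directories the answer says it skips):
# /home/gcclinux/Backups/root-rsync.excludes
/sys/*
/proc/*
/nfs4/*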
In some cases, just copying the file to another location (such as your home directory) and trying again helps.
