change permissions on a non-writable file - Linux

UPDATE:
I moved my question to the Ask Ubuntu community, but cannot delete it from here... if you have an answer, please share it on Ask Ubuntu, not here... Thanks
I want to make a change to a file, but I can't because I don't have the correct permissions:
➜ ls -l pycharm64.vmoptions
-rw-r--r-- 1 root root 427 Dec 28 18:33 pycharm64.vmoptions
I tried to change the permissions with these two commands:
sudo chmod a+w pycharm64.vmoptions
and
sudo chown user:user pycharm64.vmoptions
but I get an error both times:
Read-only file system
How can I make a change to my file? (Honestly, I don't care about the owner and group of the file... I just want to change my file.)
P.S.: my OS is Ubuntu

A file can be made read-only by setting the "immutable" attribute:
chattr +i [fileName]
If you want to revert it, just change the "+" to a "-":
chattr -i [fileName]
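A minimal check (assuming the same pycharm64.vmoptions file from the question) to see whether the immutable attribute is currently set; an 'i' among the flags means it is:
lsattr pycharm64.vmoptions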

Your filesystem could be mounted read-only. You have to remount it read-write before you can write anything to it; changing file permissions also requires writing to the filesystem.
You may be able to mount it read-write with a command like:
sudo mount -o remount,rw /dev/foo /mount/destination/dir
This command specifies that you want to remount the filesystem with different options, adding read-write (rw) capability.
If you succeed in remounting the filesystem read-write, you should be able to change the file permissions with the commands you tried earlier.
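To find out which device and mount point to use in the remount command, and whether the mount really is read-only, a quick check like this may help (a sketch; /path/to/ is a placeholder for wherever the file actually lives):
findmnt --target /path/to/pycharm64.vmoptions
The OPTIONS column will show ro if the containing filesystem is mounted read-only.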

You can't edit it directly (I'm not sure about Windows).
You should edit the custom settings file instead:
Manually
nano ~/.config/JetBrains/PyCharm2022.3/pycharm64.vmoptions
or from the IDE -- https://intellij-support.jetbrains.com/hc/en-us/articles/206544869.

reading jar file error

I was trying to run Tomcat, and I think these red files are causing problems.
Does anyone know why I can't read .jar files?
Do a sudo ls -la in the same directory to check the permissions of these files.
Maybe you don't have proper read permissions for the files shown in red. To give read access, use a command like sudo chmod ug+r bootstrap.jar. To give write access too, use sudo chmod ug+rw bootstrap.jar.
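If several .jar files in that directory are affected, a sketch of applying the same change to all of them at once (assuming you run it from Tomcat's bin or lib directory):
sudo chmod ug+r *.jar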

path /tmp does not correspond to a regular file

this happens when I have
an executable that is in the /tmp directory (say /tmp/a.out)
it is run by a root shell
linux
selinux on (default for RedHat, CentOS, etc)
Apparently, trying to run an executable that sits in the /tmp directory as root revokes the privileges. Any idea how to get around this issue, other than turning off SELinux? Thanks
You can set the file context on the binary (or the directory containing it) in /tmp that you want to run.
sudo semanage fcontext -a -t bin_t /tmp/location
Then restorecon:
sudo restorecon -vR /tmp/location
Just have a look at the mount options for the /tmp directory; most probably you have the noexec option on it (there are many security reasons for doing that, the first being that anyone can put a file in the /tmp directory).
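A sketch of how to check for noexec on /tmp and temporarily remount without it (the remount lasts only until reboot, and dropping noexec on /tmp has the security implications mentioned above):
findmnt -no OPTIONS /tmp
sudo mount -o remount,exec /tmp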

zsh compinit: insecure directories. Compaudit shows /tmp directory

I'm running zsh on a Raspberry Pi 2 (Raspbian Jessie). zsh compinit is complaining about the /tmp directory being insecure. So, I checked the permissions on the directory:
$ compaudit
There are insecure directories:
/tmp
$ ls -ld /tmp
drwxrwxrwt 13 root root 16384 Apr 10 11:17 /tmp
Apparently anyone can do anything in the /tmp directory, which makes sense, given its purpose. So I tried the suggestions in this Stack Overflow question. I also tried similar suggestions on other sites. Specifically, it suggests turning off group write permissions on that directory. Because of how the permissions looked according to ls -ld, I had to turn off the 'all' write permissions as well. So:
$ sudo su
% chmod g-w /tmp
% chmod a-w /tmp
% exit
$ compaudit
# nothing shows up, zsh is happy
This shut zsh up. However, other programs started to break. For example, gnome-terminal would crash whenever I typed the letter 'l'. Because of this, I had to turn the write permissions back on, and just run compinit -u in my .zshrc.
What I want to know: is there any better way to fix this? I'm not sure that it's a great idea to let compinit use an insecure directory. My dotfiles repo is hosted here, and the file where I now run compinit -u is here.
First, the original permissions on /tmp were correct. Make sure you've restored them correctly: ls -ld /tmp must start with drwxrwxrwt. You can use sudo chmod 1777 /tmp to set the correct permissions. /tmp is supposed to be writable by everyone, and any other permissions are highly likely to break stuff.
compaudit complains about directories in fpath, so one of the directories in your fpath is of the form /tmp/… (not necessarily /tmp itself). Check how fpath is being set. Normally the directories in fpath should be only subdirectories of the zsh installation directory, and places in your home directory. A subdirectory of /tmp wouldn't get in there without something unusual on your part.
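A quick way to inspect fpath from a zsh prompt and spot any stray /tmp entry (just a sketch; nothing here is specific to your dotfiles):
print -rl -- $fpath
print -rl -- $fpath | grep '^/tmp'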
If you can't find out where the stray directory is added to fpath, run zsh -x 2>zsh-x.log, and look for fpath in the trace file zsh-x.log.
It can be safe to use a directory under /tmp, but only if you created it securely. The permissions on /tmp allow anybody to create files, but users can only remove or rename their own files (that's what the t at the end of the permissions means). So if a directory is created safely (e.g. with mktemp -d), it's safe to use it in fpath. compaudit isn't sophisticated enough to recognize this case, and in any case it wouldn't have enough information since whether the directory is safe depends on how it was created.

How come my Apache can only access root owned files?

Running Apache on CentOS 6.4, and my web server can't see any files unless the root user creates or copies them.
ps aux | grep apache shows that Apache is running as the apache user, not root.
I tried chown apache:apache on the files.
I even set chmod 777 on the files.
-rwxrwxrwx. 1 apache apache 2300 May 15 17:46 example.php
I still get an http 500 error, what else could be wrong?
Also, even if I chown the file to root:root, it will not work; I need to actually cp file.php file.php as root before it will work. I don't get it!
chcon -t httpd_sys_content_t example.php gets me there! - thanks Chris.
Does this mean I need to change my FTP user's Security Context settings so they can upload files like this or do I need to change a rule in SELinux to allow a wider range of files to execute?
SELinux might be the problem here.
Please do ls -lZ example.php
To rule out SELinux you can:
getenforce
then
setenforce 0
And try accessing this file again...
That will temporarily put SELinux in permissive mode.
You might have to change the context of the file! Let us know how it goes and we will take it from there.
Update:
As expected, SELinux was stopping Apache from accessing that file. If you trust this file, you can change its context:
chcon -v --type=httpd_sys_content_t example.php
If there is more than one file, you could use the -R flag, so:
chcon -vR --type=httpd_sys_content_t /html/
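Note that chcon changes do not survive a filesystem relabel or a later restorecon; a sketch of making the context persistent instead (the /html path is just the example from above):
sudo semanage fcontext -a -t httpd_sys_content_t '/html(/.*)?'
sudo restorecon -Rv /html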
As you have noticed, ls has the -Z flag to show the SELinux context. You can try using this flag with other programs, ps for example.
To troubleshoot SELinux problems I recommend sealert - part of setroubleshoot-server.
How did I know that you are most likely using SELinux? Your filesystem is labeled.
How did I know that your fs is labeled? -rwxrwxrwx. - the dot at the end of the permissions tells you that the fs is labeled.
Don't forget to change the permissions! You really don't want 777...
Hope that helps.
If you have enabled suPHP, then files with 777 permissions will not work and will give a 500 error; change the permissions to 644.
Also check the error log if you are still facing the same issue.
Why are you trying 'cp file.php file.php' with the same name? To copy, use another name as below, or copy to another location where file.php does not exist.
cp file.php file.php-bak
or
cp file.php another-dir/file.php

rsync - mkstemp failed: Permission denied (13) [closed]

I have the following setup to periodically rsync files from server A to server B. Server B has the rsync daemon running with the following configuration:
read only = false
use chroot = false
max connections = 4
syslog facility = local5
log file = /var/adm/rsyncd.log
munge symlinks = false
secrets file = /etc/rsyncd.secrets
numeric ids = false
transfer logging = true
log format = %h %o %f %l %b
[BACKUP]
path = /path/to/archive
auth users = someuser
From server A I am issuing the following command:
rsync -adzPvO --delete --password-file=/path/to/pwd/file/pwd.dat /dir/to/be/backedup/ someuser@192.168.100.100::BACKUP
BACKUP directory is fully read/write/execute to everyone. When I run the rsync command from server A, I see:
afile.txt
989 100% 2.60kB/s 0:00:00 (xfer#78, to-check=0/79)
for each and every file in the directory I wish to back up. It fails when it gets to writing tmp files:
rsync: mkstemp "/.afile.txt.PZQvTe" (in BACKUP) failed: Permission denied (13)
Hours of googling later and I still can't resolve what seems to be a very simple permission issue. Advice? Thanks in advance.
Additional Information
I just noticed the following occurs at the beginning of the process:
rsync: failed to set permissions on "/." (in BACKUP): Permission denied (13)
Is it trying to set permission on "/"?
Edit
I am logged in as the user someuser. My destination directory has full read/write/execute permission for everyone, including its contents. In addition, the destination directory is owned by someuser and is in someuser's group.
Follow up
I've found that using SSH solves this.
Make sure the user you're rsync'd into on the remote machine has write access to the contents of the folder AND the folder itself, as rsync tries to update the modification time on the folder itself.
Even though you got this working, I recently had a similar encounter and no SO or Google searching was of any help, as they all dealt with basic permission issues, whereas the solution below is a rather obscure setting that you wouldn't even think to check in most situations.
One thing to check for with a permission-denied error: I recently had issues with rsync myself where permissions were exactly the same on both servers, including owner and group, but rsync transfers worked one way on one server and not the other way.
It turned out the problem server, the one I was getting permission denied from, had SELinux enabled, which overrides POSIX permissions on files/folders. So even though the folder in question could have been 777 and the command run as root, SELinux would still override those permissions, producing the "permission denied" error from rsync.
You can run the command getenforce to see if SELinux is enabled on the machine.
In my situation I ended up disabling SELinux completely, because it wasn't needed, it was already disabled on the server that was working fine, and it only caused problems when enabled. To disable it permanently, open /etc/selinux/config and set SELINUX=disabled. To disable it temporarily, you can run setenforce 0, which puts SELinux into permissive rather than enforcing mode, so it prints warnings instead of enforcing.
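A sketch of the two steps just described (the sed one-liner assumes the stock /etc/selinux/config layout with a SELINUX=enforcing line):
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config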
The rsync daemon by default uses nobody/nogroup for all modules if it is running under the root user. So you either need to set the uid and gid parameters to the user you want, or set them to root/root.
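A minimal sketch of what that could look like in the module from the question (someuser is the account from the original post; use whatever account should own the destination files):
[BACKUP]
path = /path/to/archive
auth users = someuser
uid = someuser
gid = someuser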
I encountered the same problem and solved it by chown-ing the destination folder to the current user. The current user did not have permission to read, write, and execute the destination folder's files. Try adding the permission with chmod a+rwx <folder/file name>.
This might not suit everyone since it does not preserve the original file permissions, but in my case it was not important, and it solved the problem for me. rsync has an option --chmod:
--chmod    This option tells rsync to apply one or more comma-separated "chmod" strings to the permission of the files in the transfer. The resulting value is treated as though it was the permissions that the sending side supplied for the file, which means that this option can seem to have no effect on existing files if --perms is not enabled.
This forces the permissions to be what you want on all files/directories. For example:
rsync -av --chmod=Du+rwx SRC DST
would add Read, Write and Execute for the user to all transferred directories.
I had a similar issue, but in my case it was because the storage only offers SFTP, with no ssh or rsync daemons on it. I could not change anything, because this server was provided by my customer.
rsync could not change the date and time of the file; some other utilities (like csync) showed me other errors: "Unable to create temporary file: Clock skew detected".
If you have access to the storage server, just install openssh-server or launch rsync as a daemon there.
In my case I could not do this, and the solution was lftp.
lftp's usage for synchronization is below:
lftp -c "open -u login,password sftp://sft.domain.tld/; mirror -c --verbose=9 -e -R -L /src/folder /rem/folder"
/src/folder is the folder on my PC; /rem/folder is sftp://sft.domain.tld/rem/folder.
You can find the man page at lftp.yar.ru/lftp-man.html
Windows: Check the permissions of the destination folders. Take ownership if you must, to give rights to the account running the rsync service.
I had the same issue on CentOS 7. I went through a lot of articles and forums but couldn't find the solution.
The problem was with SELinux. Disabling SELinux at the server end worked.
Check the SELinux status at the server end (from where you are pulling data using rsync).
Commands to check SELinux status and disable it:
$getenforce
Enforcing ## this means SELinux is enabled
$setenforce 0
$getenforce
Permissive
Now try running the rsync command at the client end; it worked for me.
All the best!
I have a CentOS 7 server with rsyncd on board:
/etc/rsyncd.conf
[files]
path = /files
By default, SELinux blocks rsyncd's access to the /files folder.
# this sets needed context to my /files folder
sudo semanage fcontext -a -t rsync_data_t '/files(/.*)?'
sudo restorecon -Rv '/files'
# sets needed booleans
sudo setsebool -P rsync_client 1
Disabling SELinux is an easy, but not a good, solution.
I had the same issue, so I first SSHed into the server to confirm that I was able to log in, using the command:
ssh -i /Users/Desktop/mypemfile.pem user@ec2.compute-1.amazonaws.com
Then, in a new terminal,
I copied a small file to the server using SCP, to make sure I was able to make a connection:
scp -i /Users/Desktop/mypemfile.pem /Users/Desktop/test.file user@ec2.compute-1.amazonaws.com:/home/user/test/
Then, in the same terminal, I tried running rsync:
rsync -avz -e "ssh -i /Users/Desktop/mypemfile.pem" /Users/Desktop/backup/image.img.gz user@ec2.compute-1.amazonaws.com:
If you're on a Raspberry Pi or another Unix system using sudo, you need to tell the remote machine where the rsync and sudo programs are located.
I put in the full path to be safe.
Here's my example:
rsync --stats -paogtrh --progress --omit-dir-times --delete --rsync-path='/usr/bin/sudo /usr/bin/rsync' /mnt/drive0/ pi@192.168.10.238:/mnt/drive0/
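For --rsync-path='... sudo ...' to work non-interactively, the remote account typically needs passwordless sudo for rsync; a sketch of a sudoers entry, added with visudo (the pi user and the /usr/bin/rsync path are just the values from the example above):
pi ALL=(ALL) NOPASSWD: /usr/bin/rsync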
I imagine a common error not currently mentioned above is trying to write to a mount point (e.g., /media/drivename) when the partition isn't mounted. That will produce this error as well.
If it's an encrypted drive that is set to auto-mount but doesn't, the issue might be that the encrypted partition needs to be unlocked before you attempt to write to the path where it is supposed to be mounted.
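A quick sanity check before running rsync (just a sketch; /media/drivename is the example path from above):
mountpoint -q /media/drivename || echo "/media/drivename is not mounted"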
I had the same error while syncing files inside a Docker container where the destination was a mounted volume (Docker for Mac); I ran rsync via su-exec <user>. I was able to resolve it by running rsync as root with the -og flags (keep owner and group for destination files).
I'm still not sure what caused the issue; the destination permissions were OK (I ran chown -R <user> on the destination dir before rsync). Perhaps it is somehow related to Docker for Mac's slow filesystem.
Pay attention to -e ssh and jenkins@localhost: in the next example:
rsync -r -e ssh --chown=jenkins:admin --exclude .git --exclude Jenkinsfile --delete ./ jenkins@localhost:/home/admin/web/xxx/public
That helped me.
P.S. Today I realized that when you change (add) the jenkins user to some group, the permission only applies after a slave (agent) restart. My solution (-e ssh and jenkins@localhost:) is only needed when you can't restart the agent/server.
Yet another way to get this symptom: I was rsync'ing from a remote machine over ssh to a Linux box with an NTFS-3G (FUSE) filesystem. Originally the filesystem was mounted at boot time and thus owned by root, and I was getting this error message when I did an rsync push from the remote machine. Then, as the user to which the rsync is pushed, I did:
$ sudo umount /shared
$ mount /shared
and the error messages went away.
The user and group of the destination directory and its subdirectories should match the user performing the transfer.
If the user is 'abc', then the destination directory should look like:
lrwxrwxrwx 1 abc abc 34 Jul 18 14:05 Destination_directory
Command: chown abc:abc Destination_directory
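If the subdirectories need fixing too, a recursive variant (same hypothetical abc user and directory name as above):
chown -R abc:abc Destination_directory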
Surprisingly, nobody has mentioned the all-powerful sudo.
Had the same problem, and sudo fixed it.
Running ssh with root access could solve this problem,
or chmod 0777 /dir/to/be/backedup/
or chown username:user /dir/to/be/backedup/
