Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 2 days ago.
I've mounted an SSD to /mnt/SATA-SSD which has a single exfat partition.
I'd like to make the permissions for the mount point "drwxrwxrwx", but no matter what I try, the permissions won't change from "drwxr-xr-x".
I tried going into the terminal and doing "sudo chmod 777 /mnt/SATA-SSD" which resulted in the permissions remaining at "drwxr-xr-x". I've also tried the same command as root.
I also tried in Dolphin to change the permissions and the write permissions were greyed out.
I'm using Kubuntu 22.10.
It's a drive full of data so I don't really want to reformat it.
I've mounted a few other drives (1. NTFS HDD 2. NTFS SSD 3. NTFS External SSD) in the same way and they don't seem to have this issue. I can't imagine the filesystem format is the issue, but at this point I have no idea.
I'm kind of at a loss for how this isn't working. Is there a more forceful way than chmod to change the permissions, or could there be some other reason why chmod isn't changing them?
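For context: exFAT stores no POSIX ownership or permission metadata, so the mode shown on an exFAT mount is synthesized from mount options rather than stored on disk, and chmod has nothing to write to. A sketch of an /etc/fstab entry that would make the mount world-writable (the UUID and the uid/gid values are placeholders, not taken from the question):

```
UUID=XXXX-XXXX  /mnt/SATA-SSD  exfat  defaults,uid=1000,gid=1000,umask=000  0  0
```

After adding such a line, unmounting and remounting /mnt/SATA-SSD applies the new modes without a reboot.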
Closed 2 years ago.
Problem:
I need to use an external drive (encrypted ext4) to share files between two different Ubuntu 16.04 machines (home and work).
However, the machines also have different user name account logins ("home", "work").
I cannot figure out how to give both accounts access to files created by both accounts.
Code run:
I ran the nuclear option from the work account (below), which I thought would achieve this, but on the home machine I still don't have permission to access directories created on the work machine.
sudo chown -R $USER /media/$USER/SSD-1TB
sudo chmod -R 0777 /media/$USER/SSD-1TB
Desired outcome:
Read/write permissions on an external drive for any user account from any Ubuntu machine that I plug it into.
Thanks!
Check your umask value. More info: https://www.cyberciti.biz/tips/understanding-linux-unix-umask-value-usage.html
umask is used for setting the default file permissions. The issue with your approach above is that you updated existing files to 0777, but new files are still created with the default. I recommend updating both the "work" and "home" users to use the same primary group; then you can set umask 002, which causes new files to be created as 0664 and therefore readable and writable by the group on both machines.
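The effect of umask on newly created files can be seen in a scratch directory (a minimal sketch; `stat -c %a` prints the octal mode on Linux):

```shell
cd "$(mktemp -d)"          # work in a throwaway directory
umask 002                  # clear only the "other write" bit for new files
touch shared.txt           # created as 0666 & ~002 = 0664
stat -c '%a' shared.txt    # prints 664: group members can read and write
```

With both accounts in the same primary group, files created this way on one machine are editable from the other.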
Closed 2 years ago.
I am trying Ubuntu Linux on my Windows system. I am new to Ubuntu, and I don't know where I can find the files that were on Windows 10.
In case you mean WSL Ubuntu:
Your hard drives are mounted under /mnt/. I like to create symbolic links to them in my home folder. They are named like so:
/mnt/c # your C:\ drive
/mnt/s # your S:\ drive
...
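Creating such a symlink looks like this, sketched with a temporary directory standing in for /mnt/c (since that path only exists inside WSL):

```shell
windrive=$(mktemp -d)        # stands in for /mnt/c on a real WSL box
link=$(mktemp -u)            # a fresh, unused path for the link itself
ln -s "$windrive" "$link"    # ln -s TARGET LINKNAME
readlink "$link"             # prints the path the link points to
```

On an actual WSL system you would run something like `ln -s /mnt/c ~/c` once and browse your Windows files through `~/c` from then on.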
In case you mean a Linux livesystem:
If you use a system with a graphical user interface, you should see the respective drive somewhere in your file manager. Click on it and it should auto-mount; afterwards you can access your files just as you would expect via the file manager.
In case you're in terminal mode (i.e. you do not have any graphical user interface), things might get a little strange from a beginner's perspective. In this case I would recommend that you familiarize yourself with the rough structure of the Linux filesystem and with the commands mount, umount and sudo. Generally speaking, you will have to do the same thing your file manager does for you: mount the drive somewhere in the filesystem tree and then access that folder:
mount /dev/<drive> <directory> # mount your drive into the fs tree
cd <directory> # switch to that folder
ls # should display your drive's content
Closed 5 years ago.
Apart from the main disk, I used to have another disk attached to the host which is /dev/vdb. In order to mount it automatically, I made changes in the /etc/fstab file.
But after I removed /dev/vdb, my CentOS 6.7 system fails to start and always stops with an error saying /dev/vdb is unavailable.
At this point, I can type the root password to enter a command prompt, but when I try to edit the /etc/fstab file, it turns out the filesystem is mounted read-only. I can't understand why it's read-only since I'm already root.
I'm sure removing the problematic line from the fstab file would resolve my problem, but I have no idea how to get past the read-only mount to edit the file. Can somebody help me out?
Use a live-cd or -dvd to boot up your system (I find puppy linux works well for this task). Locate your /etc/fstab file on the hard drive and edit it manually.
Exit from the live-cd session, remove the disk and reboot normally. If /etc/fstab has been rewritten correctly you should start normally.
An alternative, of course, is to reattach the /dev/vdb device if possible; if the system then boots correctly, edit /etc/fstab from there.
If it is a virtual machine, you can mount the filesystem from the host, make the changes in /etc/fstab, unmount it, and boot the VM. Or you can chroot into the VM's filesystem as well with
sudo chroot /path
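Once /etc/fstab is writable, the stale entry can either be deleted or kept with the nofail option, so a missing disk no longer aborts boot. A sketch of the edited line (the device, mount point, and filesystem type here are examples, not taken from the question):

```
# before:  /dev/vdb  /data  ext4  defaults         0  2
# after — the disk may be absent without blocking boot:
/dev/vdb  /data  ext4  defaults,nofail  0  2
```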
Closed 6 years ago.
I just did something really dumb and I'm wondering if there's any way to reverse it. So I have an AWS EC2 server instance and I was trying to edit the php.ini in /etc. It kept telling me that I didn't have permissions to write to it, so I just thought, "Okay, well nobody's ever really going to see this site, I'll just chmod everything." So I did chmod -R 750 ... I know... I know... What was I thinking. So now it messed everything up and I can't SSH in with my ec2-user login anymore... Is there any way to fix this or did I just permanently wreck it?
If you are using an EBS-backed instance, you can recover SSH access by doing this:
Stop your crashed instance
Detach the EBS root device
Create a new instance
Attach the EBS to the new instance and mount it in /mnt
Fix your file permissions in /mnt/home/user/.ssh
Unmount and detach the EBS
Attach it to the crashed instance and start it
You should then have SSH access again, but note this won't fix permissions on everything else chmod touched. It only restores SSH access; afterwards you still have to fix the rest of your file permissions (or do that while the volume is mounted, in step 5).
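Step 5 can be sketched like this, demonstrated here on a scratch copy of the layout (on the real instance the prefix would be /mnt after attaching the volume, and the account is typically ec2-user):

```shell
root=$(mktemp -d)                        # stands in for /mnt on the rescue instance
mkdir -p "$root/home/ec2-user/.ssh"
touch "$root/home/ec2-user/.ssh/authorized_keys"
chmod 755 "$root/home/ec2-user"          # home dir may be world-readable
chmod 700 "$root/home/ec2-user/.ssh"     # sshd insists that .ssh is private
chmod 600 "$root/home/ec2-user/.ssh/authorized_keys"
```

These are the modes sshd's StrictModes check expects; anything group- or world-writable along that path makes sshd reject key authentication.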
Good luck!
Closed 7 years ago.
I created a FUSE mount point. After mounting, the file permissions are all screwed up and I cannot ls or cd into it: Permission denied.
The file permissions look like this:
d????????? ? ? ? ? ? temp
and when i list the mounted devices I get:
/dev/fuse on /temp type fuse (rw,nosuid,nodev)
I used mono-fuse. I just created a new folder with permissions 777 and then did a mount. After unmounting I can do all operations, but while it is mounted I get this error.
I used HelloFS.exe, which comes along with mono-fuse, for testing.
Can someone shed some light on this weird behavior and what mistake I have made?
I expect there is an incompatibility between the userspace FUSE library you're using and the kernel's FUSE version. This results in the kernel not understanding the responses and returning EIO for everything (including the stat calls that "ls" makes).
You should try increasing the debug level. As it's a Mono / CLR application, ensure that the libraries are of an appropriate version for your kernel; you may not need to recompile it.
You should also note that when you mount over a directory, the mount point's original permissions are ignored (and hence need not be 0777); the root directory of the new filesystem takes its place.
(You should probably not mount such a filesystem at /temp either; that path is just an example, not a place for temp files.)