Content of mounted folder deleted on Ubuntu reboot

I have created a chrooted SFTP user and mounted a directory to the user's chrooted home.
Within this directory I have one directory for each website the SFTP user has access to.
When I rebooted my Ubuntu 10.04 server, the content of the mounted folder was gone:
/home/chrootedUser/websites/website1
To my frustration, the website1 directory is gone/deleted.
My /etc/fstab config:
http://pastebin.com/gxz3w9Mg
My mounts (output of the mount command):
http://pastebin.com/XcGGvGVE
I hope someone can point me in the right direction, please let me know if you need anything else.

Unmount /home/chrootedUser/websites and your files will be there. Most likely the mount was not active when you created those files, so they were written to the underlying directory; now that the mount works, they are hidden beneath it.
fstab should handle the automounting for you just fine. It's difficult to tell exactly what went wrong, but you can read /proc/self/mounts to check your mounts.
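A quick way to test this theory, using the paths from the question (a sketch, assuming the fstab entry is otherwise correct):
# Unmount so the underlying directory becomes visible again
sudo umount /home/chrootedUser/websites
ls -la /home/chrootedUser/websites
# The "lost" website1 directory should appear here; move its contents aside,
# remount, and copy them back on top of the working mount
sudo mount /home/chrootedUser/websites
grep websites /proc/self/mounts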

Related

Unable to create / edit files as non-root through Samba mount

I'm trying to set up a code-server (VS Code in the browser) instance and read/write from a mounted Samba share. Unfortunately, when I try to add a file, I get an error saying I do not have permission to read/write to that folder. When I add files with the same credentials on Windows, it does work, though. This is the error that VS Code gives me:
Unable to write file
'vscode-remote://localhost:8080/home/user/repository/test'
(NoPermissions (FileSystemError): Error: EACCES: permission denied,
open '/home/gmetitieri/user/test')
If I sudo touch file.txt, the file is created and added. I have already used chmod to grant full access to the folder, but it still won't work. Is this a credentials thing, or am I missing something?
I already tried this answer, but it still doesn't let me write as non-root.
Edit: This is the command I used to mount the drive (just with different folder names and IP address):
sudo mount -t cifs -o rw,vers=3.0,credentials=/root/.examplecredentials //192.168.18.112/sharedDir /media/share
Considering "non-root through Samba", especially in new releases of OpenSuse (...15.3 -- 15.4), I do few movements into normal configuration panels (no sudo commands or anything technical).
Using Yast Firewall section -- For now (immediate solution):
I turn off the firewall, then see what you can turn on (after this) to keep the samba working with Microsoft Windows.
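From the command line, the same idea on a firewalld-based system (my addition, not part of the original answer) is to allow the Samba service instead of disabling the firewall entirely:
# Permit Samba traffic through firewalld, then apply the change
sudo firewall-cmd --permanent --add-service=samba
sudo firewall-cmd --reload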
This happens when the directory on the Samba share does not have permission for non-root users.
In your smb4.conf file:
[test]
    comment = Test share
    path = /path/to/directory
    force user = unixuser
    valid users = sambauser
In this example, unixuser should be the owner of the files in /path/to/directory. The user logged into Samba in this example is a user called sambauser.
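A client-side alternative worth noting (an assumption, not from this answer): CIFS mounts present all files as owned by root unless uid/gid mount options say otherwise, so mapping the mount to the local account can also clear the EACCES error. The username gmetitieri is a guess based on the error path in the question:
# Map ownership of the mounted files to the local non-root user (hypothetical username)
sudo mount -t cifs -o rw,vers=3.0,uid=$(id -u gmetitieri),gid=$(id -g gmetitieri),credentials=/root/.examplecredentials //192.168.18.112/sharedDir /media/share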

Linux AWS EC2 Permissions with rsync

I am running a default t2.nano EC2 Linux AMI; nothing is changed on it. I am trying to rsync my local changes to the server, but there is a permissions issue that I don't know enough about to fix.
My structure is as follows. I'm trying to push my work to the technology directory, which is mapped to a staging domain, i.e. technology.staging.com:
/var/www/html/technology
The site itself works fine from that path; it's the rsync that is failing.
When I push locally to that directory, I get a "failed: Permission denied (13)" error.
I'm running an nginx server and assigned permissions to the www directory as follows:
sudo chown -R nginx:nginx /var/www
My user is ec2-user, the normal default. Here is where I am tripped up: the var directory is owned by root, and the www directory's ownership is set to nginx so the server can access the files. I believe I need to give ec2-user access to this directory alongside the nginx user, so that I can rsync my files there while the server keeps its access; I'm just unsure how to do that.
As a test, I created a test directory at this location, and rsync to it worked successfully:
/home/ec2-user/test
The permissions there are set for ec2-user, which I'm sure is why it works.
Here's the command I'm running on my local machine to rsync my files, which fails:
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/var/www/html/technology
Here's the command that was working:
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/home/ec2-user/test
I have done enough research and testing to know that it's a permissions error; I just can't figure out the right way to solve it. Do I need to create a group, assign both nginx and ec2-user to it, and then give that group the same permission level on the /var directory?
Side note: what permission level do I set with chown to reproduce the permissions that are currently there?
I have server config files in the /etc/nginx/conf.d/ directory that map to the directories I create inside /var/www/html, so I can host multiple sites on the server.
So in this example, I have a config file at /etc/nginx/conf.d/technology.conf which maps to the directory at /var/www/html/technology.
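For reference, such a config file might look roughly like this (a hypothetical sketch; only the server_name and root path come from the description above):
# /etc/nginx/conf.d/technology.conf
server {
    listen 80;
    server_name technology.staging.com;
    root /var/www/html/technology;
    index index.html;
}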
Thank you in advance. Again, I do feel like I have put forth the research and effort to show that I've gone as far as I know how to go.
The answer made sense after I spent roughly a day playing around. You have to give access to both the ec2-user user and the nginx group. I believe you never want to put a regular user directly into the group the server itself runs as; I think things would go south.
After changing the owner to ec2-user and the group to nginx, it still didn't work exactly the way I wanted. The reason was that the group permissions needed to be raised to the level nginx had back when it was the owner.
Basically, the ec2-user had write permissions and the server did not. We wanted the user to have write permissions so I could rsync my local files to the directory on the server, and the nginx group needed the same level of permissions to serve the pages. Now that I think about it, the nginx group may only have needed read permissions to serve things, but this at least solved the problem for now.
Here are the commands I ran on the server to update the ownership and the permissions.
Modify ownership:
sudo chown -R ec2-user:nginx /var/www/html/technology
Modify permissions (run from /var/www/html):
sudo chmod -R u=rwx,g=rwx,o=rx technology
The end result is that the user and group permissions match and the ownership is as expected. The only thing I have to figure out is that after I rsync new files to the server, I need to re-run the commands above to fix the permissions again. I'm sure that will come to me later, but I hope this helps anyone in the same situation.
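One way to avoid re-running those commands after every upload (my own suggestion, not part of the original answer) is to set the setgid bit on the directories so new files inherit the nginx group, and let rsync apply group permissions during the transfer:
# New files and directories created below this point inherit the nginx group
sudo find /var/www/html/technology -type d -exec chmod g+s {} +
# Apply group read (and directory traverse) bits as files are transferred
rsync -azP --chmod=Dg+rxs,Fg+r -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/var/www/html/technology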

NFS mount using Chef on Linux | permissions of directory not getting changed

I am trying to do an NFS mount using Chef, and I have mounted it successfully. Please find the code below.
# Execute mount
node['chef_book']['mount_path'].each do |path_name|
  mount "/#{path_name['local']}" do
    device "10.34.56.1:/data"
    fstype 'nfs'
    options 'rw'
    retries 3
    retry_delay 30
    action %i[mount enable]
  end
end
I am able to mount successfully and make an entry in the fstab file. But after mounting, the user:group of the mounted path changes to root:root, which I was not expecting.
I want to use myuser:mygroup as owner:group. I tried changing it with the chown command, but I get a permission denied error.
I would appreciate some guidance.
As mentioned in the comment, this is not something Chef controls per se. After the mount, the directory will be owned by whatever the NFS server says. You can try to chown the directory after mounting, but whether that is allowed is up to your NFS configuration.
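A minimal sketch of that follow-up step, using the owner:group names from the question; whether it succeeds depends entirely on the server's export options (e.g. root_squash vs no_root_squash), and the path is a placeholder:
# On the client, after the Chef run has mounted the export
sudo chown myuser:mygroup /your/mount/path
# "Operation not permitted" here means the change must be made on the NFS
# server itself (or the export options adjusted); the client cannot override it.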

Raspbian Wheezy Owncloud and NFS together

I am trying to set up a file/DLNA server on a Raspberry Pi (Raspbian Wheezy), with the files shared among all the devices I use, Android and Linux at a minimum.
I have a USB drive with some decent storage where I keep all my files. So far, I have had NFS and DLNA serving the USB drive contents.
Recently I installed ownCloud, which requires its data directory to be owned by www-data. I have therefore mounted the USB drive (from fstab) with the options rw,user,uid=33,gid=33,mask=007. ownCloud works fine (though it is very slow to render the contents).
My NFS exports file is as follows:
/owncloud_data/mystuff *(rw,all_squash,anonuid=33,anongid=33,no_subtree_check)
showmount -e localhost displays the following:
Export list for localhost:
/owncloud_data/mystuff (everyone)
However, when I issue
sudo mount localhost:/owncloud_data/mystuff /my_nfs
I get the following error:
mount.nfs: access denied by server while mounting localhost:/owncloud_data/mystuff
I don't understand why. My guess is that this is because /owncloud_data/mystuff is owned by www-data. But the NFS server runs as root; shouldn't it be able to read the data? Or am I missing something here? I don't get any useful logs in /var/log/messages, even after including the --debug all option in the NFS config.
I haven't started on the DLNA part yet (I have installed minidlna, which was working with NFS before I installed ownCloud).
Or is there a better solution for what I am trying to do?
Please let me know if you need more information.
Thanks
I won't mark this as the answer; it is a workaround.
The problem is that if I export /owncloud_data/mystuff, the NFS mount does not work, but if I export all of /owncloud_data, it works fine (with the export options mentioned in the original post). I then simply mount /owncloud_data/mystuff on the client side (though technically I could mount /owncloud_data there).
I would be happy if anybody can explain this behaviour and show how to export /owncloud_data/mystuff directly.
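For comparison, the two /etc/exports variants involved; the first line is from the question, the rest is my reconstruction of the workaround:
# Failed: exporting the subdirectory directly
/owncloud_data/mystuff *(rw,all_squash,anonuid=33,anongid=33,no_subtree_check)
# Worked: exporting the parent, then mounting the subdirectory from the client
/owncloud_data *(rw,all_squash,anonuid=33,anongid=33,no_subtree_check)
After editing /etc/exports, re-export and verify before mounting:
sudo exportfs -ra
showmount -e localhost
sudo mount localhost:/owncloud_data/mystuff /my_nfs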

Mounting a folder from another machine in Linux

I want to mount a folder that lives on another machine onto my Linux server. To do that I am using the following command:
mount -t nfs 192.xxx.x.xx:/opt/oracle /
which fails with the following error:
mount.nfs: access denied by server while mounting 192.xxx.x.xx:/opt/oracle
Does anyone know what's going on? I am new to Linux.
Depending on what distro you're using, you simply edit the /etc/exports file on the remote machine to export the directories you want, then start your NFS daemon.
Then on the local PC, you mount it using the following command:
mount -t nfs {remote_pc_address}:/remote/dir /some/local/dir
Also, try mounting onto a directory under your home; as far as I know, you can't mount something directly onto the root directory like that.
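An end-to-end sketch of what this answer describes (the client network 192.168.0.0/24 and the NFS service name are assumptions; adjust for your distro):
# On the remote machine: export the directory and reload the export table
echo '/opt/oracle 192.168.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
sudo systemctl restart nfs-server    # or: service nfs-kernel-server restart
# On the local machine: mount onto an empty directory, not /
sudo mkdir -p /mnt/oracle
sudo mount -t nfs 192.xxx.x.xx:/opt/oracle /mnt/oracle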
