webdav files without all permissions go to lost+found - linux

I'm trying to connect the Liferay 6.0.6 document library (installed on Linux and running on Tomcat) to an externally mounted folder shared through WebDAV.
I'm using mount.davfs to mount the folders on Linux, but whenever I create a file in a mounted folder, or add one via SFTP, it doesn't get the 777 permissions of its folder, and 95% of the time it simply ends up in the lost+found folder, leaving an empty file behind. So I can see files uploaded from the portal, but I can't upload files from my Linux machine.
But if I change the permissions of the file to 777 and then edit it again, or upload it again via SFTP to replace it, the file stays and can be seen/downloaded from the portal too!
Any idea why this is happening, and how I can get this share working with read/write access from both sides?
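One thing worth checking (a minimal sketch, assuming the share lives at https://example.com/webdav, the mount point is /mnt/webdav, and the portal runs as the tomcat user; all of these names are placeholders) is mounting with explicit ownership and mode options, which mount.davfs(8) accepts:
sudo mount -t davfs https://example.com/webdav /mnt/webdav \
    -o uid=tomcat,gid=tomcat,file_mode=664,dir_mode=775
If I remember right, davfs2 also moves files it could not upload to the server into a lost+found directory under its cache, so an empty file landing there usually points to a failed upload rather than only a local permission problem.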

Related

GCP Filestore error modifying shared folder contents with nodejs script under non-root user

I want to write a program that writes log files into a shared folder on a GCP Compute Engine instance. I used GCP Filestore to mount the NFS folder in an Ubuntu VM. After creating the folder, I noticed that I couldn't use cp to copy a file into that folder unless I used sudo. When I ran the Node.js script, it also returned a permission denied error. However, I don't want to run the Node.js script as root. Is there a way to modify the permissions so that I can write to the shared folder as the default, non-root user?
I changed the permissions of the shared folder to 777, but it didn't work. I still cannot write to the folder.
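A common first step (just a sketch, assuming the Filestore share is mounted at /mnt/filestore and the logs go in a subfolder called logs, both hypothetical) is to give the non-root user ownership of the directory the script writes to, since a chmod done on the wrong path, or as the wrong user, will not help:
sudo chown -R $USER:$USER /mnt/filestore/logs   # hypothetical log directory on the mount
touch /mnt/filestore/logs/test.log              # verify the non-root user can now write
If chown itself is refused, the export options on the Filestore side (for example root squash) may be the reason, and would need to be checked there rather than on the VM.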

Linux - Permissions to Create Directory / Delete Directory and Files / Change Permission for an entire Drive

I need your help to allow me to create directories, delete files, and save files in directories. Right now, I'm unable to do anything on this drive, "Ubuntu-Dados".
I've already tried a bunch of commands from this site, including running Nautilus and "sudo chmod -R -v 777 *", and nothing is working. Below is the result of my attempt at this issue. I need access to do anything on this drive. I'm using Ubuntu 20.04.
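A hedged sketch of what usually works, assuming the drive is a native Linux filesystem (ext4 or similar) mounted under /media: first confirm where and how it is mounted, then take ownership as your own user instead of opening everything to 777:
findmnt | grep -i dados                                  # locate the mount point and filesystem type
sudo chown -R $USER:$USER "/media/$USER/Ubuntu-Dados"    # assumed mount path, adjust to what findmnt shows
If findmnt reports NTFS or exFAT, chmod and chown will not stick; ownership has to be set with mount options (uid=/gid=) when the drive is mounted instead.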

How to extract/decompress this multi-part zip file in Linux?

I have a zip file split into parts titled like so: file1.zip, file2.zip, file3.zip, etc.
How do I go about extracting these files together correctly? They should produce one output file.
Thanks for the help!
First, rename them to "file.zip", "file.z01", "file.z02", etc. as Info-ZIP expects them to be named, and then unzip the first file. Info-ZIP will iterate through the split files as expected.
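For example, a sketch assuming file3.zip is the final segment of the set (the one holding the central directory) and that Info-ZIP's zip 3.0 and unzip are installed:
mv file1.zip file.z01
mv file2.zip file.z02
mv file3.zip file.zip                  # the final segment keeps the .zip name
zip -s 0 file.zip --out single.zip     # recombine the split set into one archive
unzip single.zip
Recombining with zip -s 0 first tends to be the more reliable route if unzip complains about the split parts.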
I found a way. I had to mount the remote machine's user home folder on my Ubuntu desktop PC and use the File Roller Archive Manager, which is just listed as Archive Manager in Ubuntu 18.
Mount remote home folder on local machine...
Install sshfs
sudo apt install sshfs
Make a directory for the mount. Replace remote with whatever folder name you want.
mkdir remote
Mount the remote file system locally, replacing linuxusername with the user account you want to log in with, and xxx.xxx.xxx.xxx with its IP address or hostname.
sudo sshfs -o allow_other linuxusername@xxx.xxx.xxx.xxx:/ remote
Now, in the mounted "remote" folder, you can see the contents of the whole Linux filesystem and navigate it in a file manager just like your local file system, limited of course by the privileges of the remote user account, so you can only write to locations that user can write to, such as its home folder.
Using Archive Manager, I opened the .zip file of the spanned set (not the .z01, .z02, etc. files) and extracted it inside the "remote" folder. I saw no indication of extraction progress; the bar stayed at 0% until it was complete. Other X Window based archive applications might work too.
This is slow, about 3-5 megabytes per second on my LAN. I noticed Archive Manager uses 7z to extract, but I do not know how, as 7z is not supposed to support spanned sets.
Also, if your SSH server is Dropbear instead of OpenSSH's sshd, it will be unbearably slow for large files. I had to extract a 160 GB archive, and the source filesystem was FAT32, so I was not able to combine the spanned set into one zip file, as FAT32 has a 4 GB file size limit.

Using tar -zcvf against a folder creates an empty compressed file

I am SSH'ed into an Acquia server trying to download some files. I need to back up these files for local development (mainly to get user-uploaded images).
I am using the following command:
tar -zcvf ~/download/stage-files_3-19-2015_1344.tar.gz files/
I have read/write access to the download folder; I created that folder. I am in the parent folder of "files", and permissions on that folder are 777.
I was able to run this the other day with no issues. So I am very confused as to why this is happening now.
Actually, I just figured this darn thing out. I must have run out of disk space, because once I removed a prior compressed backup of the files it started running just fine. Dang disk quotas. Sorry, guys.
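If anyone else hits this, a quick way to rule out disk space or quota issues before rerunning tar (a sketch using the same paths as above):
df -h ~/download        # free space on the filesystem holding the archive
du -sh files/           # size of what you are about to compress
quota -s                # per-user quota usage, if quotas are enabled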

PhpStorm write issues in the .idea directory

When I try to save a file to disc within a project directory, I get this error:
java.io.IOException: W:\[projectname]\.idea not found
Some research tells me the (network) location is not writable.
I'm trying to write this file from PhpStorm on Windows 8.
The drive (W:) is a network drive mapped to a Linux machine.
The directory I try to write to is chowned to the same user and group that I connect with from Windows.
This is the result of ls -alh:
drwxrwxrwx 2 correct-user correct-user
On Linux and other Unix-like operating systems, files starting with a . are considered 'hidden files' by default. As such, when the Windows-based program creates the directory, it cannot see it right afterwards because it is hidden, even though the creation was successful. You can fix this in your Samba config by adding the following line to the share configuration:
hide dot files = no
In my Samba settings I had added a veto files parameter. Removing this parameter allows me to write dot files again.
Samba describes this setting as follows:
This is a list of files and directories that are neither visible nor accessible.
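Putting the two answers together, the relevant part of the share definition in smb.conf might look something like this (a sketch; the share name and path are hypothetical):
[projects]
   path = /srv/projects
   hide dot files = no
   ; remove or narrow any "veto files" line that matches dot files, e.g.:
   ; veto files = /.*/
Reload Samba afterwards (for example with sudo smbcontrol all reload-config, or by restarting smbd) so the change takes effect.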
