Can create and delete files on mounted file system yet not create links or change ownership - linux

I have a Xubuntu install running in a VM (VirtualBox) on a Windows 10 host. There is a directory on the Windows file system which I have mounted in the guest as a vboxsf. I think it's a Linux problem but that's the background in case it's relevant.
I have write access to this directory and all files within it (everything is -rwxrwxrwx). I can create, modify and delete files and directories in it. But trying to create a soft link (ln -s) or to chown a file or directory to a different owner fails; the ln attempt produces the following message:
ln: failed to create symbolic link 'myLink': Read-only file system
I have tried everything I can think of, including unmounting and re-mounting. I don't understand how I am able to write, modify and delete files, yet a symbolic link produces "read only". chown completes without an error or warning, but ownership is unchanged when it is done.

So eventually I found the answer to this. It's a bug / design decision in VirtualBox itself. See here:
https://www.virtualbox.org/ticket/10085
They used to support it and then realised it enabled a very hard-to-fix security vulnerability, so they deliberately disabled linking in their shared folders. There's no great workaround. You can edit your VM's .vbox file to add the following:
<ExtraDataItem name="VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARE_NAME" value="1"/>
That comes with security risks so you must trust your guest. You can also (and this is what I might do) create an NFS mount point and connect to it in more old school ways.
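Equivalently, the same setting can be applied from the host command line with VBoxManage (setextradata and getextradata are documented VirtualBox commands). "Xubuntu" and "myshare" below are placeholder VM and share names; apply it while the VM is powered off:
VBoxManage setextradata "Xubuntu" VBoxInternal2/SharedFoldersEnableSymlinksCreate/myshare 1
VBoxManage getextradata "Xubuntu" enumerate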

VSCode - what exactly --user-data-dir is specifying

What exactly is --user-data-dir specifying?
From the --help output:
--user-data-dir <dir> Specifies the directory that user data is kept in. Can be used to open multiple distinct instances of Code.
Is it storing some temporary files there?
Is it about the access path to config files?
I am asking because I want to run VSCode (or Codium, to be exact) with sudo (I want to edit a system config file that is read-restricted), which requires this parameter for reasons unclear to me.
Since sudo-ing VS Code at command-line launch is only a thing on Linux, this question assumes you're on Linux, and restricts its context to Linux.
TL;DR
To answer your question directly: the user-data-dir parameter points to a folder where all personalisation except extensions resides — unique to each user.
Why does sudo-ing Code need --user-data-dir?
In a typical installation of the OS and VS Code, this folder is owned by the regular user. A VS Code session running with effective UID=0 would therefore write root-owned files into the invoking user's (not the superuser's) config folder, leaving it in a broken state. This is what the error message prevents from happening, by forcing the user to provide an explicit root-accessible folder.
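To illustrate (the refusal message is paraphrased and varies by version; /root/.vscode-root is just a placeholder path, and recent builds may additionally require --no-sandbox when running as root):
sudo code
# => refuses to start: "... you must specify an alternate user data directory using the --user-data-dir argument."
sudo code --user-data-dir=/root/.vscode-root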
Detailed Explanation
There are two main folders that VS Code uses to store configuration data:
An extensions folder (self-explanatory), contained in ~/.vscode [1]
user-data-dir; a folder for all other personalisable things (settings, keybindings, GitHub/MS account credential caches, themes, tasks.json, you name it)[2]
On Linux the latter is located in ~/.config/Code, and has file permissions mode 0700 (unreadable and unwritable by anybody other than the owner).
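You can verify both paths and the mode yourself; on a stock install the second entry shows drwx------ (0700):
ls -ld ~/.vscode ~/.config/Code
stat -c '%a %U %n' ~/.config/Code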
This is what makes elevated sessions problematic. The logical solution is either to modify the permissions (recursively) of ~/.config/Code to allow root access, or, arguably saner and objectively more privacy-respecting, to use a separate directory altogether for the sudo'ed VS Code to access.
The latter strategy is what the community decided to adopt at large; this commit from 2016 started making it compulsory to pass an explicit --user-data-dir when sudo-ing VS Code on Linux.
Should You be Doing This in the First Place?
Probably not! If your goal is to modify system config files, then you could stick to an un-elevated instance of Code, which would prompt you to Save as Admin... when you try to save. See this answer on Ask Ubuntu on why you probably want to avoid elevating VS Code without reason (unless you understand the risks and/or have to), and this one on the same thread on what you could do instead.
However, if the file concerned is read-restricted to root as well, as in the OP's case, then you hardly have a choice 😕; sudo away! 😀
[1] & [2]: If you want to know more about the above two folder paths on different OSes, see [1] and [2]
Hope this was helpful!
It might be helpful to know how to find the default location of the user-data-dir on any OS. It can be opened via this Command Palette command:
Developer: Open User Data Folder
(workbench.action.openUserDataFolder)
which is in the Insiders Build v1.75 now, and coming to Stable soon. It opens your OS file explorer app at that location.

Perforce messes up symlinks

When I download source code from Perforce, the symlinks get downloaded as regular files and the project, of course, doesn't build. This happens on certain computers and virtual machines, yet the same symlinks download fine on other computers.
The downloaded file is often a short text file which just contains the path of the link target, instead of being the symlink itself.
This actually had to do with user permissions on Windows, not so much with Perforce. The problem is that the user doesn't have permission to create symlinks, so Perforce ends up creating a plain file (in my opinion, it should generate an error message instead of silently converting the symlink to a file).
The simple solution in most cases is to start P4V as administrator and then download the source code. You may have to force it to re-download everything, since it will not re-download the wrong symlinks; those objects already exist on disk.
You can check whether you have the required privilege with the following command.
mklink <linkFile> <ExistingFile>
Note: you may well be able to create shortcuts (.lnk files) using File Explorer, but those are not symlinks; it's the command line (above) that will determine whether you have the privilege or not.
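A quick way to test this from cmd.exe, sketched below; the depot path in the last command is a placeholder:
whoami /priv | findstr /i SeCreateSymbolicLink
:: the privilege is absent for non-elevated users unless Developer Mode grants it (Windows 10+)
echo test > target.txt
mklink link.txt target.txt
:: once fixed, force a re-sync so the bogus plain files get replaced:
p4 sync -f //depot/project/...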

Renaming executable's image name is giving it write permission

Dear community members,
We have three identical Windows 7 Professional computers. None of them is connected to a domain or directory service, etc.
We run the same executable image on all three computers. On one of them I had to rename it, because with my application's original filename it has no write access to its working directory.
I manually granted full access permissions to the Users group on the working directory, but this did not solve it.
I suspect some kind of deny mechanism in Windows based on the executable's name.
I searched the registry for the executable's name but did not find anything relevant or meaningful.
This situation occurred after a lot of crashes and updates of my program on that computer (I am a developer). One day, it suddenly stopped being able to open its files. I did not touch the registry or change anything else in the OS.
My executable's name is karbon_tart.exe
When it starts, it calls CreateFile (open mode if the file exists, create mode if not) to open the karbon_tart.log and karbon_tart.ini files.
I tried it both with the files existing and without them, and in neither case can the program open the files.
But if I just rename the executable to karbon_tart_a.exe, the program can open the files whether they exist or not.
Thank you for your interest
Regards
Ömür Ölmez.
I figured it out in the end.
It was because of an old copy of my application's files in the Virtual Store.
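For anyone hitting the same thing: UAC file virtualization silently redirects writes from legacy executables into a per-user VirtualStore, and a stale redirected copy then shadows the real file under the original name. You can look for stale copies from cmd.exe; the karbon_tart names are from the question above, and the exact subfolder depends on where the app lives:
dir /s "%LOCALAPPDATA%\VirtualStore"
:: a stale copy might sit at, e.g.:
:: %LOCALAPPDATA%\VirtualStore\Program Files\KarbonTart\karbon_tart.ini
:: deleting the stale VirtualStore copies makes the original name behave again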

Linux application in filesystem sandbox

Is it possible to install and run applications using the regular filesystem, but have created files and changes written to a specific directory?
I want to make an application believe it is installed to the system root and remove it by just deleting one folder from my home directory. A lightweight solution would be great!
It should be possible by combining unionfs and mount namespaces: create a mount namespace (using unshare(1)), mount a unionfs over everything, and run the application there (I haven't done it myself, so no example commands, sorry; see the sketch below).
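A minimal, untested sketch of that idea, using overlayfs (the in-kernel successor to unionfs) in place of unionfs; it needs root, and all paths are placeholders:
sudo unshare --mount sh -c '
  mount -t tmpfs tmpfs /mnt &&
  mkdir -p /mnt/upper /mnt/work /mnt/root &&
  mount -t overlay overlay -o lowerdir=/,upperdir=/mnt/upper,workdir=/mnt/work /mnt/root &&
  chroot /mnt/root /bin/sh
'
# All writes land in /mnt/upper; the real / is untouched. With a tmpfs upper
# layer the changes vanish when the namespace exits; put upper/work on a
# separate persistent filesystem to keep them.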
Take a look at Mbox: http://pdos.csail.mit.edu/mbox/
It intercepts system calls and redirects filesystem writes to a sandbox directory which you can specify.

WordPress unzip_file() results in mkdir_failed (permissions)

I am creating a WordPress framework that has an auto update facility. When the system updates the framework, it downloads a .zip file (works ok, stored in a temp folder), and afterwards tries to extract that zip file to a place within the theme. When unzipping, it throws an error complaining about not being able to create a directory ("mkdir_failed").
The parent of the target folder has permissions "775" for user "bitnami" and group "bitnami":
root#linux:/home/bitnami# ls -al /opt/bitnami/apps/wordpress/htdocs/wp-content/themes/nexus
...
drwxrwxr-x 6 bitnami bitnami 4096 Oct 23 14:02 nexusframework
...
And I tried to put the "daemon" user in the "bitnami" group:
usermod -a -G bitnami daemon
Which indeed seems to be assigned correctly, as I see:
root#linux:/home/bitnami# id daemon
uid=1(daemon) gid=1(daemon) groups=1(daemon),1000(bitnami)
So: if the "daemon" user is in the "bitnami" group and the folder has 775 access rights, then why does it fail with "mkdir_failed"?
(Note: assigning "777" to the parent folder solves the problem, but this is not an option for security reasons.)
Thanks!
- Gert-Jan
update;
After doing more investigation into Linux in general, I read that Linux automatically creates a 'private' group for each user (so a bitnami group for the bitnami user, etc.). I don't know whether the problem is caused by my trying (and apparently succeeding?) to add other users to that same group.
update;
See my answer below on how I resolved my issue.
Ok, thanks for all the comments. I eventually decided not to continue my investigation but to head in another direction, as having to rely on the containing folder's "775" permission would be unwise for the framework (many clients would have 755 instead, so getting this to work for a group is nice but would eventually not solve my problem).
Instead, I further investigated how WordPress itself downloads and unzips themes and decided to follow that route.
The key problem I was trying to tackle was to have the unzipped files owned not by the 'daemon' user but by the 'bitnami' user. The reason the code "impersonated" the daemon user was that I had manually told it to use the 'direct' FS_METHOD (as it appears, WP offers various ways to interact with the filesystem, of which the easiest is 'direct', see here). However, using the 'direct' FS_METHOD is the core reason I had this problem, as it uses the credentials of the web server (the 'daemon' user in my case). By using a different FS_METHOD, I am now able to unzip the files in the folder as the correct 'bitnami' user (who owns the containing folder and has permissions; 775 or 755 wouldn't matter), so my problem is solved. Note that instead of writing directly to the filesystem, PHP will now use FTP (see here).
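For reference, the switch amounts to a few constants added to wp-config.php (above the "That's all, stop editing!" line). FS_METHOD and the FTP_* names are standard WordPress settings; the host and credentials below are placeholders for your own FTP server:
define('FS_METHOD', 'ftpext');
define('FTP_HOST', 'localhost');
define('FTP_USER', 'bitnami');
define('FTP_PASS', 'your-ftp-password');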
Does it work if you change the group of the folder to daemon?
chgrp -R daemon /opt/bitnami/apps/wordpress/htdocs/wp-content/themes/nexus
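One more gotcha worth checking with the usermod -a -G approach from the question: a running process keeps the supplementary groups it started with, so the web server has to be restarted before the new membership takes effect. A sketch for the Bitnami stack from the question (ctlscript.sh is Bitnami's standard control script; the httpd process name is an assumption):
grep Groups /proc/$(pgrep -o httpd)/status
sudo /opt/bitnami/ctlscript.sh restart apache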
