cygwin sets file permissions to 000

I have a folder /cygwin/d/myfolder/
Every time I save files there and then do an ls -la from Cygwin, I see that the files are given permission 000. That causes me quite a bit of trouble, as I rsync this folder to my server and none of the files are accessible there. How can I get the files to automatically receive reasonable permissions?

Have a read through the answers at this link:
http://cygwin.1069669.n5.nabble.com/vim-and-file-permissions-on-Windows-7-td61390.html
The solution there worked for me also:
Edit /etc/fstab and add this line at the end of the file:
none /cygdrive cygdrive binary,noacl,posix=0,user 0 0
Then close all Cygwin processes, open a new terminal and ls -l on your files again.
Explanation:
By default, Cygwin uses the filesystem's access control lists (ACLs) to implement real POSIX permissions. Some Windows-native program or process may create or modify the ACLs such that Cygwin computes the POSIX permissions as 000. With the noacl mount option, Cygwin ignores filesystem ACLs and only fakes a subset of permission bits based on the DOS readonly attribute.
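Once you have reopened the terminal, a quick way to confirm the option took effect is to check the mount table; the exact output will differ on your machine, but the noacl flag should show up for the cygdrive prefix, roughly like this:
$ mount | grep noacl
none on /cygdrive type cygdrive (binary,noacl,posix=0,user)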

Check that your umask is set correctly with the umask command. If your umask is, say, 0777, it masks out all permission bits on new files, which end up with 000 permissions. There are probably several other possibilities to consider beyond that.
If your id is not set up correctly in /etc/passwd and /etc/group, that can also cause ls to show unexpected results. Check the permissions of the folder, and check the underlying Windows permissions with the getfacl command. Checking the mount command's output may also help.
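For example, a quick round of checks might look like this (somefile.txt is just a stand-in for one of the affected files):
$ umask                                      # should be something like 0022, not 0777
$ ls -ld /cygwin/d/myfolder
$ getfacl /cygwin/d/myfolder/somefile.txt    # shows the Windows ACL as Cygwin sees it
$ mount                                      # shows the options each path is mounted with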

In the answer above, this solution was proposed:
Edit /etc/fstab and add this line at the end of the file:
none /cygdrive cygdrive binary,noacl,posix=0,user 0 0
And in that answer there was this comment:
When I try this, all my files are -rw-r--r-- no matter what chmod() I do. I can't mark the files as executable; it just reverts to 0644. (umask==0022)
I had this same problem, but it manifested as an inability to execute DOS batch files (*.bat) when running Cygwin ksh or mksh. I stumbled across this website: http://pipeline.lbl.gov/code/3rd_party/licenses.win/cygwin-doc-1.4/html/faq/ which contains this helpful advice:
Note that you can use mount -x to force Cygwin to treat all files under the mount point as executable. This can be used for individual files as well as directories. Then Cygwin will not bother to read files to determine whether they are executable.
So then cross-referencing with this page - https://cygwin.com/cygwin-ug-net/using.html#mount-table - with its advice:
cygexec - Treat all files below mount point as cygwin executables.
I added cygexec to the fourth field of my fstab. That did it: my .bat is now executable inside ksh/mksh, which is necessary since I'm running a Jenkins job that calls a Korn shell stack three files deep that I have no control over. I just needed the .bat to run!
Update: on further testing, the solution above wasn't quite what I needed. It caused some executables such as javac and cl to behave oddly (the utilities would just print their usage and exit). I think what I needed instead of 'cygexec' was simply 'exec'. As the same page notes:
exec - Treat all files below mount point as executable.
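For reference, the fstab line from the earlier answer with exec added would look something like this (keep or drop the other options to match your own setup):
none /cygdrive cygdrive binary,noacl,posix=0,user,exec 0 0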

On my Win7 PC, files usually showed up as
----------+ 1 David None 69120 Jun 17 13:17 mydoc.txt
I tried all of the above with no luck.
It turned out I still had some old historical mount entries in my .zshrc.
I deleted these and, Bob's your uncle, the problem went away!

Execute a bash script without typing ./ [duplicate]

I feel like I'm missing something very basic so apologies if this question is obtuse. I've been struggling with this problem for as long as I've been using the bash shell.
Say I have a structure like this:
├──bin
│   └──command (executable)
This will execute:
$ bin/command
then I symlink bin/command to the project root
$ ln -s bin/command c
like so
├──c (symlink to bin/command)
├──bin
│   └──command (executable)
I can't do the following (errors with -bash: c: command not found)
$ c
Instead, I must do:
$ ./c
What's going on here? Is it possible to execute a command from the current directory without preceding it with ./ and without using a system-wide alias? It would be very convenient, for distributed executables and utility scripts, to give them one-letter, folder-specific shortcuts on a per-project basis.
It's not a matter of bash not allowing execution from the current directory, but rather, you haven't added the current directory to your list of directories to execute from.
export PATH=".:$PATH"
$ c
$
This can be a security risk, however: if the directory contains files you don't trust or don't know the origin of, a file in the current directory could be confused with a system command.
For example, say the current directory is called "foo" and your colleague asks you to go into "foo" and set the permissions of "bar" to 755. As root, you cd into "foo" and run "chmod 755 bar".
You assume chmod really is chmod, but if there is a file named chmod in the current directory that your colleague put there, "chmod" is really a program he wrote, and you are running it as root. Perhaps that "chmod" resets the root password on the box or does something else dangerous.
Therefore, the standard is to limit command executions which don't specify a directory to a set of explicitly trusted directories.
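If you want to see the shadowing effect for yourself without any risk, something like this works in a throwaway directory (all names here are made up, and the fake command only prints a message):
$ mkdir /tmp/pathdemo && cd /tmp/pathdemo
$ printf '#!/bin/sh\necho "I am not the real chmod"\n' > chmod
$ chmod +x chmod          # still runs the real chmod, "." is not on the PATH yet
$ export PATH=".:$PATH"
$ hash -r                 # drop bash's cached command locations
$ chmod 755 somefile      # now runs ./chmod instead of /bin/chmod
I am not the real chmod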
Beware that the accepted answer introduces a serious vulnerability!
You might add the current directory to your PATH, but not at the beginning of it; putting it first is a very risky setting.
There are still possible vulnerabilities when the current directory is at the end but far less so this is what I would suggest:
PATH="$PATH":.
Here, the current directory is only searched after every directory already present in the PATH has been explored, so the risk of an existing command being shadowed by a hostile one is gone. There is still a risk of an uninstalled command or a typo being exploited, but it is much lower. Just make sure the dot is always at the end of the PATH when you add new directories to it.
You could add . to your PATH. (See kamituel's answer for details)
Also there is ~/.local/bin for user specific binaries on many distros.
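For instance, sticking with the symlink idea from the question, something along these lines works on distros where ~/.local/bin is already on the PATH (the project path is only an example):
$ mkdir -p ~/.local/bin
$ ln -s ~/projects/myproject/bin/command ~/.local/bin/c
$ c          # found via ~/.local/bin, no ./ needed
Note that this gives you a single user-wide shortcut rather than a per-project one, so pick the names accordingly.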
What you can do is add the current dir (.) to the $PATH:
export PATH=.:$PATH
But this can pose a security issue, so be aware of that. See this ServerFault answer on why it's not such a good idea, especially for the root account.

Prevent a Unix domain socket file in the filesystem from being deleted while socket is bound

Is it possible on Linux or MacOSX to prevent a Unix domain socket file (e.g. in /tmp) that is currently bound from being deleted? I want a mode 0777 socket that users can connect to but that users cannot delete while the daemon is running.
Right now a normal user can 'rm' the socket, preventing anyone else from accessing it until the daemon is restarted. Seems like it should be 'busy' if it's bound.
You could make a new subdirectory and set read only permissions on the directory after you make the socket:
mkdir /tmp/blah
cd /tmp/blah
# do stuff to create /tmp/blah/socket
chmod 555 /tmp/blah
rm /tmp/blah/socket
rm: cannot remove /tmp/blah/socket: Permission denied
(or the equivalent to that from C / your language of choice)
It depends entirely on the directory that contains the socket. /tmp is somewhat special in that it has the "sticky bit" set on the directory (if you run ls -ld /tmp you will see the permissions field is usually drwxrwxrwt, i.e. mode 1777). That sticky bit (the t at the end) is important when set on a directory. Quoting man chmod:
The restricted deletion flag or sticky bit is a single bit, whose interpretation depends on the file type. For directories, it prevents unprivileged users from removing or renaming a file in the directory unless they own the file or the directory; this is called the restricted deletion flag for the directory, and is commonly found on world-writable directories like /tmp. For regular files on some older systems, the bit saves the program's text image on the swap device so it will load more quickly when run; this is called the sticky bit.
This is exactly what you want - file-system level protection against a user removing the file. It is also 100% portable to all modern UNIX-like environments.
So, if you are creating your endpoint in /tmp you already have the protections you want. If you want to create the endpoint elsewhere, for example /opt/sockets, simply chmod 1777 /opt/sockets. The last part of the "trick" to getting the protections you want is to ensure that the root user is the actual owner of the endpoint. If the endpoint is owned by user fred then fred will always be able to delete the endpoint, which may well be a desirable thing. But if not, simply chown root:root /path/to/endpoint.
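Putting that together, preparing a dedicated socket directory might look like this (the path is only an example):
$ sudo mkdir -p /opt/sockets
$ sudo chmod 1777 /opt/sockets
$ ls -ld /opt/sockets     # should now show drwxrwxrwt with owner root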

cygwin slow file open

My application uses fopen to open a lot of files. On Linux, opening and reading thousands of files doesn't even take a second; in Cygwin it takes more than 5 seconds.
I think it is because of the path conversion functions in the Cygwin DLLs. The open function is a bit faster. If I use -mno-cygwin it becomes very fast, but I can't use that option.
Is there an easy way to make the Cygwin DLLs just open files, without any Linux-Windows path conversion?
It depends on how the system was mounted in the Cygwin environment.
$ mount
C:/cygwin/bin on /usr/bin type ntfs (binary,auto)
C:/cygwin/lib on /usr/lib type ntfs (binary,auto)
C:/cygwin on / type ntfs (binary,auto)
C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
D: on /cygdrive/d type iso9660 (binary,posix=0,user,noumount,auto)
The mount option "binary" makes it so CRLF <-> LF conversions are not performed on files read from the volume. This is default.
Some things you can do to speed up a Cygwin prompt are the following:
Add the following lines to your ~/.bashrc:
# eliminate long Windows pathnames from the PATH
export PATH='/bin:/usr/bin:/usr/local/bin'
# check the hash before searching the PATH directories
shopt -s checkhash
# do not search the path when .-sourcing a file
shopt -u sourcepath
Disconnect your network drives.
Disable your antivirus, or otherwise exclude Cygwin's folders from its scans.
Thorough antivirus programs scan files for malware as they're opened by programs, and this means it'll be working overtime if your script is opening thousands of files.
Use the option --cache-file="$HOME/.config.cache" when running autotools configure scripts.
This will create a file that holds prerecorded configure discoveries, most of which are usable between software builds. (This is also a good idea when using Linux).
Since the shell seems to be the bottleneck on Cygwin, a huge script that relies on starting a large number of processes will take forever; the cache cuts down on the number of processes configure needs to start (a short usage example follows this list).
Set up Cygwin's sshd and stop using Windows Command Prompt in favor of PuTTY.
PuTTY responds better to changing text on the screen, as it was built for the more mature CLI interface of *NIX.
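Going back to the --cache-file suggestion: in practice it just means passing the same cache file to every configure run, so later builds can reuse the checks recorded by earlier ones (package-a and package-b are placeholder names):
$ cd package-a && ./configure --cache-file="$HOME/.config.cache" && cd ..
$ cd package-b && ./configure --cache-file="$HOME/.config.cache" && cd ..   # reuses the cached results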

Linux folder permissions

At my office, we have a network directory structure like this:
/jobs/2004/3999-job_name/...
/jobs/2004/4000-job_name/...
The issue is that employees rename the "4000-job_name" folders (which in turn breaks other things that rely on the name being consistent with a database).
How can I stop users from renaming the parent folder while still allowing them full control of that folder's contents?
Please keep in mind that this is a Samba share that Windows users will be accessing.
I think you want to do this:
chmod a=rx /jobs #chdir and lsdir allowed, modifying not
chmod a=rwx /jobs/* #allow everything to everyone in the subdirectories
Since the directories /jobs/* are in fact files in /jobs their names cannot be changed without the write permission for /jobs. In the subdirectories of /jobs/ everyone is allowed to do anything with the commands above.
Also, be sure to set the permissions of new directories to rwx as you add them.
(edit by Bill K to fix the examples--the solution was correct but he misread the question due to the strange coloring SO added)
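For example, adding a new job directory under that scheme (the name is just illustrative, and assumes you have write access to the parent directory) would look like:
mkdir /jobs/2004/4001-new_job
chmod a=rwx /jobs/2004/4001-new_job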
The question has already been answered, so I'm just gonna make a brief remark: in your question, you use the terms "folder" and "directory" interchangeably. Those two are very different, and in my experience 99% of all problems with Unix permissions have to do with confusing the two. Remember: Unix has directories, not folders.
EDIT: a folder is two pieces of cardboard glued together that hold files. So a folder is a container: it actually, physically contains the files it holds. So, obviously, a file can only be in one container at a time. To rename a file, you not only need access to the folder, you also need access to the file. The same goes for deleting a file.
A directory, OTOH, is itself a file. [This is, in fact, exactly how directories were implemented in older Unix filesystems: just regular files with a special flag, you could even open them up in an editor and change them.] It contains a list of mappings from name to location (think phone directory, or a large warehouse). [In Unix, these mappings are called links or hardlinks.] Since the directory only contains the names of the files, not the files themselves, the same file can be present in multiple directories under different names. To change the name of a file (or more precisely to change a name of a file, since it can have more than one), you only need write access to the directory, not the file. Same to delete a file. Well, actually, you can't delete a file, you can only delete an entry in the directory – there could still be other entries in other directories pointing to that file. [That's why the syscall/library function to delete a file is called unlink and not delete: because you just remove the link, not the file itself; the file gets automatically "garbage collected" if there are no more links pointing to it.]
That's why I believe the folder metaphor for Unix directories is wrong, and even dangerous. The number one security question on one of the Unix mailing lists I'm on is "Why can A delete B's files, even though he doesn't have write access to them?", and the answer is: he only needs write access to the directory. So, because of the folder metaphor, people think that their files are safe, even when they are not. With the directory metaphor, it would be much easier to explain what's going on: if I want to delete you from my phonebook, I don't have to hunt you down and kill you, I just need a pencil!
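A quick shell illustration of the "a file can have more than one name" point, using a throwaway directory:
$ cd "$(mktemp -d)"
$ echo hello > original
$ ln original second-name     # two directory entries, one file
$ rm original                 # removes one link (one directory entry), not the data
$ cat second-name             # the file is still reachable under its other name
hello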
If you make the parent directory--/jobs/2004/--non-writable for the users, they won't be able to rename that folder.
I did the following experiment on my own machine to illustrate the point:
ndogg@seriallain:/tmp$ sudo mkdir jobs
ndogg@seriallain:/tmp$ sudo mkdir jobs/2004
ndogg@seriallain:/tmp$ sudo mkdir jobs/2004/3999-job_name/
ndogg@seriallain:/tmp$ cd jobs/2004/
ndogg@seriallain:/tmp/jobs/2004$ sudo chown ndogg.ndogg 3999-job_name/
ndogg@seriallain:/tmp/jobs/2004$ ls -alh
total 12K
drwxr-xr-x 3 root root 4.0K 2009-03-13 18:23 .
drwxr-xr-x 3 root root 4.0K 2009-03-13 18:23 ..
drwxr-xr-x 2 ndogg ndogg 4.0K 2009-03-13 18:23 3999-job_name
ndogg@seriallain:/tmp/jobs/2004$ touch 3999-job_name/foo
ndogg@seriallain:/tmp/jobs/2004$ mv 3999-job_name/ blah
mv: cannot move `3999-job_name/' to `blah': Permission denied

Setting default permissions for newly created files and sub-directories under a directory in Linux?

I have a bunch of long-running scripts and applications that are storing output results in a directory shared amongst a few users. I would like a way to make sure that every file and directory created under this shared directory automatically gets u=rwx,g=rwx,o=r permissions.
I know that I could use umask 006 at the head of my various scripts, but I don't like that approach, as many users write their own scripts and may forget to set the umask themselves.
I really just want the filesystem to set newly created files and directories with a certain permission if it is in a certain folder. Is this at all possible?
Update: I think it can be done with POSIX ACLs, using the Default ACL functionality, but it's all a bit over my head at the moment. If anybody can explain how to use Default ACLs it would probably answer this question nicely.
To get the right group ownership, you can set the setgid bit on the directory with
chmod g+rwxs dirname
This will ensure that files created in the directory are owned by the directory's group. You should then make sure everyone runs with umask 002 or 007 or something of that nature; this is why Debian and many other Linux systems are configured with per-user groups by default.
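A small demonstration of that behaviour, assuming a group called shared that you belong to:
$ mkdir results
$ chgrp shared results
$ chmod g+rwxs results
$ touch results/output.txt
$ ls -l results/output.txt    # the file's group is shared, inherited from the directory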
I don't know of a way to force the permissions you want if the user's umask is too strong.
Here's how to do it using default ACLs, at least under Linux.
First, you might need to enable ACL support on your filesystem. If you are using ext4 then it is already enabled. Other filesystems (e.g., ext3) need to be mounted with the acl option. In that case, add the option to your /etc/fstab. For example, if the directory is located on your root filesystem:
/dev/mapper/qz-root / ext3 errors=remount-ro,acl 0 1
Then remount it:
mount -oremount /
Now, use the following command to set the default ACL:
setfacl -dm u::rwx,g::rwx,o::r /shared/directory
All new files in /shared/directory should now get the desired permissions. Of course, it also depends on the application creating the file. For example, most files won't be executable by anyone from the start (depending on the mode argument to the open(2) or creat(2) call), just like when using umask. Some utilities like cp, tar, and rsync will try to preserve the permissions of the source file(s) which will mask out your default ACL if the source file was not group-writable.
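To check that the default ACL is in place and being inherited (the directory is the example one from above):
$ getfacl /shared/directory            # output should include default:user::rwx, default:group::rwx, default:other::r--
$ touch /shared/directory/newfile
$ getfacl /shared/directory/newfile    # the new file ends up rw- for user and group, r-- for other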
Hope this helps!
It's ugly, but you can use the setfacl command to achieve exactly what you want.
On a Solaris machine, I have a file that contains the acls for users and groups. Unfortunately, you have to list all of the users (at least I couldn't find a way to make this work otherwise):
user::rwx
user:user_a:rwx
user:user_b:rwx
...
group::rwx
mask:rwx
other:r-x
default:user:user_a:rwx
default:user:user_b:rwx
....
default:group::rwx
default:user::rwx
default:mask:rwx
default:other:r-x
Name the file acl.lst and fill in your real user names instead of user_X.
You can now set those acls on your directory by issuing the following command:
setfacl -f acl.lst /your/dir/here
In your shell script (or .bashrc) you may use something like:
umask 022
umask is a command that determines the settings of a mask that controls how file permissions are set for newly created files.
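For example, with that umask in effect, a newly created file ends up readable by everyone but writable only by its owner:
$ umask 022
$ touch report.txt
$ ls -l report.txt    # shows -rw-r--r--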
I don't think this will do entirely what you want, but I just wanted to throw it out there since I hadn't seen it in the other answers.
I know you can create directories with permissions in a one-liner using the -m option:
mkdir -m755 mydir
and you can also use the install command:
sudo install -C -m 755 -o owner -g group /src_dir/src_file /dst_file
