Is there an API to set an NTFS ACL only on a particular folder without flowing permissions down?

In my environment, I have several projects that involve running NTFS ACL audit reports and various ACL cleanup activities on a number of file servers. There are two main reasons why I cannot perform these activities locally on the servers:
1) I do not have local access to the servers as they are actually owned and administered by another company.
2) They are SNAP NAS servers which run a modified Linux OS (called GuardianOS) so even if I could get local access, I'm not sure of the availability of tools to perform the operations I need.
With that out of the way, I ended up rolling my own ACL audit reporting tool that recurses down the filesystem from a specified top-level path and spits out an HTML report on all the groups/users it encounters on the ACLs, as well as the changes in permissions as it descends the tree. While developing this tool, I found that network overhead was the worst part of these operations and that multi-threading the process gave substantially better performance.
However, I'm still stuck finding a good tool to perform the ACL modifications and cleanup. The standard out-of-the-box tools (cacls, xcacls, Explorer) seem to be single-threaded and suffer a significant performance penalty when going across the network. I've looked at rolling my own multithreaded ACL-setting program, but the only API I'm familiar with is the .NET FileSystemAccessRule classes, and the problem is that if I set the permissions on a folder, it automatically wants to "flow" the permissions down. That causes a problem because I want to do the "flowing" myself using multiple threads.
I know NTFS "allows" inherited permissions to be inconsistent: I've seen cases where a folder or file is moved on the same volume between two parent folders with different inherited permissions and it keeps the old permissions marked as "inherited".
The Questions
1) Is there a way to set an ACL that applies to the current folder and all children (your standard "Applies to files, folders, and subfolders" ACL) but not have it automatically flow down to the child objects? Basically, I want to be able to tell Windows that "Yes, this ACL should be applied to the child objects but for now, just set it directly on this object".
Just to be crystal clear, I know about the ACL options for applying to "this folder only" but then I lose inheritance which is a requirement so that option is not valid for my use case.
2) Does anyone know of any good algorithms or methodologies for performing ACL modifications in a multithreaded manner? My gut feeling is that any recursive traversal of the filesystem should work in theory, especially if you're just defining a new ACL on a top-level folder and want to "clean up" all the subfolders: you'd stamp the new ACL on the top level and then recurse down, removing any explicit ACEs and "flowing" the inherited permissions down.
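A rough sketch of that traversal pattern (shown in TypeScript on Node.js purely for illustration; applyAclDirect is a hypothetical placeholder for whatever call ends up setting the ACL on a single folder without propagation, e.g. something wrapping SetFileSecurity or shelling out to FILEACL):

// Sketch only: bounded-concurrency walk of a directory tree, applying a
// per-folder ACL operation in parallel. "applyAclDirect" is a hypothetical
// placeholder for whatever mechanism sets the ACL on one folder without
// letting Windows propagate it.
import { promises as fs } from "fs";
import * as path from "path";

const MAX_CONCURRENCY = 16;   // tune to the network latency you observe

async function applyAclDirect(folder: string): Promise<void> {
  // Hypothetical: remove explicit ACEs / stamp the new ACL on this folder only.
  console.log(`would fix ACL on ${folder}`);
}

async function walk(root: string): Promise<void> {
  const queue: string[] = [root];
  let active = 0;

  await new Promise<void>((resolve, reject) => {
    const pump = () => {
      if (queue.length === 0 && active === 0) return resolve();
      while (active < MAX_CONCURRENCY && queue.length > 0) {
        const dir = queue.shift()!;
        active++;
        processDir(dir)
          .then(() => { active--; pump(); })
          .catch(reject);
      }
    };

    const processDir = async (dir: string) => {
      await applyAclDirect(dir);                       // fix this folder first
      const entries = await fs.readdir(dir, { withFileTypes: true });
      for (const e of entries) {
        if (e.isDirectory()) queue.push(path.join(dir, e.name));
      }
    };

    pump();
  });
}

walk("\\\\server\\share\\toplevel").catch(console.error);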
(FYI, this question is partially duplicated from ServerFault since it's really both a sysadmin and a programming problem. On the other question, I was asking if anyone knows of any tools that can do fast ACL setting over the network.)

Found the answer in an MS KB article:
File permissions that are set on files and folders using Active Directory Services Interface (ADSI) and the ADSI resource kit utility, ADsSecurity.DLL, do not automatically propagate down the subtree to the existing folders and files.
The reason that you cannot use ADSI to set ACEs to propagate down to existing files and folders is because ADsSecurity.dll uses the low-level SetFileSecurity function to set the security descriptor on a folder. There is no flag that can be set by using SetFileSecurity to automatically propagate the ACEs down to existing files and folders. The SE_DACL_AUTO_INHERIT_REQ control flag will only set the SE_DACL_AUTO_INHERITED flag in the security descriptor that is associated with the folder.
So I've got to use the low-level SetFileSecurity Win32 API function (which is marked obsolete in its MSDN entry) to set the ACL and that should keep it from automatically flowing down.
Of course, I'd rather tear my eyeballs out with a spoon than deal with P/Invoking some legacy Win32 API with all its warts, so I may end up just using an old NT4 tool called FILEACL that is like CACLS but has an option to use the SetFileSecurity API so changes don't automatically propagate down.

Related

How can I access files of different users and retain permission restrictions in Linux using Node.JS?

I'm trying to reimplement an existing server service in Node.JS. That service can be compared to a classic FTP server: authenticated users can read/create/modify files, but restricted to the permissions given to the matching system user name.
I'm pretty sure I can't have Node.JS run as root and switch users using seteuid() or the like, since that would break concurrency.
Instead, can I let my Node.JS process run as root and manually check permissions when accessing files? I'm thinking of some system call like "could user X create a file in directory Y?"
Otherwise, could I solve this by using user groups? Note that the service must be able to delete/modify a file created by the real system user, which may not set a special group just so that the service can access the file.
Running node as root sounds dangerous, but I assume there aren't many options left for you. Most FTP servers run as root too, for the same reason. It does mean you need to pay very close attention to the security of the code you are going to run.
Now to the question:
You are asking whether you can reimplement the UNIX permissions control in node.js. Yes, you can, but you should not! There is an almost 100% chance you will leave holes or miss edge cases that the Unix core has already taken care of.
Instead, use process.setuid(id) as you mentioned. It will not defeat concurrency, but you will need to think in terms of parallel processes rather than async code. That is extra work, but it will spare you the headache of reinventing Unix security.
Alternatively, if all of the operations you want to carry on filesystem involve shell commands, then you can simply modify them to the following pattern:
runuser -l userNameHere -c 'command'
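If the filesystem operations can be shelled out like that, a rough sketch of driving the runuser pattern from a root-running Node process might look like this (TypeScript; the user name, command, and path are placeholders, and runuser must exist on the host):

// Sketch: run a shell command as another user from a root-owned Node process.
// Requires the process to be running as root; "runuser" must exist on the box.
import { execFile } from "child_process";
import { promisify } from "util";

const execFileP = promisify(execFile);

async function runAsUser(userName: string, command: string): Promise<string> {
  // execFile passes the arguments directly rather than through a shell,
  // so only the user-supplied command itself is left to sanitise.
  const { stdout } = await execFileP("runuser", ["-l", userName, "-c", command]);
  return stdout;
}

// Example (placeholder user and path):
runAsUser("alice", "ls -l /srv/share").then(console.log).catch(console.error);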

Mercurial ACL prevents pull

Please help me understand the mechanics of Mercurial in combination with ACLs.
Our team uses Mercurial as a versioning system. The setup is very simple: two developers (one Linux, one Windows), remote repo (Linux). Every time the Windows user W checks in modifications and the Linux user L pulls afterwards, the following error message (depending on the altered file(s)) is displayed:
pulling from ssh://user@domain.com
searching for changes
adding changesets
transaction abort!
rollback completed
abort: stream ended unexpectedly (got 0 bytes, expected 4)
remote: abort: Permission denied: /repopath/.hg/store/data/paper/tmp.txt.i
This is because the file access is handled by Linux ACLs. After the ACL permissions are corrected with the setfacl command, everything runs smoothly and L is able to pull. Even if W clones the repo with correct permissions, the (new/modified) files in the .hg directory have the wrong (default) permissions. The parent folder of the repo has the correct permissions set, so I don't know where those permissions are inherited from.
Did somebody face similar problems? Thank you in advance!
On my Linux box I had to set the setgid bit on the group permissions. Essentially, when you create a directory that will become a Mercurial repository you need to use chmod g+s <repoDirectory>. That forces anything created under that directory to inherit the directory's group, regardless of the creating user's defaults for file creation. I was using standard Unix groups instead of ACL lists so I'm not sure how this will work out for you.
When creating new files inside .hg/store Mercurial copies the (classic) file permissions from the .hg/store directory itself and uses the user/group of the writing user unless overridden with something like the setgid bit (as @Eric Y mentioned). When modifying one of those files it retains the existing ownership and permissions if your user's umask allows it.
To my knowledge Mercurial does no special handling of file-system-level ACLs -- almost no tool does, which is why the ACL system also includes inheritance rules, where directories have their own ACLs and also have default ACLs that are inherited by newly created objects inside that directory. Maybe you need to look into setting the repository's default ACL in addition to setting its ACL.
That said, are you sure you really want to be using ACLs? If you're already using them and familiar with them, that's great, but if you broke them out just to get two-user access working in Mercurial then you're better off using a dedicated group (like developers) and the setgid bit, or using a single shared ssh account dev@unixhost with separate ssh private keys for each user (see the SharedSSH page in the Mercurial wiki for an example).
ACLs are very powerful, but seldom necessary.
Note to other readers: Nothing we're saying in this question has anything to do with Mercurial's ACL Extension -- that's within Mercurial and this is file system ACL level stuff.
There are only releases for Debian-based systems (like Ubuntu), but you should check out mercurial-server. It handles access control for Mercurial repos in a flexible manner, but outside of filesystem ACLs.

Write a system life saver for Ubuntu for restoring a broken system to a working state

I am thinking of writing a system life saver application for Ubuntu which can restore the system to an earlier state. This would be very useful when the system breaks.
Users would create restore points beforehand and then use them to restore their system.
This would initially be used for packages, and later on for restoring previous versions of files, somewhat like the System Restore functionality in Microsoft Windows.
Here is the idea page: Idea page
I have gone through some ideas for implementing it the way Windows does, by keeping information about the files in the filesystem; there the filesystem itself is intelligent enough to support this feature. We don't have such a filesystem readily available on Linux. One candidate is Btrfs, but using it would force users to create new partitions, which is cumbersome. So I am thinking of a "copy-on-write and save-on-delete" approach.
When a restore point is created I will create a new directory for the backup, like "backup#1", in the restore folder created earlier by the application, and then create hard links for the files that need to be restorable. Now if any file is deleted from its original location I still have its hard link, which can be used to restore the file when needed. But this approach doesn't cover modification. For modification I am thinking of creating hooks in the filesystem (using redirfs) that will call my callbacks, which will track the modifications to various parts of the files. I will keep all these changes in a database and then reverse the changes when a restore is needed.
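A rough sketch of the hard-link part of that idea (TypeScript on Node.js, purely illustrative), assuming everything stays on one filesystem, since hard links cannot cross filesystems, and noting again that they protect against deletion but not against in-place modification; the paths are placeholders:

// Sketch: create a "restore point" by hard-linking every file under srcRoot
// into a parallel tree under backupRoot. Hard links only work within one
// filesystem and only guard against deletion, not in-place modification.
import { promises as fs } from "fs";
import * as path from "path";

async function createRestorePoint(srcRoot: string, backupRoot: string): Promise<void> {
  await fs.mkdir(backupRoot, { recursive: true });
  const entries = await fs.readdir(srcRoot, { withFileTypes: true });

  for (const entry of entries) {
    const src = path.join(srcRoot, entry.name);
    const dst = path.join(backupRoot, entry.name);
    if (entry.isDirectory()) {
      await createRestorePoint(src, dst);        // recurse into subdirectories
    } else if (entry.isFile()) {
      await fs.link(src, dst);                   // same inode, no data copied
    }
    // symlinks, devices, etc. are skipped in this sketch
  }
}

createRestorePoint("/home/user/data", "/restore/backup#1").catch(console.error);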
Please suggest some efficient approaches for doing this.
Thanks
Like the comments suggested, the LVM snapshot capability provides a good basis for such an undertaking. It works at the per-partition level and saves only the sectors that have changed compared with the current system state. The LVM howto gives a good overview.
You'll have to set up the system from the very start with LVM, though, and leave sufficient space for snapshots.

In node.js how would I follow the Principle of Least Privilege?

Imagine a web application that performs two main functions:
Serves data from a file that requires higher privileges to read from
Serves data from a file that requires lower privileges to read from
My Assumption: To allow both files to be read from, I would need to run node using an account that could read both files.
If node is running under an account that can access both files, then a user who should not be able to read any file that requires higher privileges could potentially read those files due to a security flaw in the web application's code. This would lead to disastrous consequences in my imaginary web application world.
Ideally the node process could run using a minimal set of rights and then temporarily escalate those rights before accessing a system resource.
Questions: Can node temporarily escalate privileges? Or is there a better way?
If not, I'm considering running two different servers (one with higher privileges and one with lower) and then putting them both behind a proxy server that authenticates/authorizes before forwarding the request.
Thanks.
This is a tricky case indeed. In the end, file permissions are a sort of metadata. Instead of directly accessing the files, my recommendation would be to have a layer between the application and the files, in the form of a database table or anything else that can map the type of user to the file, and stream the file to the user if the mapping allows it.
That would mean the web application couldn't circumvent the file system permissions as easily. You could even set it up so that the files were not readable by the web server at all and instead were only readable by the in-between layer. All the application could do is make a call and see whether the user with the given permissions can access the files. This also lets you share the layer between multiple web applications should you choose. And because of the very specific nature of what the in-between layer does, you can enforce a very restricted set of calls.
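As a rough sketch of that in-between layer (TypeScript on Node.js), with an in-memory map standing in for the database table and a placeholder getUserFromRequest instead of real authentication:

// Sketch: an "in between" file-serving layer. The map stands in for the
// database table that decides which kind of user may read which file; how the
// user is authenticated (getUserFromRequest) is out of scope and assumed.
import * as http from "http";
import * as fs from "fs";
import * as path from "path";

const FILE_ROOT = "/srv/protected";                 // placeholder location

// userType -> file names that type may read (stand-in for a DB table)
const access: Record<string, Set<string>> = {
  admin: new Set(["secret-report.txt", "public-notes.txt"]),
  guest: new Set(["public-notes.txt"]),
};

function getUserFromRequest(req: http.IncomingMessage): string {
  // Placeholder: real code would check a session, token, etc.
  return (req.headers["x-user-type"] as string) ?? "guest";
}

http.createServer((req, res) => {
  const userType = getUserFromRequest(req);
  const name = path.basename((req.url ?? "").split("?")[0]);   // avoid path traversal
  if (!access[userType]?.has(name)) {
    res.writeHead(403).end("Forbidden");
    return;
  }
  fs.createReadStream(path.join(FILE_ROOT, name))
    .on("error", () => res.writeHead(404).end("Not found"))
    .pipe(res);
}).listen(8080);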
Now, if a lower privileged user somehow gains access to a higher privileged user's account, they'll be able to see the file, and there's no way to really get around that short of locking the user's account. However that's part of the development process.
No, I doubt node.js, out of the box, could guarantee least privilege.
It is conceivable that, should node.js be run as root, it could twiddle its operating system privileges via system calls to permit or limit access to certain resources, but then again running as root would defeat the original goal.
One possible solution might be running three instances of node: a proxy (with no special permissions) that directs calls to one or the other of two servers running at different privilege levels. (Heh, as you already mention. I really need to read to the end of posts before leaping into the fray!)
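A rough sketch of that layout (TypeScript on Node.js): an unprivileged proxy that decides, per request, which backend to forward to. The ports and the isPrivileged check are placeholders:

// Sketch: unprivileged proxy in front of two backends running as different
// users. The authorization check and the backend ports are placeholders.
import * as http from "http";

const PRIVILEGED_PORT = 9001;    // backend run by an account that can read the sensitive file
const RESTRICTED_PORT = 9002;    // backend run by a low-privilege account

function isPrivileged(req: http.IncomingMessage): boolean {
  // Placeholder: real code would validate a session or token here.
  return req.headers["x-role"] === "admin";
}

http.createServer((clientReq, clientRes) => {
  const port = isPrivileged(clientReq) ? PRIVILEGED_PORT : RESTRICTED_PORT;

  const upstream = http.request(
    { host: "127.0.0.1", port, path: clientReq.url, method: clientReq.method, headers: clientReq.headers },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    }
  );
  upstream.on("error", () => clientRes.writeHead(502).end("Bad gateway"));
  clientReq.pipe(upstream);
}).listen(8080);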

NodeJS: How would one watch a large number of files/folders on the server side for updates?

I am working on a small NodeJS application that essentially serves as a browser-based desktop search for a LAN server that multiple users can query. The users on the LAN all have access to a shared folder on that server and are traditionally used to just placing files in that folder to share them with everyone, and I want to keep that process the same.
The first solution I came across was fs.watchFile, which has been touched on in other Stack Overflow questions. In the first question, user Ivo Wetzel noted that on a Linux system fs.watchFile uses inotify but was of the opinion that fs.watchFile should not be used for large numbers of files/folders.
In another question about fs.watchFile, user tjameson first reiterated that on Linux inotify would be used by fs.watchFile and recommended just using a combination of node-inotify-plusplus and node-walk, but again stated this method should not be used for a large number of files. In a follow-up comment and response he suggested only watching the modified times of directories and then rescanning the relevant directory for file changes.
My biggest hurdle seems to be that even with tjameson's suggestion there is still a hard limit on the number of folders monitored (of which there are many, and growing). It would also have to be done recursively because the directory tree is fairly deep and can change at the lower branches, so I would have to monitor the following at every folder level (or, alternatively, monitor the modified time of the folders and then scan to find out what happened):
creation of file or subfolder
deletion of file or subfolder
move of file or subfolder
deletion of self
move of self
Assuming inotify has limits in line with what was said above, this alone seems like it may be too many watches when I have a significant number of nested subfolders. The really awesome way looks like it would involve kqueue, which I subsequently found as a topic of discussion about a better fs.watchFile in a Google group.
It seems clear to me that keeping a database of the relevant file and folder information is the appropriate course of action on the query side of things, but keeping that database synchronized with the actual state of the file system under the directories of concern will be the challenge.
So what does the community think? Is there a better or well known solution for attacking this problem that I am just unaware of? Is it best just to watch all directories of interest for a single change e.g. modified time and then scan to find out what happened? Is it better to watch all the relevant inotify alerts and modify the database appropriately? Is this not a problem which is solvable by a peasant like me?
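One possible sketch of the "watch each directory, rescan on change" idea (TypeScript on Node.js), with an in-memory index standing in for the database; per-user inotify watch limits still apply, so this assumes the directory count stays within them:

// Sketch: put one non-recursive fs.watch on every directory and, when any
// event fires, rescan just that directory and diff it against an in-memory
// index (a stand-in for the real database). New subdirectories get their own
// watcher; vanished ones are dropped.
import { promises as fsp, watch, FSWatcher } from "fs";
import * as path from "path";

const index = new Map<string, Set<string>>();     // dir -> entry names
const watchers = new Map<string, FSWatcher>();

async function scanDir(dir: string): Promise<void> {
  let names: Set<string>;
  try {
    const entries = await fsp.readdir(dir, { withFileTypes: true });
    names = new Set(entries.map((e) => e.name));
    for (const e of entries) {
      if (e.isDirectory()) await watchDir(path.join(dir, e.name));
    }
  } catch {
    // directory vanished: drop it and its watcher from the index
    index.delete(dir);
    watchers.get(dir)?.close();
    watchers.delete(dir);
    return;
  }

  const before = index.get(dir) ?? new Set<string>();
  for (const n of names) if (!before.has(n)) console.log("added:", path.join(dir, n));
  for (const n of before) if (!names.has(n)) console.log("removed:", path.join(dir, n));
  index.set(dir, names);
}

async function watchDir(dir: string): Promise<void> {
  if (watchers.has(dir)) return;
  watchers.set(dir, watch(dir, () => { scanDir(dir).catch(console.error); }));
  await scanDir(dir);
}

watchDir("/srv/shared").catch(console.error);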
Have a look at monit. I use it to monitor files for changes in my dev environment and restart my node processes when relevant project files change.
I recommend taking a look at the Dropbox API.
I implemented something similar with Ruby on the client side and nodejs on the server side.
The best approach is to keep hashes to check whether the files or folders have changed.
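A small sketch of that hashing approach (TypeScript on Node.js), with an in-memory map standing in for whatever persistent store the server keeps:

// Sketch: detect changes by hashing file contents and comparing against the
// hash recorded last time. The Map stands in for the persistent store
// (database, JSON file, ...) the server would actually keep.
import { createHash } from "crypto";
import { createReadStream } from "fs";

const knownHashes = new Map<string, string>();     // path -> last seen sha1

function hashFile(filePath: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const hash = createHash("sha1");
    createReadStream(filePath)
      .on("error", reject)
      .on("data", (chunk) => hash.update(chunk))
      .on("end", () => resolve(hash.digest("hex")));
  });
}

async function hasChanged(filePath: string): Promise<boolean> {
  const current = await hashFile(filePath);
  const previous = knownHashes.get(filePath);
  knownHashes.set(filePath, current);
  return previous !== current;
}

hasChanged("/srv/shared/report.txt").then((changed) => console.log({ changed }));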
