Documentation of posix_acl_access and friends? - linux

I am trying to figure out how to get/set file access information through getxattr of "system.posix_acl_access". Surprisingly, a Google search turned up no such link in the first few pages. The man pages mention the attributes, but give no details beyond saying they are standard.
Assuming I wish to actually use them, is there any better option than reading the source for coreutils? I'm wondering what other attributes, under system or otherwise, I might be missing this way.
Edited to add:
The ACLs themselves are documented in POSIX 1003.1e (available for download here). It is an abandoned standard, but since Linux implements it and coreutils (in particular, cp) uses it (at least as compiled on Ubuntu), it is relevant, standard or not.
This question, however, is not about that particular entry, but about an extensive list of all standard extended attributes (though it seems, from reading the sources, that on Linux only system.posix_acl_access and system.posix_acl_default exist).
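For concreteness, here is roughly what I am after: a minimal, untested sketch that fetches the raw attribute with getxattr(2) and then lets libacl (the POSIX 1003.1e draft API) decode the same information (the file-name handling is mine).

/* Sketch only: fetch the raw "system.posix_acl_access" blob, then let
 * libacl decode it.  Build with: gcc read_acl.c -lacl */
#include <stdio.h>
#include <sys/types.h>
#include <sys/xattr.h>   /* getxattr */
#include <sys/acl.h>     /* acl_get_file, acl_to_text (libacl) */

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";

    /* 1. The raw, largely undocumented binary blob behind the xattr. */
    ssize_t len = getxattr(path, "system.posix_acl_access", NULL, 0);
    if (len < 0)
        perror("getxattr");   /* ENODATA: no ACL beyond the plain mode bits */
    else
        printf("raw xattr: %zd bytes\n", len);

    /* 2. The same information, decoded by libacl. */
    acl_t acl = acl_get_file(path, ACL_TYPE_ACCESS);
    if (acl != NULL) {
        char *text = acl_to_text(acl, NULL);
        if (text != NULL) {
            printf("%s", text);
            acl_free(text);
        }
        acl_free(acl);
    }
    return 0;
}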

Related

Difference of FICLONE vs FICLONERANGE vs copy_file_range (for copy-on-write support)

I wonder about an efficient way to copy files (on Linux, on a FS which supports copy-on-write (COW)).
Specifically, I want my implementation to use copy-on-write if possible, but otherwise fall back to other efficient variants. I also care about server-side copy (supported by SMB, NFS and others) and about zero-copy (i.e. bypassing the CPU or memory if possible).
(This question is not really specific to any programming language. It could be C or C++, but also any other like Python, Go or whatever has bindings to the OS syscalls, or has any way to do a syscall. If this is confusing to you, just answer in C.)
It looks like ioctl_ficlonerange and ioctl_ficlone (i.e. ioctl with FICLONE or FICLONERANGE) support copy-on-write (COW). Specifically, FICLONE is used by GNU cp (here, via --reflink).
Then there is also copy_file_range, which also seems to support COW and server-side copy.
(LWN about copy_file_range.)
It sounds as if copy_file_range is more generic (e.g. it supports server-side copy; not sure whether that is supported by FICLONE).
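For concreteness, the two candidate calls look roughly like this (sketch; function names are mine, error handling trimmed):

/* FICLONE needs <linux/fs.h>; the copy_file_range wrapper is in <unistd.h>
 * since glibc 2.27 and requires _GNU_SOURCE. */
#define _GNU_SOURCE
#include <sys/ioctl.h>
#include <linux/fs.h>     /* FICLONE */
#include <unistd.h>       /* copy_file_range */

/* Reflink: dest_fd becomes a COW clone of src_fd (whole file, same FS required). */
int clone_whole_file(int src_fd, int dest_fd)
{
    return ioctl(dest_fd, FICLONE, src_fd);
}

/* copy_file_range: lets the kernel/FS choose reflink, server-side copy, or a plain copy. */
ssize_t copy_range(int src_fd, int dest_fd, size_t len)
{
    return copy_file_range(src_fd, NULL, dest_fd, NULL, len, 0);
}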
However, copy_file_range seems to have some issues.
E.g. here, Paul Eggert comments:
[copy_file_range]'s man page says it uses a size_t (not off_t) to count the number of bytes to be copied, which is a strange choice for a file-copying API.
Are there situations where FICLONE would work better/different than copy_file_range?
Are there situations where FICLONE would work better/different than FICLONERANGE?
Specifically, assume the underlying FS supports this and that you want to copy a file. I am asking about the support in these functions for:
Copy-on-write support
Server-side copy support
Zero-copy support
Are they (FICLONE, FICLONERANGE, copy_file_range) always performing exactly the same operation? (Assuming the underlying FS supports copy-on-write, and/or server-side copy.)
Or are there situations where it makes sense to use copy_file_range instead of FICLONE? (E.g. COW only works with copy_file_range but not with FICLONE, or the other way around. Or can this never happen?)
Or formulating the same question differently: Would copy_file_range always be fine, or are there situations where I would want to use FICLONE instead?
Why does GNU cp use FICLONE and not copy_file_range? (Is there a technical reason, or is this just historic?)
Related: GNU cp originally did not use reflink by default (see comment by the GNU coreutils maintainer Pádraig Brady).
However, that was changed recently (this commit, bug report 24400), i.e. COW behavior is the default now (if possible) (--reflink=auto).
Related question about Python for COW support.
Related discussion about FICLONE vs copy_file_range by Python developers. I.e. this seems to be a valid question, and it's not totally clear whether to use FICLONE or copy_file_range.
Related Syncthing documentation about the choice of methods for copying data between files, and
Syncthing issue about copy_file_range and others for efficient file copying, e.g. with COW support.
It also suggests that it is not so clear that FICLONE does the same as copy_file_range, so their solution is to simply try all of them, falling back to the next in this order:
ioctl (with FICLONE), copy_file_range, sendfile, duplicate_extents, standard.
Related issue by Go developers on the usage of copy_file_range.
It sounds as if they agree that copy_file_range is always to be preferred over sendfile.
(Question copied from here, but I don't see how it lacks focus. This question is very focused, asks a very specific thing (whether FICLONE and copy_file_range behave the same), and should be extremely clear. I formulated the question in multiple different ways to make it even clearer. It is also well researched and should already be valuable to the community as-is, with all the references. I would have been very happy to find such a question, even without answers, when I started researching the differences between FICLONE and copy_file_range.)
See the Linux vfs doc about copy_file_range, remap_file_range, FICLONERANGE, FICLONE and FIDEDUPERANGE.
Then see
vfs_copy_file_range. This first tries to call remap_file_range if possible.
FICLONE calls ioctl_file_clone (here),
and FICLONERANGE calls ioctl_file_clone_range.
ioctl_file_clone_range calls the more generic ioctl_file_clone (here).
ioctl_file_clone calls vfs_clone_file_range (here).
vfs_clone_file_range calls do_clone_file_range and that calls remap_file_range (here).
I.e. that answers the question: copy_file_range is more generic, and in any case it internally tries remap_file_range (i.e. the same path as FICLONE/FICLONERANGE) first.
I think the copy_file_range syscall is slightly newer than FICLONE though, i.e. it might be possible that copy_file_range is not available in your kernel but FICLONE is.
In any case, if copy_file_range is available, it should be the best solution.
The order done by Syncthing (ioctl (with FICLONE), copy_file_range, sendfile, duplicate_extents, standard) makes sense.
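A sketch of that fallback chain in C (simplified: a real implementation must loop on short copies, handle more errno values, and manage offsets; the function name is mine):

#define _GNU_SOURCE
#include <errno.h>
#include <sys/ioctl.h>
#include <sys/sendfile.h>
#include <linux/fs.h>
#include <unistd.h>

int copy_fd(int src, int dst, size_t size)
{
    if (ioctl(dst, FICLONE, src) == 0)             /* 1. reflink / COW clone */
        return 0;

    ssize_t n = copy_file_range(src, NULL, dst, NULL, size, 0);
    if (n >= 0)                                    /* 2. in-kernel copy (may reflink or do server-side copy) */
        return 0;                                  /*    a real version would loop until `size` bytes are done */
    if (errno != EXDEV && errno != ENOSYS && errno != EOPNOTSUPP && errno != EINVAL)
        return -1;

    n = sendfile(dst, src, NULL, size);            /* 3. copy without bouncing data through userspace */
    if (n >= 0)
        return 0;

    /* 4. last resort: a plain read()/write() loop (omitted) */
    return -1;
}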

Replacement for Linux Standard Base?

When distributing source code is not an option, the Linux Standard Base provides a mechanism for achieving binary compatibility with many Linux distros.
http://www.linuxfoundation.org/collaborate/workgroups/lsb
However, it appears that it has been somewhat abandoned as an approach:
https://github.com/LinuxStandardBase/lsb/blob/master/README.md
I'd be interested in thoughts on the best path forward for maintaining binary compatibility. The document referenced above basically says "Linux is evolving so rapidly that it's not practical to have a standard base". I can appreciate that, but what are the other options? It's difficult to google this topic because you mostly get references about the LSB.

How are the Haddock module fields Portability, Stability and Maintainer used?

In lots of Haddock-generated module documentation (e.g. Prelude), a small box in the top-right can be seen, containing portability, stability and maintainer information:
From looking at the source code to such modules and experimentation, I confirmed that this information is generated from lines like the following in the module description:
-- Maintainer : libraries@haskell.org
-- Stability : stable
-- Portability : portable
There are several strange things about this:
The fields only seem to work in this order — any fields put out of order are simply treated as part of the module description itself. This is despite the fact that the order in the source file is the opposite of the order in the generated documentation!
I have been unable to find any official documentation of these fields. There is a Cabal package property named stability, the example values of which match the values I've seen in the equivalent Haddock fields, but beyond that, I've found nothing.
So: How are these fields intended to be used, and are they documented anywhere?
In particular, I'd like to know:
The full list of commonly-used values for Portability and Stability. This HaskellWiki page has a list, but I'd like to know where this list originated from.
The criteria for deciding whether a module is portable or non-portable. In particular, the package I would like the answers to these questions for, acme-strfry, is an FFI binding to strfry, a function only available in glibc. Is the package non-portable, because it only works on glibc systems, or portable, because it does not use any Haskell language extensions? The common usage seems to imply the latter.
Why a specific order of fields is required in the source file, and why it's the opposite of the ordering in the generated documentation.
Oh, I thought those fields came from the Cabal package description. They don't seem to be documented at all in Haddock's docs. I've found this, which doesn't really answer your question, but:
http://trac.haskell.org/haddock/ticket/71
So if it's freeform anyway, why not just write "non-portable (depends on glibc)"? I've even seen "portable (depends on ghc)", which is odd. I also wonder what happens with modules that were non-portable due to non-Haskell98 extension Foo, after Foo was added to Haskell 2010.
Note that the Cabal documentation you link to also says stability is freeform. Of course, even if Haddock or Cabal were to define the acceptable values, it'd still be up to the maintainer to subjectively select one.
About the specific order, you should probably just ask at the haddock mailing list, or check the source and file a bug.
PS: strfry is an invaluable contribution to the Haskell community, but it should be pure and portable, don't you think?
Ah yes, one of the more obscure and crufty features of Haddock.
As best as I can tell, it's just an undocumented hack. There's no sane reason why the order of the fields should matter, but it does. The specific choice of formatting (i.e., as a special form inside the module comment rather than as a separate block of some kind) isn't the best either. My guess is that somebody wanted to quickly add this feature one day, so they hacked up something minimal but functioning, and left it at that. (Without bothering to document it.)
Personally, I just don't bother with these fields at all. The information is available from Cabal, so I don't bother duplicating it in Haddock as well. Perhaps some day Cabal will pass this information to Haddock automatically...

Can a LabVIEW VI tell whether one of its output terminals is wired?

In LabVIEW, is it possible to tell from within a VI whether an output terminal is wired in the calling VI? Obviously, this would depend on the calling VI, but perhaps there is some way to find the answer for the current invocation of a VI.
In C terms, this would be like defining a function that takes arguments which are pointers to where to store output parameters, but will accept NULL if the caller is not interested in that parameter.
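For readers less familiar with that C idiom, it's the usual optional-output-parameter pattern (hypothetical example, names mine):

#include <stddef.h>

/* Computes both results, but skips any the caller did not ask for.
 * Assumes n > 0. */
void stats(const double *data, size_t n, double *out_mean, double *out_max)
{
    double sum = 0.0, max = data[0];
    for (size_t i = 0; i < n; i++) {
        sum += data[i];
        if (data[i] > max)
            max = data[i];
    }
    if (out_mean)          /* caller passed NULL: not interested in the mean */
        *out_mean = sum / n;
    if (out_max)
        *out_max = max;
}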
As has been said, you can't do this in the natural way, but there's a workaround using data value references (requires LV 2009). It is the same idea as passing a NULL pointer for an output argument: the output is passed in as a data value reference (the "pointer"), which the subVI checks for Not a Reference. If it is null, it does nothing.
Here is the SubVI (the true case does nothing, of course):
And here is the calling VI:
Images are VI snippets, so you can drag and drop them onto a diagram to get the code.
I'd suggest you're going about this the wrong way. If the compiler is not smart enough to avoid the calculation on its own, make two versions of this VI: one that does the expensive calculation and one that does not. Then make a polymorphic VI that lets you switch between them. You already know at design time which version you want (because you're either wiring the output terminal or not), so just use the correct version of the polymorphic VI.
Alternatively, pass in a variable that switches on or off a Case statement for the expensive section of your calculation.
Like Underflow said, the basic answer is no.
You can have a look here to get what is probably the most official and detailed answer that will ever be provided by NI.
Extending your analogy, you can do this in LV, except LV doesn't have the concept of null that C does. You can see an example of this here.
Note that the code in the link Underflow provided will not work in an executable, because the diagrams are stripped by default when building an EXE and because the RTE does not support some of the properties and methods used there.
Sorry, I see I misunderstood the question. I thought you were asking about an input, so the idea I suggested does not apply. The restrictions I pointed do apply, though.
Why do you want to do this? There might be another solution.
Generally, no.
It is possible to do a static analysis on the code using the "scripting" features. This would require pulling the calling hierarchy, and tracking the wire references.
Pulling together a trial of this, there are some difficulties. Multiple identical subVIs on the same diagram are difficult to distinguish. Also, terminal references appear to be accessible mostly by name, which can lead to collisions with identically named terminals of other VIs.
NI has done a bit of work on a variation of this problem; check out this.
In general, the LV compiler optimizes the machine code in such a way that unused code is not even built into the executable.
This does not apply to subVIs (because there's no way of knowing that you won't try to use the value of the indicators somehow, although LV could do it if it removes the front panel when building an executable, and possibly does). There is, however, one way you can get it to apply to a subVI: inline the subVI, which should allow the compiler to see that the outputs aren't used. You can also set its priority to subroutine, which will possibly also do this, but I wouldn't recommend that.
Officially, in-lining is only available in LV 2010, but there are ways of accessing the private VI property in older versions. I wouldn't recommend it, though, and it's likely that 2010 has some optimizations in this area that older versions did not.
P.S. In general, the details of the compiling process are not exposed and vary between LV versions as NI tweaks the compiler. The whole process is supposed to have been given a major upgrade in LV 2010 and there should be a webcast on NI's site with some of the details.

Best practices to put into a man page

Is there a best practices guideline for writing man pages?
What should be included in the layout? The standard ones are:
NAME
SYNOPSIS
DESCRIPTION
EXAMPLES
SEE ALSO
There are others like OPTIONS, AUTHOR.
As a user what would be useful to have? What isn't helpful?
If you cannot find any old bound copies of 1970s Bell Labs "troff" documentation, which had some nice sections on writing man pages, :-) then I'd suggest trying out Jens's "HOWTO" on writing man pages over at his site.
The Unix 7th Edition manuals are available online in a variety of formats.
A BUGS section is nice, and an EXAMPLES section is always useful. Some man pages contain a FILES section, which lists related configuration files, or an ENVIRONMENT section detailing any influential environment variables.
To be clear, what sections or type of information are useful to users depends on the nature of the program or utility that you are documenting.
There is a canonical man page outline distributed with UNIX systems, or at least usually there is. In general, I'd put in all the fields, and include a line like "none" if it doesn't apply.
One thing which people sometimes forget to put in manual pages is the meaning of the function's return value. It's easy to forget, but the omission can make life much harder for people who have to use your function. Also, a simple code segment in the SYNOPSIS, or a good minimal working EXAMPLE, is very useful.
One thing that I often do with man pages is to try to find a related command, even though I know the thing I'm looking at doesn't do what I want. In this case, the SEE ALSO is great.
It depends on what your software does. If it is a small stand-alone application, I would certainly put the AUTHOR section in the man page so that if users find bugs they can easily find an email address to report the bug to you.
As for best practices, I know of none, other than that the man page should be concise and detailed, but not contain more information than is required; if it is just a tool, the inner workings are not needed, for example.
