I'm implementing NFS and am almost done, but the RFC's section 3.3.8 says this in its description:
mode
One of UNCHECKED, GUARDED, and EXCLUSIVE. UNCHECKED
means that the file should be created without checking
for the existence of a duplicate file in the same
directory. In this case, how.obj_attributes is a sattr3
describing the initial attributes for the file. GUARDED
specifies that the server should check for the presence
of a duplicate file before performing the create and
should fail the request with NFS3ERR_EXIST if a
duplicate file exists. If the file does not exist, the
request is performed as described for UNCHECKED.
EXCLUSIVE specifies that the server is to follow
exclusive creation semantics, using the verifier to
ensure exclusive creation of the target. No attributes
may be provided in this case, since the server may use
the target file metadata to store the createverf3
verifier.
So the question is: if the mode is UNCHECKED and the file already exists, should I just set the length of the file to zero, or should I leave the file as it is? And if it's a directory, should I remove all of its contents?
I believe the idea of CREATE with UNCHECKED is to apply the semantics of the good old Unix system call creat -- so, truncation of a file's existing contents (if any) is implied. However, I cannot find this specified all that clearly in the docs (!).
Trying to CREATE an existing directory is an error in any case -- there's a separate MKDIR for that (and in NFS 3 the same applies to special files, which get MKNOD -- CREATE is now for regular, normal, plain good old files only!-)
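If it helps to see the creat analogy spelled out: creat is defined as open with O_WRONLY | O_CREAT | O_TRUNC, so under that reading an UNCHECKED CREATE that hits an existing regular file keeps the file but truncates it to zero length, while a directory target makes the call fail outright. A minimal sketch (create_unchecked is just an illustrative name, not anything taken from the RFC):

#include <fcntl.h>
#include <sys/stat.h>

/* creat(path, mode) is, by definition, equivalent to: */
int create_unchecked(const char *path, mode_t mode)
{
    return open(path, O_WRONLY | O_CREAT | O_TRUNC, mode);
    /* An existing regular file keeps its inode but ends up with length 0;
       if path names a directory, the call fails with EISDIR rather than
       emptying it. */
}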
The Infra team in my company has provided us with a sample overthere.SshHost under 'Infrastructure' in the XL-Deploy UI that has a predefined private key file and passphrase, which are not shared with us.
We are asked to duplicate this file manually in the UI, rename it and create infra entries for our application.
How can I achieve this with puppet?
Let's say the sample file is placed under: Infrastructure/Project1/COMMONS/Template_SshHost
and I need to create an overthere.SshHost under Infrastructure/Project1/UAT/Uat_SshHost and Infrastructure/Project1/PREPROD/Preprod_SshHost by copying the sample file.
Thanks in advance!
You can sync a target file with another file accessible via the local file system by using a File resource whose source attribute specifies the path to the original. You can produce a modified copy in a variety of ways, such as by applying one or more File_line resources (from stdlib) or by applying an appropriate script via an Exec resource.
But if you go that route then you have to either
accept that the target file will be re-synced on every Puppet run, OR
set the File resource's replace attribute to false, in which case changes to the original file will not be propagated into the customized copy.
The latter is probably the more acceptable choice for most people. Its file-copying part might look something like this:
$project_dir = '/path/to/Infrastructure/Project1'
file { "${project_dir}/UAT/Uat_SshHost/overthere.SshHost":
  ensure  => 'file',
  source  => "${project_dir}/COMMONS/Template_SshHost/overthere.SshHost",
  replace => false,
}
But you might want to consider instead writing a custom type and provider for the target file. That would allow you to incorporate changes from the original template without re-syncing the file on every run, and it would give you a lot more flexibility with respect to the customizations you need to apply. It would also present a simpler interface for you to use in your manifests, which could make managing these hosts easier. But, of course, that's offset by the cost of writing and maintaining a custom type and provider. Only you can determine whether that would be a worthwhile trade-off.
When reading the documentation for rename at https://linux.die.net/man/3/rename, I found the following:
If the link named by the new argument exists, it shall be removed and old renamed to new. In this case, a link named new shall remain visible to other processes throughout the renaming operation and refer either to the file referred to by new or old before the operation began. Write access permission is required for both the directory containing old and the directory containing new.
How should I understand the following
refer either to the file referred to by new or old before the operation began
In this case a file with the same name that new points to already exists; then, after the rename operation, new should point to either the old or the new file. But the document says "before the operation began", which makes me confused.
How should I understand this? Could you give me an example?
What this phrase means is that, during rename, the old new is replaced with the new new atomically.
What this means is that there is no point during the rename operation where trying to access new will result in a file not found error. Every access will result in either the old or the new new being returned.
After the rename is done (assuming it finished successfully), of course the new new will be referenced under that name.
This highlights rename's usefulness in atomically replacing files. If you have some important file at a path such as /var/lib/important, and you need to update it such that, no matter what happens, anyone who opens /var/lib/important at any time will get either the old or the new version, this is the sequence of operations you need to perform:
Create an updated version of the file with the path /var/lib/important.new.
Flush and close /var/lib/important.new.
rename("/var/lib/important.new", "/var/lib/important");
Depending on your use case, flush /var/lib.
This guarantees that no matter what happens (process crash, power failures, kernel faults), either the old or the new file is available, complete and correct.
That last step (flushing the directory) is only necessary if you need to rely on it being the new version of the file that is available. If you do not do it, a power failure might cause the old file to re-appear after a restart. Typical uses don't bother with this step.
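Put together, a minimal C sketch of that sequence might look like the following (the paths come from the example above; the placeholder contents and the 0644 mode are assumptions, and error handling is kept short):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* 1. Write the updated contents under a temporary name on the same file system. */
    int fd = open("/var/lib/important.new", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }
    if (write(fd, "new contents\n", 13) != 13) { perror("write"); return 1; }

    /* 2. Flush the data to stable storage, then close. */
    fsync(fd);
    close(fd);

    /* 3. Atomically replace the old file; readers see either the old or the new
          version, never a missing or half-written one. */
    if (rename("/var/lib/important.new", "/var/lib/important") == -1) {
        perror("rename");
        return 1;
    }

    /* 4. Optionally flush the directory so the rename itself survives a power failure. */
    int dirfd = open("/var/lib", O_RDONLY | O_DIRECTORY);
    if (dirfd != -1) { fsync(dirfd); close(dirfd); }
    return 0;
}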
Hi everyone,
When I deploy my package to a Linux environment, I get this error:
.../Linux-2.6c2.5-i686/Ncurses/Ncurses-15766.0-0/lib/libncurses.so.5 is encountered a second time at /apollo/_env/FBAMerchantAutoRemovalDaemon-swit1na.1755067.237551097.1107633519/perl/lib/perl5.8-dist/File/Find.pm line 542.
Though I have read the Perl script, I have no idea what is wrong. I suspect my environment is tainted. Does anyone have an idea what is wrong and how I can debug this problem? Thanks a lot in advance!
Zhe
From perldoc File::Find
follow
Causes symbolic links to be followed. Since directory trees with symbolic links (followed) may contain files more than once and may even have cycles, a hash has to be built up with an entry for each file. This might be expensive both in space and time for a large directory tree. See "follow_fast" and "follow_skip" below. If either follow or follow_fast is in effect:
It is guaranteed that an lstat has been called before the user's wanted() function is called. This enables fast file checks involving _. Note that this guarantee no longer holds if follow or follow_fast are not set.
There is a variable $File::Find::fullname which holds the absolute pathname of the file with all symbolic links resolved. If the link is a dangling symbolic link, then fullname will be set to undef.
So, if, for the purposes of your application, it is OK to follow symlinks, invoke find with the follow option set:
find({ wanted => \&process, follow => 1 }, $dir);
Or, consider if one of the other follow_skip behaviors is more appropriate for your application:
follow_skip
follow_skip==1, which is the default, causes all files which are neither directories nor symbolic links to be ignored if they are about to be processed a second time. If a directory or a symbolic link are about to be processed a second time, File::Find dies.
follow_skip==0 causes File::Find to die if any file is about to be processed a second time.
follow_skip==2 causes File::Find to ignore any duplicate files and directories but to proceed normally otherwise.
It may be that follow_skip => 2 is more appropriate for your application. Only you can make that decision.
Is it possible to open a file knowing its inode?
ls -i /tmp/test/test.txt
529965 /tmp/test/test.txt
I can provide the path and the inode (529965 above), and I am looking to get a file descriptor in return.
This is not possible because it would open a loophole in the access control rules. Whether you can open a file depends not only on its own access permission bits, but on the permission bits of every containing directory. (For instance, in your example, if test.txt were mode 644 but the containing directory test were mode 700, then only root and the owner of test could open test.txt.) Inode numbers only identify the file, not the containing directories (it's possible for a file to be in more than one directory; read up on "hard links") so the kernel cannot perform a complete set of access control checks with only an inode number.
(Some Unix implementations have offered nonstandard root-only APIs to open a file by inode number, bypassing some of the access-control rules, but if current Linux has such an API, I don't know about it.)
Not exactly what you are asking, but (as hinted by zwol) both Linux and NetBSD/FreeBSD provide the ability to open files using previously created “handles”: These are inode-like persistent names that identify a file on a file system.
On *BSD (getfh and fhopen) using this is as simple as:
#include <sys/param.h>
#include <sys/mount.h>
#include <fcntl.h>   // for O_RDWR

fhandle_t file_handle;
getfh("<file_path>", &file_handle);   // Or `getfhat` for the *at-style API; returns -1 on error
// … possibly save handle as bytes somewhere and recreate it some time later …
int fd = fhopen(&file_handle, O_RDWR);   // Yields a regular file descriptor, or -1 on failure
The last call requires the caller to be root, however.
The Linux name_to_handle_at and open_by_handle_at system calls are similar, but a lot more explicit, and they require the caller to keep track of the relevant file system mount IDs/UUIDs themselves, so I'll humbly link to the detailed example in the manpage instead. Beware that the example is not complete if you are looking to persist the handles across reboots; one has to convert the received mount ID to a persistent filesystem identifier, such as a filesystem UUID, and convert that back to a mount ID later on. In essence, though, they do the same as the *BSD calls. And just like on *BSD, using the latter system call requires elevated privileges (CAP_DAC_READ_SEARCH, to be exact).
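For completeness, here is roughly what the Linux variant looks like. It reuses /tmp/test/test.txt from the question, and it assumes the file sits on an already-mounted file system so that an open directory on that mount (here /tmp) can stand in for the mount-ID bookkeeping described above; error handling is trimmed:

#define _GNU_SOURCE
#include <fcntl.h>   /* name_to_handle_at, open_by_handle_at, MAX_HANDLE_SZ */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Obtain a handle for the file (no special privileges needed). */
    struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
    fh->handle_bytes = MAX_HANDLE_SZ;
    int mount_id;
    if (name_to_handle_at(AT_FDCWD, "/tmp/test/test.txt", fh, &mount_id, 0) == -1) {
        perror("name_to_handle_at");
        return 1;
    }

    /* … the handle (and mount_id) could be serialized and restored later … */

    /* Re-open the file from the handle; this is the step that needs
       CAP_DAC_READ_SEARCH. Any descriptor on the same mount works as the
       "mount fd" argument. */
    int mount_fd = open("/tmp", O_RDONLY | O_DIRECTORY);
    int fd = open_by_handle_at(mount_fd, fh, O_RDWR);
    if (fd == -1)
        perror("open_by_handle_at");
    return 0;
}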
I need to write a Puppet script to manage the directory /foo/bar such that:
the file mode on /foo/bar is 777, but the permissions of everything within the directory are not managed by Puppet.
the owner/group on /foo/bar and everything within it is baz.
That is, the first requirement is non-recursive, but the second is recursive.
Puppet provides a single recursive attribute, which affects the behavior of owner, group, and mode simultaneously. This means that I cannot specify the desired behavior using a single resource declaration.
I tried using two resource declarations, but then I get the error
Error: Duplicate declaration: File[/foo/bar] is already declared in file /my/puppet/file.pp at line XX; cannot redeclare
Yes, this will not work. Mind that Puppet is not a scripting engine, but a tool to model your desired state.
You will therefore have to decide how you want to manage your directory: as a single file system entry (recurse => false) or as a whole tree (recurse => true). In the latter case, Puppet will always manage all properties for which you are passing values.
In your situation, you will likely have to fall back to the workaround of managing the permissions of the directory itself through a different resource, likely an exec resource that calls chmod, independently of the file resource. The latter must not pass a value for mode in this constellation, otherwise the two resources will always work against one another.
It's not ideal, but Puppet is not well equipped to deal with your specific requirements.