perforce backup question

For safety purposes, is it enough to back up all the files under the Perforce server directory?

Short answer: No
Long answer: All you need to know about backup and recovery of Perforce data is detailed in the Manual. In a nutshell for the impatient:
p4 verify //...
(Verify the integrity of your server)
p4 admin checkpoint
(Make a checkpoint; make sure that this step is successful)
back up the checkpoint file and the old journal file
(if you run Perforce with Journal files, which you should)
back up your versioned files
(that's the actual data, not to be confused with the db.* files in the Perforce server directory.)
But please do read the manual, especially about the various restore scenarios. Remember:
Backups usually work fine, it's the restore that fails.
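As a concrete, hedged illustration, a nightly cron job along these lines covers the steps above; the port, paths and backup target are assumptions for your own installation, and the manual's restore chapters still apply:
#!/bin/sh
P4PORT=perforce:1666           # assumed server address
P4ROOT=/perforce/root          # assumed server root (where db.* and the journal live)
BACKUP=/backups/perforce       # assumed backup destination

# 1. Verify the integrity of the server's archive files; abort on any error.
p4 -p "$P4PORT" verify -q //... || exit 1

# 2. Take a checkpoint; on a default setup this also rotates the journal.
p4 -p "$P4PORT" admin checkpoint || exit 1

# 3. Back up the new checkpoint and the rotated journal file.
cp "$P4ROOT"/checkpoint.* "$P4ROOT"/journal.* "$BACKUP"/

# 4. Back up the versioned files (the depot tree, not the db.* files);
#    "depot" is an assumed directory name here.
rsync -a "$P4ROOT"/depot/ "$BACKUP"/depot/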

In addition to jhwist's correct answer from the p4 manual (permalink), I would like to add a few things that I've learnt while using Perforce for several years.
...
Depending on the size of your repository, performing a verify on the p4 database can take several hours, during which it will be locked and no one will be able to perform any queries. Locking the P4 database can have several flow-on effects for your users; for example, if someone is using, or attempts to use, a P4SCC plug-in (i.e. the Visual Studio integration) during this time, it will spin and the user will eventually have to force quit to regain control.
Solution (sketched as a script after these steps):
Spawn a second instance of P4D on a different port (p4d_2)
Suspend/terminate the main instance (p4d_1).
Perform the p4 verify //... and checkpoint using p4d_2.
Backup the physical version files on the storage array.
Kill p4d_2.
Restart p4d_1.
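A rough sketch of that sequence, assuming a Unix host, P4ROOT=/perforce/root, the production server on port 1666 and the maintenance instance on port 1667 (all placeholders for your own setup):
P4ROOT=/perforce/root

p4d -r "$P4ROOT" -p 1667 -d          # 1. spawn p4d_2 on a spare port
p4 -p 1666 admin stop                # 2. stop the main instance (p4d_1)

p4 -p 1667 verify -q //... &&        # 3. verify, then checkpoint, via p4d_2
p4 -p 1667 admin checkpoint

# 4. back up the versioned files on the storage array here (site specific)

p4 -p 1667 admin stop                # 5. kill p4d_2
p4d -r "$P4ROOT" -p 1666 -d          # 6. restart p4d_1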
Also: as this will more than likely be an automated process run at night or over the weekend, I cannot stress enough that you need to thoroughly read the checkpoint log file to ensure that it was successful, otherwise you will be in a difficult spot when you need to perform a restore (read the next point). Backup should not be a set-and-forget procedure.
Further information about Perforce backup can be found in Perforce whitepaper: High Availability And Disaster Recovery Solutions For Perforce.
HTH,

FWIW I have used an additional backup strategy on my own development workstation. I have a perl script that runs every night and finds all files that I have checked out of Perforce from a given list of workspaces. That list of files is then backed up as part of my normal workstation backup procedure. The Perl script to find the files that are checked out looks pretty tricky to me. I did not write it and am not particularly familiar with Perl.
If anyone is interested, I can post the script here along with how I call it.
Note that this script was developed before Perforce came out with its "shelving" capability. I might be better off now to have a script that "shelves" my work every night (either in addition to my current backup strategy or in place of it).
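If you go the shelving route, a minimal nightly job could look something like the sketch below; the dedicated pending changelist number (12345) and the workspace name are assumptions, not part of the original script:
p4 -c MyPerforceWorkspace reopen -c 12345 //...    # move open files into the dedicated changelist
p4 -c MyPerforceWorkspace shelve -f -c 12345       # (re)shelve them on the server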
Here is the script:
# This script copies any files that are opened for any action (other than
# delete) in the specified client workspace to another specified directory.
# The directory structure of the workspace is duplicated in the target
# directory. Furthermore, a file is not copied if it already exists in the
# target directory unless the file in the workspace is newer than the one
# in the target directory.
# Note: This script looks at *all* pending changelists in the specified
# workspace.
# Note: This script uses the client specification Root to get the local
# pathname of the files. So if you are using a substituted drive for the
# client root, it must be properly substituted before running this script.
# Argument 1: Client workspace name
# Argument 2: Target directory (full path)
use File::Path;
# use File::Copy;
use File::Basename;
use Win32;
if ($#ARGV != 1) {
    die("usage: $0 client_name target_directory\n");
}
my $client = shift(@ARGV);
my $target_dir = shift(@ARGV);
my @opened_files = ();
my $client_root = "";
my $files_copied = 0;
# I need to know the root directory of the client, so that I can derive the
# local pathname of the file. Strange that "p4 -ztag opened" doesn't give
# me the local pathname; I would have expected it to.
open(CLIENT_SPEC, "p4 -c $client client -o|")
    || die("Cannot retrieve client specification: $!");
while (<CLIENT_SPEC>) {
    my ($tag, $value) = split(/\s/, $_, 2);
    if ($tag eq "Root:") {
        $value = chop_line($value);
        $client_root = $value;
    }
}
close(CLIENT_SPEC);
if ($client_root eq "") {
    die("Unable to determine root of client $client\n");
} elsif (substr($client_root, -1) ne "\\") {
    $client_root = $client_root . "\\";
}
# Use the -ztag option so that we can get the client file path as well as
# the depot path.
open(OPENED_FILES, "p4 -c $client -ztag opened|")
    || die("Cannot get list of opened files: $!");
while (<OPENED_FILES>) {
    # What we do is to get the client path and append it onto the
    # @opened_files array. Then when we get the action, if it is a delete,
    # we pop the last entry back off the array. This assumes that the tags
    # come out with clientFile before action.
    $_ = chop_line($_);
    my ($prefix, $tag, $value) = split(/\s/, $_, 3);
    if ($tag eq "clientFile") {
        push(@opened_files, $value);
    }
    if ( ($tag eq "action") && ($value eq "delete") ) {
        pop(@opened_files);
    }
}
close(OPENED_FILES);
# Okay, now we have the list of opened files. Process each file to
# copy it to the destination.
foreach my $client_path (@opened_files) {
    # Trim off the client name and replace it with the client root
    # directory. Also replace forward slashes with backslashes.
    $client_path = substr($client_path, length($client) + 3);
    $client_path =~ s/\//\\/g;
    my $local_path = $client_root . $client_path;
    # Okay, now $client_path is the partial pathname starting at the
    # client's root. That's the path we also want to use starting at the
    # target path for the destination.
    my $dest_path = $target_dir . "\\" . $client_path;
    my $copy_it = 0;
    if (-e $dest_path) {
        # Target exists. Is the local path newer?
        my @target_stat = stat($dest_path);
        my @local_stat = stat($local_path);
        if ($local_stat[9] > $target_stat[9]) {
            $copy_it = 1;
        }
    } else {
        # Target does not exist, definitely copy it. But we may have to
        # create some directories. Use File::Path to do that.
        my ($basename, $dest_dir) = fileparse($dest_path);
        if (! (-e $dest_dir)) {
            mkpath($dest_dir) || die("Cannot create directory $dest_dir\n");
        }
        $copy_it = 1;
    }
    if ($copy_it) {
        Win32::CopyFile($local_path, $dest_path, 1)
            || warn("Could not copy file $local_path: $!\n");
        $files_copied++;
    }
}
print("$files_copied files copied.\n");
exit(0);
################ Subroutines #########################################
# chop_line removes any trailing carriage-returns or newlines from its
# argument and returns the possibly-modified string.
sub chop_line {
    my $string = shift;
    $string =~ s/[\r\n]*\z//;
    return $string;
}
To run:
REM Make sure that we are pointing to the current Perforce server
p4 set -s P4PORT=MyPerforceServer:ThePortThatPerforceIsOn
p4 set P4CLIENT=MyPerforceWorkspace
REM Copy checked out files to a local directory that will be backed up
.\p4backup.pl MyPerforceWorkspace c:\PerforceBackups\MyPerforceWorkspace_backup

Related

SSH Create Directory In Remote Site Using Perl Script

Previously I asked a question here on how to determine whether a path on a remote site is a directory, using SSH. I wish to create the directory if the path is not a directory. I have tried the following code in two ways, but it does not seem to be working. Thanks to everyone who helps here.
use File::Path;
my $destination_path = "<path>";
my $ssh = "/usr/bin/ssh";
my $user_id = getpwuid( $< );
my $site = "<site_name>";
my $host = "rsync.$site.com";
if (system("$ssh $user_id\@$host [ -d $destination_path ]") == 0) {
    print "It is a directory.\n";
} else {
    print "It is not a directory.\n";
    # First way
    if (system("$ssh $user_id\@$host [ make_path ($d_path_full) ]") == 0) {
    # Second way
    if (system("$ssh $user_id\@$host [ mkdir -p $d_path_full ]") == 0) {
        print "Create directory successfully.\n";
    } else {
        print "Create directory fail.\n";
    }
}
The bracket, a single [ or the pair [ ], is a builtin in bash which is a test operator (see man test), and the last use of it is incorrect. But you don't need it to make a directory:
use warnings;
use strict;
use feature 'say';
my $ssh = '/usr/bin/ssh';
my $user_id = ...
my $host = ...
my $to = quotemeta $user_id.'@'.$host;
my $cmd = 'mkdir -p TEST_MKDIR_OVER_SSH';
system("$ssh $to $cmd") == 0 or die "Can't mkdir: $!";
The mkdir is quiet with -p if a directory already exists, and it returns success, which also defeats the purpose of [ ] (if that was the intent). But an actual error -- a file with that name exists, no permissions on the path, etc. -- does make its way back to the script, as you'd want, and a string with the error message is in $!, so please test for that.
If you simply wish to know whether the directory already existed, put back your test branch, or just omit -p and analyze $! for what that message is on your system.
As for the second attempt: the command to be executed runs on the remote system and has nothing to do with this script anymore (apart from interpolated variables). So Perl functions or libraries from this script make no sense in that command.
For the next step I suggest to look into modules for (preparing and) running external commands, that are much more helpful than the bare system.
Some, from simple to more capable: IPC::System::Simple, Capture::Tiny, IPC::Run3, IPC::Run. Also see String::ShellQuote, to prepare commands and avoid quoting issues, shell injection bugs, and other problems. This recent post is a good example, and there's a lot more out there.
I would recommend using a proper module to do SSH, namely Net::OpenSSH, a SSH client built upon OpenSSH.
While being implemented in pure Perl, it is fast and stable, and has no mandatory dependency (apart of course, OpenSSH binaries).
As explained in the docs, it will, under certain conditions, automatically quote any shell metacharacters in the command lists.
The following code demonstrates how it can respond to your use case. It relies on the same shortcut explained by @zdim, using mkdir -p:
if the directory does not exist, it gets created (if that fails, an error happens)
if it already exists, nothing happens
if a file exists with the target name, an error happens
Code :
use warnings;
use strict;
use Net::OpenSSH;
my $host = ...;
my $user_id = ...;
my $destination_path = ...;
# connect
my $ssh = Net::OpenSSH->new($host, user => $user_id);
$ssh->error and die "Can't ssh to $host: " . $ssh->error;
# try to create the directory
if ( $ssh->system('mkdir', '-p', $destination_path) ) {
    print "dir created!\n";
} else {
    print "can't mkdir $destination_path on $host: " . $ssh->error . "\n";
}
# disconnect
undef $ssh;

Linux script variables to SCP and delete files

I am looking to set up a script to do the following:
1st: SCP a directory on the first day of the month to another server
2nd: Delete the directory after successful transfer
The directory I need to move will always have a different name, and the lowest numbered one is always the one that needs to move:
2018/files/02/
2018/files/03/
So what I'm looking to write up is something like:
scp /2018/files/% user@host:/backups/2018/files/
{where % = lowest num} &&
rm -rf /2018/files/%
{where % = lowest num} &&
exit
Thanks for any advice
If you are open to using Ruby, you could accomplish it with something like this:
def file_number(filespec)
  filespec.split('/').last.to_i
end

directories = Dir['/2018/files/*'].select { |f| File.directory?(f) }
sorted_dirs = directories.sort_by { |dir| file_number(dir) }
dir_to_copy = sorted_dirs.first
destination_dir = File.join('/', 'backups', dir_to_copy)

`scp -r #{dir_to_copy} user@host:#{destination_dir}`
`rm -rf #{dir_to_copy}`
I have not tested this, but if you have any problems, let me know what they are and I can work through it with you.
While using shell scripting eliminates the need for the Ruby interpreter, to me the code is not nearly as straightforward.
In very large directory lists (maybe 10,000's?) the sort might be intolerably slow, and another method would be needed to optimize for speed.
I would caution you against doing an unconditional rm -rf after the backup -- that seems really risky to me.
The big challenge here is to actually find the right files to copy and, shudder, delete. So let us call that step 0.
Let's start with some boilerplate:
sourceD=/2018/files/
targetD=/backups/2018/files/
And a little assertion, which bails out of the script if $1 does not name a directory.
assert_directory() { (cd ${1:?directory name}) || exit; }
Step 0, Identify directory:
assert_directory $sourceD
to_be_archived=$(
    # source must be two characters, hence "??"
    # source must be a directory, hence trailing "/"
    # the glob expands in sorted order, so after "set --" our source is $1
    set -- $sourceD/??/ &&
    assert_directory "$1"
    echo ${1:?nothing found}
) || exit
This is only a couple of lines of condensed code. Note that this may
cause trouble if you (accidentally) run this multiple times in a row.
Step 1, Copy files now appears to be the easy part.
scp -r ${to_be_archived:?} user@host:${targetD:?}
This is a simple method for copying files, but also slow and risky.
Look up rsync over ssh for alternatives.
Step 2, Remove
The rm -fr line will do the job, but I won't include that here.
We are missing an essential step, as we need to make sure that our
files have arrived safely. Again, rsync has options for that.
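For example, a hedged rsync-over-ssh sketch that only removes source files it has transferred successfully (the user and host are placeholders):
dir=$(basename "${to_be_archived:?}")
rsync -a --remove-source-files -e ssh "${to_be_archived:?}" "user@host:${targetD:?}$dir/" &&
    find "${to_be_archived:?}" -type d -empty -delete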
In summary:
assert_directory() { (cd ${1:?directory name}) || exit; }
assert_directory $sourceD
to_be_archived=$(
    set -- $sourceD/??/ &&
    assert_directory "$1"
    echo ${1:?nothing found}
) || exit
This will give you the first two-character directory name (if one exists) in sourceD, or abort the running script. It will break if $sourceD contains spaces.

Can I customize the svnadmin create process?

I have many different repositories setup on my server. I need to have an identical post-commit hook file in every one of those repos. Simple enough for existing, but is there a way to have calls to svnadmin create automatically copy a post-commit stub file to the new hooks directory? Essentially I'm looking for a post-svnadmin-create hook. Thanks!
I think your best bet would be to wrap the call to svnadmin create in a script that creates the hooks after the repo.
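For instance, a minimal sketch of that idea, using a separate command name rather than renaming the real binary (the /stub_path location and script name are assumptions):
#!/bin/sh
# svn-create-repo: wrap "svnadmin create" and install our stock hook afterwards
repo=${1:?usage: svn-create-repo /path/to/repo}
svnadmin create "$repo" || exit
cp /stub_path/post-commit "$repo/hooks/post-commit"
chmod 755 "$repo/hooks/post-commit"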
Agreed as long as there is not some built-in way, which there seems not to be. I would have expected subversion to sport something like the customizable skeleton directory for new Linux users. Too bad.
Here is my wrapper with comments if anyone can find it useful - should be fairly extendable. If anyone notices any glaring gotchas in it, don't hesitate - I'm neither a bash nor Linux expert but I think I got most of it covered, and it works :)
# -----------------------------------------------------------------------
# A wrapper for svnadmin to allow post operations following repo creation - copying custom
# hook files into repo in this case. This should be run as root.
# capture input args; note that args[0] == $1 (the script name is not captured here)
args=("$@");
# redirect args to svnadmin in all cases - this script should not modify the behavior of svnadmin.
# note: the original binary "/binary_path/svnadmin" has been renamed "/binary_path/svnadmin-wrapped" and
# this script was then named "/binary_path/svnadmin" and given identical user:group & permissions as
# the original.
sudo -u svnuser svnadmin-wrapped "${args[@]}";
# capture return code so we can return on exit; svnadmin returns 0 for success
eCode=$?;
# find out if sub-command to svnadmin was "create" and, if so, note the index of the directory arg,
# which is not necessarily going to be in the same position each time (options may be specified
# before the sub-command).
path_idx=0;
found=0;
for i in "${args[@]}"
do
    # track index; pre-increment
    ((path_idx++));
    if [ "$i" == "create" ]
    then
        # found repo path
        ((found++));
        break;
    fi
done
# we now know if the subcommand was create and where the repo path is - finish up as needed.
# note that this block assumes that our hook file stubs are /stub_path/ (owned by root)
# and that there exists a custom log file at /stub_path/cust-log (also owned by root).
d=`date`;
if [ $found != 0 ]
then
    # check that the command succeeded
    if [ $eCode == 0 ]
    then
        # check that the directory exists
        if [ -d "${args[$path_idx]}/hooks" ]
        then
            # copy our custom hooks into place
            sudo -u svnuser cp "/stub_path/post-commit" "${args[$path_idx]}/hooks/post-commit";
            sudo -u svnuser cp "/stub_path/post-revprop-change" "${args[$path_idx]}/hooks/post-revprop-change";
        else
            # unlikely failure; set custom error code here; log issue
            echo "$d svnadmin wrapper error: svnadmin 'create' succeeded but the 'hooks' directory was not found! Params: ${args[@]}" >> "/stub_path/cust-log";
            let "eCode=1325";
        fi
    else
        # tried to create but svnadmin failed; log issue
        echo "$d svnadmin wrapper error: svnadmin 'create' was called but failed! Params: ${args[@]}" >> "/stub_path/cust-log";
    fi
fi
exit $eCode;
-Thanks to all who host and post!

find based filename autocomplete in Bash script

There is a command line feature I've been wanting for a long time, and I've thought about how to best realize it, but I got nothing...
So what I'd like to have is when I start typing a filename and hit tab, for example:
# git add Foo<tab>
I'd like it to run a find . -name "*$1*" and basically autocomplete the complete path to the matched file onto my command line.
What I have so far:
I know I'll have to write a function that will call the app with the parameters I want,
for example git add. After that it needs to catch the tab-keystroke event and do the find mentioned above, and display the results if many, or fill in the result if one.
What I haven't been able to figure out:
How to catch the tab-key event from within a function.
So basically in pseudocode:
gadd() {git add autocomplete_file_search($1)}
autocomplete_file_search(keyword) {
if( tab-key-pressed ){
files = find . -name "*$1*";
if( filecount > 1 ) {
show list;
}
if( files == 1 ) {
return files
}
}
}
Any ideas?
thanks.
Matching anywhere in the filename is rather complicated, and I'm not sure it's really all that useful. Matching at the start of filenames makes more sense and is much easier to implement, even recursively.
Now, you mentioned find as a requirement, but bash (since version 4.0) can also find files recursively, and it should be more efficient to let bash do that part. To match recursively in bash, you enable the globstar shell option by running shopt -s globstar, then two consecutive asterisks, **, will match recursively.
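For example (a quick illustration, independent of git):
shopt -s globstar
printf '%s\n' **/*.c    # matches .c files at any depth below the current directory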
Next up, given that you want to match files recursively inside a git repository, we best have a way to detect that we're actually in a git repository, otherwise, if you accidentally trigger it in / for instance, your prompt will hang while waiting for bash to search through your entire filesystem. The following function should be fairly efficient at determining if we're inside a git repository. Given the current working directory, e.g. /foo/bar/baz, it'll look for /foo/bar/baz/.git, /foo/bar/.git, /foo/.git, /.git and return true if it finds one, false otherwise.
isgit() {
    local p=$PWD
    while [[ $p ]]; do
        [[ -d $p/.git ]] && return
        p=${p%/*}
    done
    return 1
}
For simplicity, we'll create a gadd command to add the completions for. A completion function can only be applied to the first word of the command. E.g. we can add completion for git, but not git add, thus we'll make a new command that turns git add into one word.
gadd() {
    git add "$@"
}
Now for the actual completion function. When triggered by hitting TAB, the function will be invoked with three arguments. $1 is the command being completed, $2 is the current word of the command line being completed, and $3 is the previous word on the line. So the files we want to search will be matched by the glob **/"$2"*; all files starting with "$2". We iterate these filenames, and append them to the COMPREPLY array. If the COMPREPLY array only contains one value when the function is done, the word will be replaced by that value. If it contains more than one value, hit tab another time to get a list of all the matches.
shopt -s globstar

_git_add_complete() {
    local file
    isgit || return
    for file in **/"$2"*; do
        # If the glob doesn't match, we'll get the glob itself, so make sure
        # we have an existing file
        [[ -e $file ]] || continue
        # If it's a directory, add a trailing /
        [[ -d $file ]] && file+=/
        COMPREPLY+=( "$file" )
    done
}
complete -F _git_add_complete gadd
Add the above three code blocks to your ~/.bashrc, then open a new terminal, enter a git repository and try gadd something<tab>.
You should take a look at this introduction to bash completion. Briefly, bash has a system for configuring and extending tab completion. Other shells do this, too, and each one has a different way to set it up. Using this system it is not necessary to do everything yourself and adding custom argument completion to a command is relatively easy.
Does this work?
$ cat .bash_completion
_foo()
{
    local cur files
    cur=${COMP_WORDS[COMP_CWORD]}
    files=$(for x in `find -type f`; do echo ${x}; done)
    COMPREPLY=( $( compgen -W "${files}" -- ${cur} ) )
    return 0
}
complete -F _foo foo
$ . /etc/bash_completion
$ foo ./[tab]
I wrote git-number so that I never have to hit tab when specifying files to git.
With git-number I can use numbers to represent the filenames that I want git to handle.

Relinking an anonymous (unlinked but open) file

In Unix, it's possible to create a handle to an anonymous file by, e.g., creating and opening it with creat() and then removing the directory link with unlink() - leaving you with a file with an inode and storage but no possible way to re-open it. Such files are often used as temp files (and typically this is what tmpfile() returns to you).
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
When poking through the relevant system call functions I expected to find a version of link() called flink() (compare with chmod()/fchmod()) but, at least on Linux this doesn't exist.
Bonus points for telling me how to create the anonymous file without briefly exposing a filename in the disk's directory structure.
A patch for a proposed Linux flink() system call was submitted several years ago, but when Linus stated "there is no way in HELL we can do this securely without major other incursions", that pretty much ended the debate on whether to add this.
Update: As of Linux 3.11, it is now possible to create a file with no directory entry using open() with the new O_TMPFILE flag, and link it into the filesystem once it is fully formed using linkat() on /proc/self/fd/fd with the AT_SYMLINK_FOLLOW flag.
The following example is provided on the open() manual page:
char path[PATH_MAX];
fd = open("/path/to/dir", O_TMPFILE | O_RDWR, S_IRUSR | S_IWUSR);
/* File I/O on 'fd'... */
snprintf(path, PATH_MAX, "/proc/self/fd/%d", fd);
linkat(AT_FDCWD, path, AT_FDCWD, "/path/for/file", AT_SYMLINK_FOLLOW);
Note that linkat() will not allow open files to be re-attached after the last link is removed with unlink().
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
If this is your only goal, you can achieve this in a much simpler and more widely used manner. If you are outputting to a.dat:
Open a.dat.part for write.
Write your data.
Rename a.dat.part to a.dat.
I can understand wanting to be neat, but unlinking a file and relinking it just to be "neat" is kind of silly.
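In shell terms the pattern is just this (the filenames and the writer command are examples only); because mv to a path on the same filesystem is a rename(2), readers see either the old file or the complete new one, never a half-written file:
generate_data > a.dat.part &&    # "generate_data" stands in for whatever produces your output
    mv -f a.dat.part a.dat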
This question on serverfault seems to indicate that this kind of re-linking is unsafe and not supported.
Thanks to #mark4o posting about linkat(2), see his answer for details.
I wanted to give it a try to see what actually happened when trying to actually link an anonymous file back into the filesystem it is stored on. (often /tmp, e.g. for video data that firefox is playing).
As of Linux 3.16, there still appears to be no way to undelete a deleted file that's still held open. Neither AT_SYMLINK_FOLLOW nor AT_EMPTY_PATH for linkat(2) do the trick for deleted files that used to have a name, even as root.
The only alternative is tail -c +1 -f /proc/19044/fd/1 > data.recov, which makes a separate copy, and you have to kill it manually when it's done.
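If you need to locate that /proc path first, something like this sketch works on Linux (PID and FD are placeholders to be read off the lsof output):
lsof -nP +L1                                # lists open files whose link count is 0 ("deleted")
tail -c +1 -f /proc/PID/fd/FD > data.recov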
Here's the perl wrapper I cooked up for testing. Use strace -eopen,linkat linkat.pl - </proc/.../fd/123 newname to verify that your system still can't undelete open files. (Same applies even with sudo). Obviously you should read code you find on the Internet before running it, or use a sandboxed account.
#!/usr/bin/perl -w
# 2015 Peter Cordes <peter@cordes.ca>
# public domain. If it breaks, you get to keep both pieces. Share and enjoy
# Linux-only linkat(2) wrapper (opens "." to get a directory FD for relative paths)
if ($#ARGV != 1) {
    print "wrong number of args. Usage:\n";
    print "linkat old new \t# will use AT_SYMLINK_FOLLOW\n";
    print "linkat - <old new\t# to use the AT_EMPTY_PATH flag (requires root, and still doesn't re-link arbitrary files)\n";
    exit(1);
}
# use POSIX qw(linkat AT_EMPTY_PATH AT_SYMLINK_FOLLOW); #nope, not even POSIX linkat is there
require 'syscall.ph';
use Errno;
# /usr/include/linux/fcntl.h
# #define AT_SYMLINK_NOFOLLOW 0x100 /* Do not follow symbolic links. */
# #define AT_SYMLINK_FOLLOW 0x400 /* Follow symbolic links. */
# #define AT_EMPTY_PATH 0x1000 /* Allow empty relative pathname */
unless (defined &AT_SYMLINK_NOFOLLOW) { sub AT_SYMLINK_NOFOLLOW() { 0x0100 } }
unless (defined &AT_SYMLINK_FOLLOW ) { sub AT_SYMLINK_FOLLOW () { 0x0400 } }
unless (defined &AT_EMPTY_PATH ) { sub AT_EMPTY_PATH () { 0x1000 } }
sub my_linkat ($$$$$) {
    # tmp copies: perl doesn't know that the string args won't be modified.
    my ($oldp, $newp, $flags) = ($_[1], $_[3], $_[4]);
    return !syscall(&SYS_linkat, fileno($_[0]), $oldp, fileno($_[2]), $newp, $flags);
}
sub linkat_dotpaths ($$$) {
    open(DOTFD, ".") or die "open . $!";
    my $ret = my_linkat(DOTFD, $_[0], DOTFD, $_[1], $_[2]);
    close DOTFD;
    return $ret;
}

sub link_stdin ($) {
    my ($newp, ) = @_;
    open(DOTFD, ".") or die "open . $!";
    my $ret = my_linkat(0, "", DOTFD, $newp, &AT_EMPTY_PATH);
    close DOTFD;
    return $ret;
}

sub linkat_follow_dotpaths ($$) {
    return linkat_dotpaths($_[0], $_[1], &AT_SYMLINK_FOLLOW);
}
## main
my $oldp = $ARGV[0];
my $newp = $ARGV[1];
# link($oldp, $newp) or die "$!";
# my_linkat(fileno(DIRFD), $oldp, fileno(DIRFD), $newp, AT_SYMLINK_FOLLOW) or die "$!";
if ($oldp eq '-') {
    print "linking stdin to '$newp'. You will get ENOENT without root (or CAP_DAC_READ_SEARCH). Even then doesn't work when links=0\n";
    $ret = link_stdin( $newp );
} else {
    $ret = linkat_follow_dotpaths($oldp, $newp);
}
# either way, you still can't re-link deleted files (tested Linux 3.16 and 4.2).
# print STDERR
die "error: linkat: $!.\n" . ($!{ENOENT} ? "ENOENT is the error you get when trying to re-link a deleted file\n" : '') unless $ret;
# if you want to see exactly what happened, run
# strace -eopen,linkat linkat.pl
Clearly, this is possible -- fsck does it, for example. However, fsck does it with major localized file system mojo and will clearly not be portable, nor executable as an unprivileged user. It's similar to the debugfs comment above.
Writing that flink(2) call would be an interesting exercise. As ijw points out, it would offer some advantages over current practice of temporary file renaming (rename, note, is guaranteed atomic).
Kind of late to the game but I just found http://computer-forensics.sans.org/blog/2009/01/27/recovering-open-but-unlinked-file-data which may answer the question. I haven't tested it, though, so YMMV. It looks sound.
