Perl File::NFSLock fails to acquire lock with error EACCES - multithreading

I have a file on an NFS mount which is locked and unlocked for every operation done on it.
Initially I used flock(filehandle, LOCK_EX|LOCK_NB), but that attempt failed with an I/O error.
Browsing through multiple forums, I found the suggestion to use File::NFSLock to lock/unlock files on an NFS share. File::NFSLock succeeds on the first attempt to lock and unlock the file, but when 2 processes try to acquire the lock at the same time, it fails with an EACCES error.
Here is the code. Process 1 acquired the lock with the code below and released it once done:
if ($lock = new File::NFSLock {
    file      => $handle,
    lock_type => LOCK_EX|LOCK_NB,
}) {
Now 2 processes (Process 2 and Process 3) were spawned with a fork call. When Process 2 tried to lock the same file using the same code as in step 1, locking failed with a Permission Denied error. When Process 3 tried to acquire the lock 15 seconds later, it also failed with a Permission Denied error.
When I executed fuser -v on the file, I could see 2 process IDs (Process 2 and Process 3) that had the file open for writing.
A few documents on the internet suggest that the lock cannot be set because it is blocked by an existing lock on the file, and that some systems use EAGAIN in this case while others use EACCES. That doesn't seem to be the case in my scenario, however, as Process 1 had already released the lock.
Also note that all 3 processes were spawned from the same script, so there shouldn't be an issue of a different user trying to acquire the lock.
Thank You!

File::NFSLock expects a file path, not a file handle, for its file parameter (it builds its lock file's name from that path, so a stringified handle won't work). Replacing the file handle with a file path solved the issue.
if ($lock = new File::NFSLock {
    file      => $path,   # previously, I used a file handle here
    lock_type => LOCK_EX|LOCK_NB,
}) {
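For completeness, here is a minimal self-contained sketch of the working pattern ($path and the error handling are illustrative, not from the original code):

use strict;
use warnings;
use Fcntl qw(LOCK_EX LOCK_NB);
use File::NFSLock;

my $path = '/mnt/nfs/shared.dat';    # assumption: the file on the NFS mount

if (my $lock = File::NFSLock->new({
    file      => $path,              # a path, not a handle
    lock_type => LOCK_EX | LOCK_NB,
})) {
    # ... operate on the file ...
    $lock->unlock;                   # or simply let $lock go out of scope
} else {
    warn "could not lock $path: $File::NFSLock::errstr\n";
}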

Related

Open File Description Locks confusion

As in https://www.gnu.org/software/libc/manual/html_node/Open-File-Description-Locks.html#Open-File-Description-Locks,
fcntl(F_OFD_SETLK) places a lock on an open file table entry (usually obtained via open()). Easy to understand.
But consider the following example:
https://www.gnu.org/software/libc/manual/html_node/Open-File-Description-Locks-Example.html#Open-File-Description-Locks-Example
In its example process, each thread calls open(), so each file descriptor should point to a different open file table entry.
Then calling fcntl(fd, F_OFD_SETLKW, &lck) in each thread just takes a lock on a different open file table entry, which would mean this locking is completely wrong.
But I tested on Ubuntu, and it works for some reason. What am I missing?
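For reference, the pattern in the glibc example boils down to something like this (a sketch, assuming Linux with _GNU_SOURCE; the file name and error handling are illustrative). Per fcntl(2), each OFD lock is owned by its own open file description, but it still conflicts with every other lock on the same underlying file, which is why the threads serialize:

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    /* Each thread opens the file itself, so each fd refers to a
       different open file description. */
    int fd = open("/tmp/ofd-demo", O_RDWR | O_CREAT, 0644);
    struct flock lck = {
        .l_type   = F_WRLCK,
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,    /* length 0 means "to end of file" */
        .l_pid    = 0,    /* must be 0 for OFD locks */
    };
    fcntl(fd, F_OFD_SETLKW, &lck);  /* blocks while another description holds it */
    printf("thread %ld holds the lock\n", (long)(intptr_t)arg);
    sleep(1);
    close(fd);  /* closing the description releases its lock */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)(intptr_t)1);
    pthread_create(&t2, NULL, worker, (void *)(intptr_t)2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}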

how to synchronously close a file in linux kernel cifsfs module, or in vfs modules in general

I am trying to customize the cifsfs module for my own use case. In one case I need to open a file and read its content inside the cifs_unlink() call: I use filp_open() to open it, read it, then filp_close() it, and then continue with the normal unlink() code path. I can get the content just fine.
The problem is that the filp_close() call holds off its SMB request until I am out of the unlink() function, so the unlink SMB request arrives at the server earlier than the close. That triggers 0xc0000043 NT_STATUS_SHARING_VIOLATION, and the unlink fails with device busy.
I tried calling cifs_close() instead of filp_close(); same result.
I am wondering how I can synchronously close() a file to avoid this failure.
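For reference, the open/read/close step being described might be factored like this (a sketch against a recent kernel; kernel_read() and the helper's shape are assumptions, not code from the question):

#include <linux/fs.h>

/* Read up to buflen bytes of @path into @buf; returns bytes read or -errno.
 * A sketch of the step described above, for use inside cifs_unlink(). */
static ssize_t read_file_content(const char *path, void *buf, size_t buflen)
{
        struct file *filp;
        loff_t pos = 0;
        ssize_t n;

        filp = filp_open(path, O_RDONLY, 0);
        if (IS_ERR(filp))
                return PTR_ERR(filp);
        n = kernel_read(filp, buf, buflen, &pos);
        filp_close(filp, NULL);  /* on cifs, the SMB close may be deferred */
        return n;
}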

Node JS - No such file?

I'm using this piece of code to delete a file on demand
{
    ...
    fs.access(path, (err) => err || fs.unlink(path));
    ...
}
I got this error:
Error: ENOENT: no such file or directory, unlink 'C:\ ... '
at Error (native)
Which makes no sense to me, as I literally just checked for the file's existence before attempting the unlink. I have a feeling something weird is going on behind the scenes, like file locking.
How do I rectify this error?
Also, do I need to lock the file myself before attempting to delete it, to guarantee a robust and safe delete? I won't be there to manually delete the file and restart the server every time a user tries to delete their file.
Calling fs.access() before a write or delete is not recommended. Please check the link below:
https://nodejs.org/api/fs.html#fs_fs_access_path_mode_callback
Using fs.access() to check for the accessibility of a file before calling fs.open(), fs.readFile() or fs.writeFile() is not recommended. Doing so introduces a race condition, since other processes may change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file is not accessible.
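Applied to the code in the question, that advice amounts to something like this (a sketch; adapt the error handling to your application):

const fs = require('fs');

// Attempt the unlink directly and handle the error,
// instead of checking for existence first.
fs.unlink(path, (err) => {
  if (err) {
    if (err.code === 'ENOENT') return; // file was already gone
    throw err;                         // or log/propagate as appropriate
  }
  // File deleted successfully.
});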

How to implement a semaphore that will synchronize several different copies of the same program in Linux

I have a program that can be run several times. The program uses a working directory where it saves/manipulates its runtime files and puts its results. I want to make sure that if several copies of the program run simultaneously, they won't use the same folder. To do this, I add a hidden file to the work directory when it is created, meaning the directory is in use, and delete it when the program exits. When a program wants to use a certain directory as its working directory, it checks whether that file exists; if not, it uses the directory, and otherwise it uses a directory of the same name with its process ID appended. The implementation is (in Tcl):
upon starting:
if [file exists [db_work_area]/.folder_used] {
    reg set work_area_override [db_work_area]_[pid]
}
...
exec touch ${db_wa}/.folder_used
when exiting:
if [file exists [db_work_area]/.folder_used] {
    file delete [db_work_area]/.folder_used
}
This works when the copies of the program are started one at a time. However, I am afraid that if several copies are started at the same time, there will be a synchronization problem: two programs could both check whether the file exists, both see that it doesn't, both choose that directory, and only then create the file. How can I implement a semaphore that can synchronize the several different copies of the same program?
You should not do a [file exists] check followed by a touch later; it works better to use open, which can do both in a single step with the EXCL flag.
Try something like this to create the file atomically, failing if it already exists:
if {[catch {open ${db_wa}/.folder_used {WRONLY EXCL CREAT}} fd]} {
    # error happened, the file already exists
    # pick a different area
} else {
    # just close it again, like a touch, to create the file
    close $fd
}
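Tied back to the question's code, the claim-or-fall-back step might look like this (a sketch; reg set, db_work_area, and the per-pid fallback come from the question):

set wa [db_work_area]
if {[catch {open $wa/.folder_used {WRONLY EXCL CREAT}} fd]} {
    # Another copy already claimed this directory; use a private one.
    reg set work_area_override ${wa}_[pid]
} else {
    close $fd   ;# marker created atomically; this copy owns the directory
}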

VC++ - Asynchronous Thread

I am working on a VC++ project in which my application processes a file from an input path and generates 3 output .DAT files in the destination path. I FTP these .DAT files to the destination server. After the FTP transfer, I need to delete two of the output .DAT files from the folder. I am not able to delete those files because there is an asynchronous thread running behind the process; while the thread is running, deleting fails with "Cannot delete, the file is used by another person".
I need to stop that thread and then delete the files. Multiple files can also be taken from the input path for processing.
Please help me resolve this issue ASAP; it is very high priority for me.
I don't think this is a threading issue. Instead I think your problem is that Windows won't let you delete a file that still has open handles referencing it. Make sure that you call CloseHandle on handles to the file that you want to delete first. Also ensure that whatever mechanism you are using to perform the FTP transfer doesn't have any handles open to the file you want to delete.
I don't think that forcing the background thread down will solve your problem. You can't delete the files because you're holding an open handle to those files. You must close the handle first. Create an event object and share it between your main thread and the background thread. When the background thread is done sending the files through FTP, it should set this event. Have your main thread wait on the event before deleting the files.
Background Thread:
SendFiles();
ReleaseResources(); // (might be necessary, depending on your design)
SetEvent( hFilesSentEvent );
Main Thread:
WaitForSingleObject( hFilesSentEvent, INFINITE );
DeleteFiles();
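A fuller, self-contained sketch of that handshake might look like this (Win32; SendFiles(), ReleaseResources(), and DeleteFiles() are the placeholders from the answer, stubbed out here):

#include <windows.h>

HANDLE hFilesSentEvent;                 // shared by both threads

void SendFiles() { /* FTP transfer goes here */ }
void ReleaseResources() { /* CloseHandle() any handles to the .DAT files */ }
void DeleteFiles() { /* DeleteFile() the two .DAT files */ }

DWORD WINAPI BackgroundThread(LPVOID)
{
    SendFiles();
    ReleaseResources();                 // no handles may remain open
    SetEvent(hFilesSentEvent);          // tell the main thread we're done
    return 0;
}

int main()
{
    // Auto-reset event, initially non-signaled.
    hFilesSentEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    HANDLE hThread = CreateThread(NULL, 0, BackgroundThread, NULL, 0, NULL);

    WaitForSingleObject(hFilesSentEvent, INFINITE);  // wait for the signal
    DeleteFiles();                      // safe now: the handles are closed

    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    CloseHandle(hFilesSentEvent);
    return 0;
}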
