Linux inotify one-shot and event mask problem

I'm trying to use inotify on Linux (RHEL 5, kernel 2.6.18, glibc 2.5-18). I did not define the watch as one-shot, but for some reason it behaves as if I did. The impact is that I have to re-add the watch after each event. Has anyone used inotify? Another problem is that the mask returned in the event object contains only one flag: IN_ONE_SHOT.

Write the smallest example you can and test that. If it demonstrates the behaviour you are talking about then add it to your question. If it behaves normally then add a little more of your code and test again. Keep repeating until you have reproduced the error or you have your code working. Often I find that building a toy program tells me exactly what I am doing wrong that I could not see in a larger program.

It is probable that inotify is implicitly deleting the watch because the file is being deleted. The behaviour is referred to, somewhat subtly, by the manual page (see the section on the IN_IGNORED event). You can confirm whether this is happening by testing whether the IN_IGNORED flag is set in the inotify_event populated by your call to read.
See also "inotify delete_self when modifying and saving a file" for why the file may be deleted, without your knowledge or action, during what you think is just a modification.
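To see this for yourself, here is a minimal, hedged sketch (not the original poster's code; the watched path is just a placeholder) that watches a file, prints each event mask, and re-adds the watch whenever the kernel signals IN_IGNORED:

```c
/* Minimal sketch: watch a file and re-add the watch when the kernel
 * removes it, which it signals with IN_IGNORED. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "watched.txt"; /* placeholder file */
    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    int wd = inotify_add_watch(fd, path, IN_MODIFY | IN_DELETE_SELF);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    /* Buffer aligned as recommended by the inotify(7) man page. */
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));

    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));
        if (len <= 0) break;
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *)p;
            if (ev->mask & IN_IGNORED) {
                /* The kernel dropped the watch (e.g. the file was deleted
                 * and replaced by the editor); add it again. */
                wd = inotify_add_watch(fd, path, IN_MODIFY | IN_DELETE_SELF);
                printf("watch re-added: wd=%d\n", wd);
            } else {
                printf("event mask=0x%x\n", (unsigned)ev->mask);
            }
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    return 0;
}
```

If your editor saves by writing a temporary file and renaming it over the original, you will see IN_IGNORED on every save, which matches the behaviour described in the question.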

Related

RPG program error: Error MCH3601 was detected in file

We have been facing a very strange issue with one of our RPGLE programs that bombs intermittently with the above error.
This happens specifically at a line where a write operation is performed to a subfile record format. I have debugged and checked all the values assigned to variables during runtime and could not find any issues. As per the IBM page https://www.ibm.com/support/pages/node/644069, I can only assume that this might be related to the parameter definitions of the programs called within the RPG. But I have checked the parameters of each and every prototyped program call and everything seems to be in sync.
Can someone please point me in the right direction to find the root cause of this problem?
But I have checked the parameters of each and every prototyped program call
Assuming you're using prototypes properly, i.e. there is one prototype defined in a separate source member and it is /INCLUDEd into BOTH the caller and the callee...
Then prototyped calls aren't the problem, as long as you're properly handling any *OMIT and *NOPASS parameters.
Look at any old-style CALL or CALLB calls, and anyplace you're not using prototypes properly, meaning there's an explicit PR coded in both caller & callee.
Note that it's not just old-style calls made by the program that bombs; it's calls made anywhere down the call chain.
And if the program is repeatedly called with LR=*OFF or without reclaiming resources, then it could also be any old-style calls up the call chain.
Lastly, old-style calls include any made by CL or CLLE programs.
Good luck!

Monitoring Process Syscalls in Live Environment

I've been working on a project for a little while, and the first step is building a library of syscall traces for processes. Essentially, what I'm trying to do is have a system wherein every time a process requests an OS service via a syscall, relevant information about the event (calling process, time, syscall name) gets logged to a file.
Theoretically, this sounds like a simple enough thing to do; however, implementing it is becoming more of a pain as time goes on. I suppose the main thing causing issues for me is a general lack of knowing where to start the implementation.
Initially, I thought that this could all be handled by adding a few lines of code to the kernel entry point, but after digging through entry_64.S for a little while, I came to the conclusion that there must be an easier way. The next idea I had was to overwrite all the services pointed to by sys_call_table with my own service that did the logging and then called the original service. But, it turns out, there are some difficulties with this method on Linux kernel 5.4.18, because sys_call_table is no longer exported. And even when recompiling the kernel so that sys_call_table is exported, the table is in a memory-protected location. Lastly, I've been experimenting with auditd. Specifically, I followed this link, but it doesn't seem to be working (when I executed the kill command, there was only a corresponding result in ausearch about 50% of the time, based on timestamps).
I'm getting a little burned out by all these dead-ends, and am really hoping to finally have this first stage in my project up and running. Does anyone have any pointers as to what I should try?
Solution: BPFTrace was exactly what I was looking for.
I used BPFTrace to log every time the kernel began execution of a syscall (excluding those initiated by BPFTrace itself).
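For comparison, since the accepted tool here was BPFTrace rather than C, here is a hedged, per-process alternative: a minimal strace-style tracer built on ptrace(PTRACE_SYSCALL). It follows only a single child (not the whole system, which is what BPFTrace gives you), it assumes x86-64 (where the syscall number lives in orig_rax), and it is simplified (a real tracer would also set PTRACE_O_TRACESYSGOOD to distinguish syscall stops from signal stops):

```c
/* Sketch: trace one child process and print the syscall number at each
 * syscall entry.  x86-64 only; simplified for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* let the parent trace us */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }

    int status;
    waitpid(child, &status, 0);                  /* child stops at exec */

    int entering = 1;                            /* toggles entry/exit stops */
    for (;;) {
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to next syscall stop */
        waitpid(child, &status, 0);
        if (WIFEXITED(status)) break;

        if (entering) {
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            printf("pid %d syscall %lld\n", (int)child,
                   (long long)regs.orig_rax);
        }
        entering = !entering;
    }
    return 0;
}
```

Run it as, for example, ./tracer ls; the trade-off against BPFTrace is that ptrace adds noticeable per-syscall overhead and only sees the processes you explicitly attach to.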

Is there a supported way to obtain LDT entries of debuggee?

A userspace process can call modify_ldt(2) to alter entries of its LDT. A debugger, to make correct analysis of what the process reads and where, as well as what code it executes currently, needs to know what descriptor a value of e.g. CS=0x7 selects.
Currently the only way I can think of that could possibly work is injecting some code into the debuggee to retrieve the LDT, executing it, and then restoring the original state. But this is quite error-prone and is likely to break the debugger user's workflow, e.g. when signals arrive.
So is there a better way? I've googled for something like PTRACE_LDT, but the pages are from 2005, and grepping the modern Linux source doesn't find anything relevant to x86 other than comments.
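For reference, here is a sketch of the in-process half of that idea: how a process can read its own LDT with modify_ldt(2), func 0 (read). Each entry comes back as a raw 8-byte x86 segment descriptor that still has to be decoded, and most processes will simply have an empty LDT. A debugger would have to get the debuggee to execute something equivalent; this only shows what that injected code would need to do:

```c
/* Sketch: dump the calling process's own LDT as raw 8-byte descriptors. */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    uint64_t ldt[32];                 /* room for the first 32 descriptors */
    memset(ldt, 0, sizeof(ldt));

    /* func == 0 reads the LDT into the buffer; returns bytes read. */
    long n = syscall(SYS_modify_ldt, 0, ldt, sizeof(ldt));
    if (n < 0) { perror("modify_ldt"); return 1; }

    for (long i = 0; i < n / 8; i++)
        printf("LDT[%ld] = %#018llx\n", i, (unsigned long long)ldt[i]);
    return 0;
}
```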

Shake: Signal whether anything had to be rebuilt at all

I use shake to build a bunch of static webpages, which I then have to upload to a remote host, using sftp. Currently, the cronjob runs
git pull # get possibly updated sources
./my-shake-system
lftp ... # upload
I’d like to avoid running the final command if shake did not actually rebuild anything. Is there a way to tell shake “Run command foo, after everything else, and only if you changed something!”?
Or alternatively, have shake report whether it did something in the process exit code?
I guess I could add a rule that depends on every possibly generated file, but that seems redundant and error-prone.
Currently there is no direct/simple way to determine if anything built. It's also not such a useful concept as it is for simpler build systems, since certain rules (especially those that define storedValue to return Nothing) will always "rerun", but then very quickly decide they don't need to run the rules that depend on them. To Shake, that is the same as rerunning. I can think of a few approaches; which one is best probably depends on your situation:
Tag the interesting rules
You could tag each interesting rule (one that produces something that needs uploading) with a function that writes to a specific file. If that specific file exists, then you need to upload. This might work slightly better, as if you do multiple Shake runs, and in the first something changes but the second nothing does, the file will still be present. If it makes sense, use an IORef instead of a file.
Use profiling
Shake has quite advanced profiling. If you pass shakeProfile=["output.json"] it will produce a JSON file detailing what built and when. Runs are indexed by an Int, with 0 for the most recent run, and any runs that built nothing are excluded. If you have one rule that always fires (e.g. write to a dummy file with alwaysRerun) then if anything fired at the same time, it rebuilt.
Watch the .shake.database file size
Shake has a database, stored under shakeFiles. On each uninteresting run it will grow by a fairly small amount (~100 bytes), but by a fixed amount for a given system. If it changes in size by a greater amount, then it did something interesting.
Of these approaches, tagging the interesting rules is probably the simplest and most direct (although it does run the risk of you forgetting to tag something).

How to detect that no one is writing to a file in Linux?

I am wondering, is there a simple way to tell whether another entity has a certain file open for writing? I don't have time to use inotify continuously to wait for any current writer to finish writing. I need to do an intermittent check.
Thanks.
What exactly are you doing where you "don't have time to use inotify continuously"? First, you should be using the IN_CLOSE_WRITE flag so that inotify makes just one notification when the file gets closed after being written. Using it continuously makes no sense. Second, if your timing is that critical, I'm thinking writing to a file isn't your ideal solution. Do you control the first writer? Do you have to worry about anything else writing to the file after the first writer closes it?
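As a concrete illustration of the IN_CLOSE_WRITE suggestion, here is a minimal sketch (the filename data.log is just a placeholder) that blocks until some writer closes the file, instead of polling:

```c
/* Sketch: wait for a single IN_CLOSE_WRITE event on a file. */
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    int fd = inotify_init();
    if (fd < 0 || inotify_add_watch(fd, "data.log", IN_CLOSE_WRITE) < 0) {
        perror("inotify");
        return 1;
    }

    char buf[4096];
    if (read(fd, buf, sizeof(buf)) > 0)   /* blocks until a writer closes the file */
        printf("a writer closed the file\n");
    return 0;
}
```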
lsof LiSts Open Files. fuser (File USER) works similarly, telling you which processes are using the file.
See: http://www.refining-linux.org/archives/23/16-Introduction-to-lsof-and-fuser/
Since you seem to want a library-style interface, rather than shelling out to system(), see ofl-lib.c. (It's really just the ofl program itself with everything but the main function removed.)
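If you want to do the check in-process without running lsof or fuser, the rough equivalent of what they do is to walk /proc. The sketch below is my own illustration (not ofl-lib.c): it resolves the target path, then looks at every /proc/<pid>/fd entry and reads the open flags from /proc/<pid>/fdinfo/<fd>. It needs enough privilege to inspect other users' processes, and the answer is inherently racy, since the file can be reopened the moment after you check:

```c
/* Sketch: report which processes currently have a file open for writing. */
#include <ctype.h>
#include <dirent.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Read the open flags from /proc/<pid>/fdinfo/<fd> and test the access mode. */
static int fd_is_writing(const char *pid, const char *fd)
{
    char path[PATH_MAX];
    snprintf(path, sizeof(path), "/proc/%s/fdinfo/%s", pid, fd);
    FILE *f = fopen(path, "r");
    if (!f) return 0;

    unsigned flags = 0;
    char line[256];
    while (fgets(line, sizeof(line), f))
        if (sscanf(line, "flags: %o", &flags) == 1) break;
    fclose(f);
    return (flags & O_ACCMODE) != O_RDONLY;   /* O_WRONLY or O_RDWR */
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 2; }

    char target[PATH_MAX];
    if (!realpath(argv[1], target)) { perror("realpath"); return 2; }

    DIR *proc = opendir("/proc");
    struct dirent *p;
    while (proc && (p = readdir(proc))) {
        if (!isdigit((unsigned char)p->d_name[0])) continue;   /* PIDs only */

        char fddir[PATH_MAX];
        snprintf(fddir, sizeof(fddir), "/proc/%s/fd", p->d_name);
        DIR *fds = opendir(fddir);
        struct dirent *e;
        while (fds && (e = readdir(fds))) {
            if (e->d_name[0] == '.') continue;
            char link[PATH_MAX], dest[PATH_MAX];
            snprintf(link, sizeof(link), "%s/%s", fddir, e->d_name);
            ssize_t n = readlink(link, dest, sizeof(dest) - 1);
            if (n < 0) continue;
            dest[n] = '\0';
            if (strcmp(dest, target) == 0 && fd_is_writing(p->d_name, e->d_name))
                printf("pid %s has %s open for writing\n", p->d_name, target);
        }
        if (fds) closedir(fds);
    }
    if (proc) closedir(proc);
    return 0;
}
```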
You can't do so easily in the general case, and even if you could, you cannot use the information in a non-racy manner (see caf's comment).
So I'd say, redesign your application so you do not need to know.
