SLURM / sbatch creates many small output files

I am running a pipeline on a SLURM cluster, and for some reason a lot of small files (between 500 and 2000 bytes in size) named along the lines of slurm-XXXXXX.out (where XXXXXX is a number) keep appearing in my working directory. I've tried to find out what these files are on the SLURM website, but I can't find any mention of them. I assume they are some sort of in-progress files that the system uses while parsing my pipeline?
If it matters, the pipeline I'm running uses snakemake. I know I've seen these types of files before, without snakemake, but they weren't a big problem back then. I'm afraid that clearing the working directory of these files after each step of the workflow will interrupt in-progress steps, so I'm not doing anything with them at the moment.
What are these files, and how can I suppress them or, alternatively, delete them after their corresponding job has finished? Did I mess up my workflow somehow, and is that why they are being created?

You might want to take a look at the sbatch documentation. The files that you are referring to are essentially SLURM logs as explained there:
By default both standard output and standard error are directed to a file of the name "slurm-%j.out", where the "%j" is replaced with the job allocation number.
You can change the filename with the --error=<filename_pattern> and --output=<filename_pattern> command-line options. The filename pattern can contain one or more replacement symbols, as explained in the documentation. According to the FAQ, you should be able to suppress standard output and standard error entirely with the following command-line options:
sbatch --output=/dev/null --error=/dev/null [...]
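If you would rather keep the logs but collect them under a logs/ directory with the job ID in the name, you can put the options in the batch script itself. This is only a sketch; note that the logs/ directory must already exist, since SLURM will not create it for you:
#!/bin/bash
#SBATCH --output=logs/slurm-%j.out   # %j is replaced with the job ID
#SBATCH --error=logs/slurm-%j.err
# ... rest of the batch script ...
If snakemake is submitting each rule as its own sbatch job, the same --output/--error options can be added to whatever submission command it is configured to use.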

Related

Running python3 multiprocessing job with slurm makes lots of core.###### files. What are they?

So I have a python3 job that is being run by slurm. The python job uses lots of multiprocessing, spawning 20 or so worker processes. The code is far from perfect, uses lots of memory, and occasionally hits some unexpected data and throws an error. That in itself is not a problem; I don't need every one of the 20 processes to complete.
The issue is that sometimes something causes the program to create files named like core.356729 (the number after the dot changes), and these files are massive: gigabytes of data. Eventually I end up with so many that I have no disk space left and all my jobs are stopped. I can't tell what they are; their contents are not human-readable. Google searches for "core files slurm" or "core.number files" don't turn up anything relevant.
The quick and dirty solution would be just to add a process that deletes these files as soon as they appear. But I'd rather understand why they are being created first.
Does anyone know what would create a file of the format "core.######"? Is there a name for this type of file? Is there any way to identify which slurm job created the file?
Those are core dump files, used for debugging. They're essentially the contents of memory for the process that crashed. You can disable their creation with ulimit -c 0.
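A minimal sketch of where that would go in the batch script (the job name and Python script name here are made up):
#!/bin/bash
#SBATCH --job-name=my_multiproc_job
ulimit -c 0              # disable core dumps for any process started by this job
python3 my_script.py     # hypothetical script name
The number after the dot is typically the PID of the process that crashed, which is why it changes from file to file.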

How can I configure SLURM at the user level (e.g. with something like a ".slurmrc")?

Is there something like a .slurmrc for SLURM that would allow each user to set their own defaults for parameters that they would normally specify on the command line?
For example, I run 95% of my jobs on what I'll call our HighMem partition. Since my routine jobs can easily go over the default of 1GB, I almost always request 10GB of RAM. To make the best use of my time, I would like to put the partition and RAM requests in a configuration file so that I don't have to type them in all the time. So, instead of typing the following:
sbatch --partition=HighMem --mem=10G script.sh
I could just type this:
sbatch script.sh
I tried searching for multiple variations on "SLURM user-level configuration" and it seemed that all SLURM-related hits dealt with slurm.conf (a global-level configuration file).
I even tried creating slurm.conf and .slurmrc in my home directory, just in case that worked, but they didn't have any effect on the partition used.
update 1
Yes, I thought about scontrol, but the only configuration file it deals with is global and most parameters in it aren't even relevant for a normal user.
update 2
My supervisor pointed out the SLURM Perl API to me. The last time I looked at it, it seemed too complicated, but this time, after looking at the code in https://github.com/SchedMD/slurm/blob/master/contribs/perlapi/libslurm/perl/t/06-complete.t, it seems it wouldn't be too hard to create a script that behaves similarly to sbatch and reads the desired parameters from a default configuration file. However, I haven't had any success in setting 'std_out' to a file name that actually gets written to.
If your example is representative, defining an alias
alias sbatch='sbatch --partition=HighMem --mem=10G'
could be the easiest way. Alternatively, a Bash function could also be used
sbatch() {
command sbatch --partition=HighMem --mem=10G "$@"
}
Put either of these in your .bash_profile to make it persistent.
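If you want something closer to a per-user configuration file, a slightly longer wrapper function can read defaults from a file of your choosing. This is only a sketch, and ~/.sbatch_defaults is a made-up name that SLURM itself knows nothing about:
sbatch() {
    local defaults=()
    if [ -f "$HOME/.sbatch_defaults" ]; then
        # hypothetical defaults file: one option per line,
        # e.g. --partition=HighMem and --mem=10G; skip blanks and comments
        mapfile -t defaults < <(grep -vE '^[[:space:]]*(#|$)' "$HOME/.sbatch_defaults")
    fi
    command sbatch "${defaults[@]}" "$@"
}
Because the defaults are passed before your own arguments, anything you specify explicitly on the command line should normally take precedence.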

Checking output from qsub: Sungrid cluster

Is there a way to check the output of a submitted job while it is running? The output files are written with quite a big delay, so I want to be able to see whether anything is going wrong.
I saw that for PBS there is the -k oe option to write the qsub output directly to a file in the home directory, but I could not find a similar solution for my case.
Based on what I can find about Torque, it seems this is not doable without configuring the server:
Check real time output after qsub a job on cluster
PBS, refresh stdout
You may want to consider writing output to a separate file from within the program and flushing stdout accordingly.
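A rough sketch of that idea inside the job script, assuming GNU coreutils' stdbuf is available and my_program stands in for your executable:
# line-buffer the program's output and mirror it to a file in $HOME
# (Grid Engine sets $JOB_ID for each job)
stdbuf -oL ./my_program 2>&1 | tee "$HOME/job_${JOB_ID}.live.log"
You can then tail -f that file from a login node while the job is still running.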

How to ask perforce for a list of changes with both description and files

I want to get a list of changes for a perforce branch like:
p4 changes -t -L //mydepot/library1/v1.0/...@2017/03/27,@now
That is, a list of all changes this week with description. But I also want a list of the files, as in
files in one changelist:
p4 files @=123456
This seems like it needs a script, but does anyone know of a Perforce method?
If the returned changeset collection is large, will the server be adversely impacted by querying every changeset afterwards?
p4 -Ztag -F "describe -s %change%" changes //mydepot/library1/v1.0/...@2017/03/27,@now | p4 -x - run
The answer to the performance question depends on how many is "large" (how many changes/files are we talking about) and your server hardware.
My guess is that with a "normal" server and "normal" usage you'll be fine but if we're talking about a few billion changes with a few billion files each, yeah, those commands will take a while. If we're more in the hundreds or thousands range, meh.
You asked a bunch of questions in one question, so here's a bunch of answers in one answer.
Use the p4 changes command to get a list of changes of interest.
Use p4 describe -s or p4 files @= to get a list of the files in each change.
Yes, to combine these two sets of data you need a script, but it can be a remarkably short script. You can use the p4 command-line aliases feature to write such a script, or you can use your favorite scripting language. There are lots of examples of such scripts on the web.
The server will end up having to perform multiple commands, but these particular commands are ones the server handles very efficiently. I suspect the problem won't be so much the server computing the output as you reading all of that output. In other words, don't run a script that produces more output than you want to read.
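If you prefer an explicit script over the one-liner above, a minimal shell sketch (reusing the path and revision range from the question) could look like this:
# p4 changes prints lines like "Change 123456 on 2017/03/27 by user@client '...'",
# so the second field is the change number; describe -s prints description plus file list
for change in $(p4 changes //mydepot/library1/v1.0/...@2017/03/27,@now | awk '{print $2}'); do
    p4 describe -s "$change"
done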

Daemon for file watching / reporting in the whole UNIX OS

I have to write a Unix/Linux daemon that watches for a particular set of files (e.g. *.log) in any directory, across various locations, and reports them to me. Then I have to read all the newly modified files, process them, and push the grepped data into Elasticsearch.
Any suggestion on how this can be achieved?
I tried various Perl modules (e.g. File::ChangeNotify, File::Monitor), but with these I need to specify the directories up front, which I don't want: I need the list of files to be generated dynamically, and I also need their content.
Is there any way to hook into the OS system calls for file creation so I can then read the newly generated or modified files?
Unfortunately this is not as easy as it sounds. You have hooks into inotify (on some platforms) that let you trigger an event when a particular inode changes.
But for wider-scope monitoring you're really talking about audit and accounting tracking, and that isn't a small topic: not a lot of people do auditing, and there's a reason for that. It's complicated and very platform-specific (even different versions of Linux do it differently). Your favourite search engine should be able to help you find answers relevant to your platform.
It may be simpler to run a scheduled task from cron, using File::Find or similar to run a search occasionally, but not too frequently, because trawling the whole filesystem like that is expensive.
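A rough sketch of that cron-based approach, assuming GNU find and a hypothetical processing script push_to_es.sh that does the grepping and the Elasticsearch upload:
#!/bin/bash
# scan_logs.sh (hypothetical name) -- run it from cron, e.g. every 15 minutes:
#   */15 * * * * /usr/local/bin/scan_logs.sh
# Find *.log files modified in the last 15 minutes and hand each one to the processor
# (-xdev keeps the scan on a single filesystem).
find / -xdev -type f -name '*.log' -mmin -15 2>/dev/null | while read -r f; do
    /usr/local/bin/push_to_es.sh "$f"   # hypothetical script: greps the file, pushes to Elasticsearch
done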
