Is there a way to debug cscope? - linux

I'm trying to build a cscope code database (running cscope -Rb) on a reasonably-sized (not huge) codebase.
However, cscope is not returning. htop reports that cscope is using 100% of one CPU.
Over the past few months that I've been using it (on previous iterations of the same codebase), cscope has always finished building the database in a minute at most.
I assume there's a bad code file somewhere that is causing the problem, but how do I determine which one it is if cscope just puts itself into what seems to be an infinite loop?
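One way to narrow it down, if a single bad file is the suspect: cscope can read its file list from a name file with -i, so you can bisect that list under a timeout until only the offending file is left. A rough sketch in Python (the script name, timeout value, and helper functions are hypothetical; only cscope's -b and -i options are assumed from its documented interface):

    #!/usr/bin/env python3
    # Hypothetical helper: bisect the source-file list to find the file
    # that makes "cscope -b" hang.  Uses cscope's -b (build only) and
    # -i <namefile> (read the file list from a file) options.
    import os
    import subprocess
    import sys
    import tempfile

    TIMEOUT = 120  # seconds; anything slower than this is treated as "hung"

    def builds_ok(files):
        """Run "cscope -b" on the given file list; False means it timed out."""
        with tempfile.NamedTemporaryFile("w", suffix=".files", delete=False) as f:
            f.write("\n".join(files) + "\n")
            namefile = f.name
        try:
            subprocess.run(["cscope", "-b", "-i", namefile], timeout=TIMEOUT)
            return True
        except subprocess.TimeoutExpired:
            return False
        finally:
            os.unlink(namefile)

    def find_bad_file(files):
        """Bisect the list until a single offending file remains."""
        while len(files) > 1:
            half = len(files) // 2
            first, second = files[:half], files[half:]
            files = first if not builds_ok(first) else second
        return files[0]

    if __name__ == "__main__":
        all_files = [line.strip() for line in open(sys.argv[1]) if line.strip()]
        print("suspect file:", find_bad_file(all_files))

Feed it the same list you would normally give cscope (e.g. a generated cscope.files). Note that if the hang only happens with a particular combination of files rather than one bad file, a plain bisection like this won't find it.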

Related

Timestamps in the VS Code Terminal

For performance benchmarking, I'd like to see Node execution times at the line level in my terminal output (as opposed to just the total execution time). It would be helpful to know whether my code changes are actually making things faster.
Using VS Code. Any recommended extensions?

Running python3 multiprocessing job with slurm makes lots of core.###### files. What are they?

So I have a python3 job that is being run by slurm. The python job uses lots of multiprocessing, spawning about 20 or so worker processes. The code is far from perfect, uses lots of memory, and occasionally hits some unexpected data and throws an error. That in itself is not a problem; I don't need every one of the 20 processes to complete.
The issue is that sometimes something causes the program to create files named like core.356729 (the number after the dot changes), and these files are massive, gigabytes of data each. Eventually I end up with so many that I have no disk space left and all my jobs are stopped. I can't tell what they are; their contents are not human readable. Google searches for "core files slurm" or "core.number files" are not giving anything relevant.
The quick and dirty solution would be just to add a process that deletes these files as soon as they appear. But I'd rather understand why they are being created first.
Does anyone know what would create a file of the format "core.######"? Is there a name for this type of file? Is there any way to identify which slurm job created the file?
Those are core dump files, used for debugging. They're essentially the contents of memory for the process that crashed, and the number after the dot is typically the PID of that process. You can disable their creation with ulimit -c 0.
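If you'd also like the limit set from inside the Python job itself, so the multiprocessing workers inherit it regardless of how the batch script is set up, the standard-library resource module does the same thing. A minimal sketch, to be called before the workers are spawned:

    import resource

    # Disable core dumps for this process; child processes created by
    # multiprocessing inherit the limit from the parent.
    resource.setrlimit(resource.RLIMIT_CORE, (0, 0))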

Daemon for file watching / reporting in the whole UNIX OS

I have to write a Unix/Linux daemon which should watch for a particular set of files (e.g. *.log) in any directory, across various locations, and report them to me. Then I have to read all the newly modified files, process them, and push the grepped data into Elasticsearch.
Any suggestion on how this can be achieved?
I tried various Perl modules (e.g. File::ChangeNotify, File::Monitor), but for these I need to specify the directories up front, which I don't want: I need the list of files to be generated dynamically, and I also need their content.
Is there a way to hook into the OS system calls for file creation, so I can then read the newly created/modified files?
Not as easy as it sounds, unfortunately. You have hooks into inotify (on some platforms) that let you trigger an event when a particular inode changes.
But for wider-scope change detection, you're really talking about audit and accounting tracking - this isn't a small topic though - not a lot of people do auditing, and there's a reason for that. It's complicated and very platform-specific (even different versions of Linux do it differently). Your favourite search engine should be able to help you find answers relevant to your platform.
It may be simpler to run a scheduled task in cron - but not too frequently, because repeatedly scanning the filesystem like that is expensive - along with File::Find or similar to just run a search occasionally.
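If Perl isn't a hard requirement, the same "scan occasionally" idea can be sketched in Python with just the standard library. A rough sketch; the roots, pattern, and interval below are assumptions you'd adjust, and the print call is where your grep-and-push-to-Elasticsearch step would go:

    #!/usr/bin/env python3
    # Rough sketch of the "scan occasionally" approach: walk a set of root
    # directories, remember each matching file's mtime, and report files
    # that are new or have been modified since the last pass.
    import fnmatch
    import os
    import time

    ROOTS = ["/var/log", "/srv"]   # assumption: the locations you care about
    PATTERN = "*.log"
    INTERVAL = 300                 # seconds between scans; keep this generous

    def scan(roots, pattern):
        """Return {path: mtime} for every file matching pattern under roots."""
        seen = {}
        for root in roots:
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in fnmatch.filter(filenames, pattern):
                    path = os.path.join(dirpath, name)
                    try:
                        seen[path] = os.stat(path).st_mtime
                    except OSError:
                        pass  # file vanished between listing and stat
        return seen

    def main():
        previous = scan(ROOTS, PATTERN)
        while True:
            time.sleep(INTERVAL)
            current = scan(ROOTS, PATTERN)
            for path, mtime in current.items():
                if path not in previous or mtime > previous[path]:
                    # new or modified file: read it, grep it, push to Elasticsearch
                    print("changed:", path)
            previous = current

    if __name__ == "__main__":
        main()

Here it loops with a sleep rather than relying on cron; a single-pass version run from cron, with the previous mtime map persisted to disk between runs, would work just as well.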

How to get an ETA?

I am building several large sets of source files (targets) using scons. Now, I would like to know if there is a metric I can use to show me:
How many targets remain to be built.
How long it will take -- though to be honest, this is probably a no-go, as it is really hard to tell!
How can I do that in scons?
There is currently no progress indicator built into SCons, and it's also not trivial to provide one. The problem is that SCons doesn't build the complete DAG first and then start the build...which would give you a total number of targets to visit that you could use as a reference (=100%).
Instead, it makes up the DAG as it goes... It looks at each target, and then expands the list of its children (sources and implicit dependencies like headers) to check whether they are up to date. If a child has changed, it gets rebuilt by applying the same "build step" recursively.
In this way, SCons crawls from the list of targets given on the command line (with the "." dir being the default) down the DAG...and only the parts that are required for (or, in other words, have a dependency on) the requested targets ever get visited.
This makes it possible for SCons to handle things like "header files generated by a program that must be compiled first" in a single pass...but it also means that the total number of targets/children to visit changes constantly.
So a standard progress indicator would continuously climb towards 80%-90%, only to then fall back to 50%...and I don't think this would give you the information you're really after.
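That said, depending on your SCons version there may be a Progress() hook that can at least show how much work has been done so far - a running count of evaluated nodes rather than a percentage, for exactly the reasons above. A minimal SConstruct sketch (the callback interval and the placeholder target are assumptions):

    # Minimal SConstruct sketch: prints a running count of evaluated nodes.
    # This is a counter, not a percentage -- the total isn't known up front,
    # as explained above.
    import sys

    node_count = 0

    def show_progress(node):
        global node_count
        node_count += 1
        sys.stderr.write("evaluated %d nodes...\r" % node_count)

    Progress(show_progress, interval=10)  # callback every 10 evaluated nodes

    env = Environment()
    env.Program("hello", ["hello.c"])  # placeholder target for the sketch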
Tip: If your builds are large and you don't want to wait, do incremental builds and only build the library/program you're currently working on ("scons lib1"). This will still take all dependencies into account, but only a fraction of the DAG has to get expanded, so you use less memory and get faster update times...especially if you use the "interactive" mode. In a project with 100000 C files total, the update of a single library with 500 C files takes about 1s on my machine. For more info on this topic, check out http://scons.org/wiki/WhySconsIsNotSlow .

Faster multi-file keyword completion in Vim?

While searching for my python completion nirvana in vim, I have come to really love <C-x> <C-i>: "keywords in the current and included files". This almost always gets me a long nasty name from another module completed, which is great.
(Omni-completion is obviously better when it works, but too often it reports it can't find any matches. Ok, Python isn't Java, I get it)
The only problem with this multi-file completion is it's very slow: on my netbook, a file with a reasonable set of imports can take up to 4 or 5 seconds to parse every time I hit <C-x> <C-i>. It seems to load every imported file every time I hit <C-x> <C-i>. Is there any way to cache the files or speed up this process? Would using tag completion be faster?
It is quite possible that this process takes some time if you're working on projects with multiple source files (vim needs to parse all included source files to find more included source files and to build the word list.) You could use tag-completion, which uses the output of ctags to do almost the same, but you'd need to run a few tests to tell the speed difference.
I personally use plain keyword completion (<C-P> or <C-N> in insert mode). By default it matches all words in all buffers (even buffers that have been unloaded, i.e. files that have been closed), but it is really fast. I found that the completion works quite accurately nonetheless, even if you trigger it after only 2-3 characters.
