Cscope unable to create inverted index. Why? - vim

The following command works fine:
$>cscope -b -R
However, the option for inverted index does not work:
$>cscope -b -q -k -R
Input file specified two times.
cscope: cannot create inverted index; ignoring -q option
cscope: removed files ncscope.in.out and ncscope.po.out
I googled this one and found some hits, but could not find any answers or solutions. Any insights are greatly appreciated.

I got it now!!!
As usual, should have read the manual properly :-)
I am using a win32 port of cscope from Google. (hosted at: http://code.google.com/p/cscope-win32/). Here is an excerpt from the 'wiki' tab (http://code.google.com/p/cscope-win32/wiki/UsageNotes?tm=6)
• To use inverted indices (the -q option) you need a sort utility. I am including one with the cscope archive (here is its source code). The utility can also be found in UnxUtils and at http://gnuwin32.sf.net. It should be in your PATH before the Windows dir, because Windows has its own, incompatible sort utility.
NOTE: I actually needed to put the sort utility's directory even before c:\windows\system32 in the PATH. (It was not good enough to put it just before c:\windows.)
Having done that, I am happy to say that I was able to create the inverted index.
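For example, if you launch cscope from a bash-style shell such as Git Bash, the fix can be sketched like this (the sort directory below is only a placeholder for wherever the GNU sort utility was unpacked):
export PATH="/c/tools/gnuwin32/bin:$PATH"   # GNU sort must now come before C:\Windows\System32
cscope -b -q -k -R
You can confirm the right sort is picked up first by running which sort (or where sort in a cmd session) before invoking cscope.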

Related

How to replace double spaces with one space in filenames (also subdirectories) (CloudLinux Server release 6.10)

I want to replace double spaces with one space in the filenames of a lot of photos. These photos are located in the directory /foto and its subfolders. How to do this? For example, "photo  1.jpg" (with two spaces) needs to become "photo 1.jpg" (with one).
The best way is to use commandline, because it's on CloudLinux server. (and it is over 50GB of photos). I searched here on Stackoverflow, also Google to find the command I need. I guess rename is the one to use, or mv.
The only things I found were commands about replacing space and replacing other symbols, but not about double (multiple) spaces.
find -iname \*.* | rename -v "s/\s{2}/ /g"
This is the final command that helped me out. I used Perl's rename; see the answer by Gilles below.
Use this, using Perl's rename :
rename -n 's/\s{2}/ /g' files*
Remove the -n (dry-run) switch when the output looks good.
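Since the photos live in /foto and its subfolders, a recursive variant can be sketched with GNU find (still assuming Perl's rename; this squeezes runs of two or more spaces and assumes the directory names themselves don't contain double spaces):
find /foto -type f -name '*  *' -exec rename -n 's/ {2,}/ /g' {} +
Again, drop -n once the preview looks right.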
There are other tools with the same name which may or may not be able to do this, so be careful.
If you run the following command (GNU)
$ file "$(readlink -f "$(type -p rename)")"
and the result contains "Perl script, ASCII text executable" and does not contain "ELF", then this seems to be the right tool =)
If not, to make it the default (usually already the case) on Debian and derivatives like Ubuntu:
$ sudo update-alternatives --set rename /path/to/rename
Replace /path/to/rename with the path to your Perl rename executable.
If you don't have this command, install it through your package manager or manually (it has no dependencies).
This tool was originally written by Larry Wall, Perl's dad.

How to list recently deleted files from a directory?

I'm not even sure if this is easily possible, but I would like to list the files that were recently deleted from a directory, recursively if possible.
I'm looking for a solution that does not require the creation of a temporary file containing a snapshot of the original directory structure against which to compare, because write access might not always be available. Edit: If it's possible to achieve the same result by storing the snapshot in a shell variable instead of a file, that would solve my problem.
Something like:
find /some/directory -type f -mmin -10 -deletedFilesOnly
Edit: OS: I'm using Ubuntu 14.04 LTS, but the command(s) would most likely be running in a variety of Linux boxes or Docker containers, most or all of which should be using ext4, and to which I would most likely not have access to make modifications.
You can use the debugfs utility.
debugfs is an interactive file system debugger for ext2/ext3/ext4 file systems; it can be used to examine (and change) the state of a file system.
First, run debugfs /dev/hda13 in your terminal (replacing /dev/hda13 with your own disk/partition).
(NOTE: You can find the partition that holds your directory by running df /some/directory in the terminal).
Once in debug mode, you can use the command lsdel to list inodes corresponding with deleted files.
When files are removed in Linux they are only unlinked, but their inodes (addresses on the disk where the file's data is actually present) are not removed.
To get the path of a deleted file you can use debugfs -R "ncheck 320236" /dev/hda13, replacing the inode number and partition with your own.
Inode Pathname
320236 /path/to/file
From here you can also inspect the contents of deleted files with cat. (NOTE: You can also recover from here if necessary).
So a few things:
1) You may have zero success if your partition is ext2; it works best with ext4.
2) Run df / to find your partition.
3) Open the partition from step 2 with debugfs; in my case:
sudo debugfs /dev/mapper/q4os--desktop--vg-root
4) Run lsdel to list the deleted inodes.
5) Type q to exit out of debugfs.
6) Run sudo debugfs -R 'ncheck 528754' /dev/sda2 2>/dev/null, replacing the partition and the inode number with your own from steps 2 and 4.
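For reference, the same inspection can also be scripted non-interactively with -R; the partition and inode number below are just the placeholders from the steps above:
sudo debugfs -R 'lsdel' /dev/sda2
sudo debugfs -R 'ncheck 528754' /dev/sda2 2>/dev/null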
Thanks for your comments & answers, guys. debugfs seems like an interesting solution to the initial requirements, but it is a bit overkill for the simple and light solution I was looking for: it has to be run as root against the underlying ext partition, and I would most likely not have that kind of access on the target machines. I must be able to provide a solution for existing, "basic" kernels and directories.
As this seems virtually impossible to accomplish otherwise, I've been able to negotiate and relax the requirements down to listing the number of files that were recently deleted from a directory, recursively if possible.
This is the solution I ended up implementing:
1) A simple find command piped into wc counts the original number of files in the target directory (recursively). The result can then easily be stored in a shell or script variable, without requiring write access to the file system.
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
2) We can then run the same command again later to get the updated number of files.
DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
3) Then we can store the difference between the two in another variable and update the original amount.
DEL_SCAN_DEL_AMOUNT=$(($DEL_SCAN_ORIG_AMOUNT - $DEL_SCAN_NEW_AMOUNT));
DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
4) We can then print a simple message if the number of files went down.
if [ $DEL_SCAN_DEL_AMOUNT -gt 0 ]; then echo "$DEL_SCAN_DEL_AMOUNT deleted files"; fi;
5) Return to step 2.
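Put together, the whole thing can be sketched as a small bash loop (the directory and the 60-second interval are placeholders):
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
while true; do
    sleep 60
    DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
    DEL_SCAN_DEL_AMOUNT=$((DEL_SCAN_ORIG_AMOUNT - DEL_SCAN_NEW_AMOUNT))
    # report only when the count went down
    if [ "$DEL_SCAN_DEL_AMOUNT" -gt 0 ]; then
        echo "$DEL_SCAN_DEL_AMOUNT deleted files"
    fi
    DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
done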
Unfortunately, this solution won't report anything if the same number of files have been created and deleted during an interval, but that's not a huge issue for my use case.
To circumvent this, I'd have to store the actual list of files instead of the amount, but I haven't been able to make that work using shell variables. If anyone could figure that out, it would help me immensely, as it would meet the initial requirements!
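For what it's worth, a rough sketch of that list-based variant using only shell variables (assuming bash and the comm utility; it will misbehave on file names containing newlines):
DEL_SCAN_ORIG_LIST=$(find /some/directory -type f | sort)
# ...later, after some files may have been deleted...
DEL_SCAN_NEW_LIST=$(find /some/directory -type f | sort)
# lines present only in the old snapshot are the deleted files
comm -23 <(printf '%s\n' "$DEL_SCAN_ORIG_LIST") <(printf '%s\n' "$DEL_SCAN_NEW_LIST")
DEL_SCAN_ORIG_LIST=$DEL_SCAN_NEW_LIST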
I'd also like to know if anyone has comments on either of the two approaches.
Try:
lsof -nP | grep -i deleted
(this lists files that have been deleted but are still held open by a process)
history >> history.txt
Look for all rm statements.

Gitbash version does not allow grep -o, is it possible to install new grep package?

I am trying to do a directory-wide search for specific strings in JSON files. The only problem is that these JSON files are only one line each, so when I cat all of them, every string occurs a magical "1" time, since there's only one line even when I string them all together.
An easy solution, which I see a lot (here and here), is grep -o. Only problem is it doesn't come standard on my Gitbash. I solved my immediate problem by just installing the latest Cygwin. However, I'm wondering if there was an easier/more granular solution. Is it possible to do the equivalent of "apt-get install" or similar on Gitbash? Or can someone explain to me a quick-and-dirty way to extract and install the tar file in Gitbash?
The other approach is to:
use a cmd session (using the git-cmd.bat which is packaged with Git for Windows)
use the grep included in Gnu for Windows, which supports the -o option (and actually allows you to use most of the other Unix commands that your script might currently be using)
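For reference, once a grep that supports -o is available, counting every occurrence in the single-line JSON files (rather than the number of matching lines) might look like the sketch below; the search string and the *.json glob are placeholders:
grep -o '"someKey"' *.json | wc -l
Each match is printed on its own line by -o, so wc -l gives the total number of occurrences across all files.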

How to do partial search in Linux with locate?

I prefer to search with the locate command, but I don't know how to perform a partial search with it.
Suppose I want to search for a file containing the word libevent. How can I do that?
Locate searches for file names, not file contents.
The ugly way is to use grep; it will start searching from the / directory:
grep -irn 'libevent' /
The better way is to narrow down the suspected directories where these files could exist. Suppose those directories' full paths are /path/to/dir1, /path/to/dir2, etc. Then invoke the following command:
for dir in /path/to/dir1 /path/to/dir2
do
grep -irn 'libevent' "$dir"
done
The locate command does not search inside the content of files the way grep (and other commands) do. It simply searches file paths.
locate works from a cached index of file paths, and this index is periodically updated by the updatedb utility.
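For example, since locate matches any substring of the full path, a partial search for anything whose path contains libevent is simply the sketch below (run sudo updatedb first if the files are newer than the index):
sudo updatedb
locate libevent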
Addendum:
A useful way to search for some pattern inside (the content of) files is to use the ability of zsh, or recent versions of bash with shopt -s globstar, to expand the ** file pattern, e.g.
grep foo ~/gee/**/*.[ch]
With zsh this searches, inside all files named *.c or *.h under $HOME/gee/, for foo. I find this feature tremendously useful, alone justifying the adoption of zsh as my interactive shell. With other shells you might type the much longer
find $HOME/gee -name '*.[ch]' | xargs grep foo
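If you use bash (4.0 or later) rather than zsh, the same recursive glob works once globstar is enabled, e.g.:
shopt -s globstar
grep foo ~/gee/**/*.[ch]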

Linux shell: Is it possible to speed up finding files using "find" by using a predefined list of files/folders?

I primarily program in Linux, using the tcsh shell. By default, my current directory is the root of my code base; I use "find" to locate whichever file I'm interested in modifying, and once find shows the location of the file, I can edit/modify it in Vim.
The problem is that, due to the size of the code base, every time I ask find for the location of a file it takes at least 4-5 seconds to complete the search, which is too short to be used for anything else!! So, since the rate at which new files are added to the code base is very small, I'm looking for a way to do the following:
1) Generate the list of all files in my code base
2) Have find look in only those locations/files to answer my query
I've seen how opening files in cscope is lightning fast, as it stores the list of files beforehand. I'd like to use the same mechanism for find, just not from within the cscope window, but from the generic command line.
Any ideas?
Install the locate, mlocate, or slocate package from your distribution, and either wait for cron to run the update task :) or run the updatedb command manually (it is normally invoked via /etc/cron.daily/mlocate or a similar file).
$ time locate kernel.txt
/home/sarnold/Local/linux-2.6/Documentation/sysctl/kernel.txt
/home/sarnold/Local/linux-2.6-config-all/Documentation/sysctl/kernel.txt
/home/sarnold/Local/linux-apparmor/Documentation/sysctl/kernel.txt
/usr/share/doc/libfuse2/kernel.txt.gz
real 0m0.595s
Yes. See slocate (or updatedb & locate).
The -U flag is particularly interesting because you can index just the directory that contains your code (and thus, updating or creating the database will be quick).
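As an illustration, assuming mlocate's updatedb, you could maintain a private index of just the code base and query only that (paths below are placeholders):
updatedb -l 0 -o ~/.codebase.db -U ~/src/codebase
locate -d ~/.codebase.db foo.c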
You could write a list of directories to a file and use them in your find command:
$ find /path/to/src -type d > dirs
$ find $(cat dirs) -type f -name "foo"
Alternatively, write a list of files to a file and use grep on it. The list of files is more likely to change than the list of dirs though.
$ find /path/to/src -type f > files
$ vi $(grep foo files)
Using find in conjunction with xargs (instead of -exec) can differ significantly in execution time:
http://forrestrunning.wordpress.com/2011/08/01/find-exec-xargs/
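Roughly, the difference is one grep process per file versus a few grep processes handling many files each; a sketch of both forms (modern find's -exec ... + batches much like xargs):
find /path/to/src -type f -name '*.c' -exec grep -l foo {} \;
find /path/to/src -type f -name '*.c' -print0 | xargs -0 grep -l foo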
