I'm using glimpse and I want to exclude searching through some files. I'm using a shared version of glimpse so I can't place a ".glimpse_exclude" file in that directory. I tried putting this file in my own local directory but that didn't work (maybe the answer to my question is more about where I can place this file so that glimpse will find it and use my local version?).
I see that there's a "glimpse -W "a;~b"" option which can exclude an expression (b, in this case), but I want to exclude a directory, something like:
glimpse -F "~exclude/this/directory/" mysearchwords
The best I have is to pipe this through grep and use grep's exclude functionality:
glimpse mysearchwords | grep -v "exclude/this/directory"
My main issue with this is that it loses glimpse's color coding, so the results are a bit harder to look through.
In sum: what's the best way to exclude files for glimpse, without using the .glimpse_exclude file, and/or where can I place that file locally so it will be used when I run searches but will not affect the global glimpse command shared across my network?
In glimpse 3.5, as per http://ftp.icm.edu.pl/packages/glimpse/CHANGES ,
3.0 --> 3.5
- added "-f filename" option to glimpse: it allows you to restrict the
search to only those files whose names appear in "filename".
- fixed the agrep bug where -n was not working with ISO characters.
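Given that "-f filename" option, one way to exclude a directory without touching the shared .glimpse_exclude is to build the file list yourself and hand it to glimpse. A sketch; the directory and file names below are just for illustration:

```shell
# Build a file list that skips the unwanted directory, then pass it to
# glimpse -f. The demo tree and filelist.txt are illustrative names.
mkdir -p demo/keep demo/exclude
echo 'mysearchwords' > demo/keep/a.txt
echo 'mysearchwords' > demo/exclude/b.txt
find demo -type f -not -path 'demo/exclude/*' > filelist.txt
cat filelist.txt
# Then restrict the search to that list:
#   glimpse -f filelist.txt mysearchwords
```

Because the file list is built per invocation, this keeps the shared glimpse installation untouched and preserves glimpse's own output (including color coding), unlike the grep -v pipe.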
Related
I am looking for some kind of logic on Linux for handling files with the same name in a directory or file system.
For example, if I create a file abc.txt, then the next time any process creates abc.txt it should automatically check and create the file as abc.txt.1 instead; the next time, abc.txt.2, and so on...
Is there a way to achieve this? Any logic or third-party tools are also welcome.
You ask,
For example, if I create a file abc.txt, then the next time any process
creates abc.txt it should automatically check and create the file
as abc.txt.1 instead
(emphasis added). To obtain such an effect automatically, for every process, without explicit provision by processes, it would have to be implemented as a feature of the filesystem containing the files. Such filesystems are called versioning filesystems, though typically the details are slightly different from what you describe. Most importantly, however, although such filesystems exist for Linux, none of them are mainstream. To the best of my knowledge, none of the major Linux distributions even offers one as a distribution-supported option.
Although it's a bit dated, see also Linux file versioning?
You might be able to approximate that for many programs via a customized version of the C standard library, but that's not foolproof, and you should not expect it to have universal effect.
It would be an altogether different matter for an individual process to be coded for such behavior. It would need to check for existing files and choose an appropriate name when opening each new file. In doing so, some care needs to be taken to avoid related race conditions, but it can be done. Details would depend on the language in which you are writing.
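For that per-process approach, here is a minimal shell sketch (the function and file names are my own, not from the question). It relies on set -C (noclobber), which makes creating an existing file fail instead of silently truncating it, so the existence check and the creation happen in one step and the check-then-create race is avoided within the shell:

```shell
# Print (and create) the first free name in the series NAME, NAME.1, NAME.2, ...
# The subshell sets noclobber so ':> "$target"' fails if the file exists.
next_free_name() {
  name=$1
  n=0
  target=$name
  while ! (set -C; : > "$target") 2>/dev/null; do
    n=$((n + 1))
    target=$name.$n
  done
  printf '%s\n' "$target"
}

next_free_name abc.txt   # first call creates and prints abc.txt
next_free_name abc.txt   # second call creates and prints abc.txt.1
```

Note that this only protects cooperating processes that all use this function; it does not impose the behavior on arbitrary programs, which is exactly the limitation described above.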
You can use Bash brace expansion to achieve this. For example, if I wanted to make 10 files, all with the same name but each with a unique number, I would do the following:
# touch my_file{01..10}.txt
This would create 10 files, numbered from 01 all the way to 10. This method is also handy for looping over files in a sequence, or if you're also creating directories.
Now, if I'm reading your question right, you're asking that when you move or create a file in a directory, a script should automatically create a new file for you? If that's the case, just use a test: if the file is already there, move it aside and mark it. Personally, I use timestamps to do so.
Logic:
# The [ -f ] test checks whether the file is present
if [ -f "$MY_FILE_NAME" ]; then
    # If the file is present, move it aside and append the PID ($$)
    # so the name will always be unique
    mv "$MY_FILE_NAME" "${MY_FILE_NAME}_$$"
    mv "$MY_NEW_FILE" .
else
    # Move or make the file here
    mv "$MY_NEW_FILE" .
fi
As you can see the logic is very simple. Hope this helps.
Cheers
I don't know about your particular use case, but you may want to look at logrotate:
https://wiki.archlinux.org/index.php/Logrotate
I've tried to use cg_annotate to include a dictionary by using the --include flag. However, no matter what I type after --include=, it always shows the usage message (indicating that my path is wrong).
For example, I typed ".util" after --include=, but it still shows the usage message:
The official manual says:
-I --include= [default: none] Adds a directory to the list in which to search for files. Multiple -I/--include options can be given to add multiple directories.
There is no 'dictionary' of directories stored anywhere; you always have to give the list of directories each time you launch cg_annotate.
So, in your case, the mandatory argument cachegrind-out-file is not provided in your command. This causes cg_annotate to stop and show its usage.
You might instead use kcachegrind (with --tool=callgrind), as kcachegrind has some support for specifying source directories (if that is even needed; normally kcachegrind+callgrind will find the source files automatically).
To add some directories in kcachegrind, you can use the menu entry Settings->Configure Kcachegrind and add directories in the Annotations tab.
I have the Cygwin packages library installed on my system (Win7 x64) at C:\Cygwin64\.
That directory contains over 185,000 files, and its size passed 5 GB this week, and that's without counting the packages source directory.
Now I want to decrease that size, and of course I'm going to uninstall some packages that I don't need anymore. But first I want to ask whether I can delete a specific directory, the one located at: C:\cygwin64\usr\share
(Please forgive my ignorance if my question is silly.)
While I was trying to figure out the cause of that large file count, I noticed that this directory specifically holds more than 90,000 files!
I don't know what that directory is used for, but would someone please tell me whether I can delete that folder safely, without affecting the installed packages? - Thanks :)
I cannot speak for the entirety of the folder, but awk uses that folder for include files, which I would miss:
- delete a column with awk or sed
- awk - how to delete first column with field separator
- how to remove the first two columns in a file using shell (awk, sed, whatever)
So I got Perforce up and running and made my first cl, but I am running into problems.
How do you use find in such a way that you can use | xargs directly into p4 add? At the time that I made the files, I was not thinking about using version control, so the file names contain spaces, apostrophes, and parentheses that have to be escaped before being passed into p4 add.
How do you list all of the files in the default cl in such a way that they can be passed to xargs? Also, is there a way to revert all of the files in the default cl?
My client is set up correctly, and the files to add are listed correctly in the cl. My client is at /cygdrive/o/somefolder (substituting instead of using actual names). One of the files in my cl is at /cygdrive/o/somefolder/a/b.java. However, when it goes to submit, it tries to use /cygdrive/o/somefolder\a\b.java. What have I done wrong? Is there some setting somewhere for Windows setups?
Lots in there but for this question,
Also, is there a way to revert all of the files in the default cl?
p4 revert -c default //...
p4 revert -a -c default //...
This should help: the first form reverts all files open in the default changelist (the //... filespec is required), while adding -a reverts only the files that are unchanged or don't require integration.
If you have a mix and match of files, you should consider creating a pending changelist; it assigns a number to it, so you can separate work for different efforts into different changelists.
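On the find/xargs part of the question (not covered above): null-delimiting the file names lets them pass through unescaped, spaces, apostrophes, and parentheses included. A sketch using GNU find and xargs; the demo directory and file names are made up, and the p4 call itself is left as a comment:

```shell
# Create a file with an awkward name, then show it survives the pipeline.
mkdir -p demo
touch "demo/my file's (v2).java"
# -print0 / -0 delimit names with NUL bytes, so no shell escaping is needed:
find demo -type f -print0 | xargs -0 ls -l
# For Perforce, the same pattern would be:
#   find . -type f -name '*.java' -print0 | xargs -0 p4 add
```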
Let's say you're working on a big project with multiple files, directories, and subdirectories. In one of these directories/subdirectories/files, you've defined a method, but now you want to know exactly which files in your entire project have been calling your method. How do you do this?
You mentioned grep so I'll throw this solution out there. A more robust solution would be to implement a version control system as Fibbe suggested.
find . -exec grep 'method_name' {} \; -print 2> /dev/null
The idea is, for each file that is found in the current directory and sub-directories, a grep for 'method_name' is executed on that file. The 2> /dev/null is nice if you don't want to get warned about all of the directories and files you don't have access to.
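If your grep supports it (GNU grep does), -r gives the same recursive search in a single command, and -n adds line numbers to the matches. The project layout below is just for demonstration:

```shell
# Recursive grep: searches every file under proj, printing file:line:match.
mkdir -p proj/src
printf 'result = method_name(x)\n' > proj/src/caller.c
grep -rn 'method_name' proj
```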
The most common way to do this is with your editor. For example, Emacs can do this if you create a tag index with etags.
Source: http://www.gnu.org/software/emacs/emacs-lisp-intro/html_node/etags.html
Then you just type M-. followed by the name of the function you want to visit, and Emacs will take you there.
I don't know which system or editor you are using, but most editors have a similar function.
If you don't use Emacs, another good way to keep track of functions, and get lots of other good features, is to use a version control system like git, which provides really fast search.
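That fast search is git grep, which searches only the files git tracks. A small self-contained demo (the repository and file names are made up):

```shell
# Set up a throwaway repository with one tracked file.
mkdir -p demo-repo
cd demo-repo
git init -q
printf 'x = method_name()\n' > caller.py
git add caller.py
# -n prints the line number of each match; only tracked files are searched.
git grep -n 'method_name'
```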
If you don't use a version control system you may want to look at a program that is designed just for searching. Like OpenGrok.