Say a file is known to P4 as Foo.cpp
p4 files -e -m 100 //XXX/YYY/...foo.cpp*
(note the lower case f) can't find it. Has anyone overcome this issue?
EDIT: the place I am running the command from does not have a local p4 checkout, so going through the file system is not an option here.
If you are able, I would highly recommend going through the trouble (Linux dependency hell and all) of installing P4Search. It will give you the case-insensitive searching power you seek, and it will also save you hours and hours of time down the road.
http://www.perforce.com/perforce/r14.1/user/p4searchnotes.txt
You could use the Perforce Broker to rewrite the command, but this would only be practical if all of your files started with an upper case letter.
Some examples of using the broker to rewrite commands are here:
http://www.perforce.com/blog/120727/customising-perforce-using-p4broker-rewrite-command
The simplest long-term solution may be to move to a case-insensitive server, but note that this is not a trivial process.
If this is something you want to do, I strongly recommend you contact support@perforce.com for advice and more information.
You might also find this KB article helpful:
http://answers.perforce.com/articles/KB/3081
Hope this helps,
Jen.
Related
Is there a performance difference between find and find2perl? I work for a hosting company, and I was told that our admins prefer us to use find2perl over find, supposedly because find is heavier on resource usage than find2perl. Does anyone know if this is true, and if so, could you please explain why?
find2perl translates a find command line into a Perl script, presumably for later reuse (in Perl). The man page states that it is "typically faster than running find itself", and I guess that is why your sysadmins tell you to use it, or maybe they are simply Perl fans.
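For example, a rough sketch (the path, pattern, and age are only placeholders):
# translate a find invocation into a standalone Perl script
find2perl /var/tmp -name '*.log' -mtime +7 -print > find_old_logs.pl
# run it now, and re-run it later without invoking find again
perl find_old_logs.pl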
I used to use a program finddupe on Windows (XP) which checked for duplicate files and offered to replace by hardlinks.
This calculated a hash of the first 32 KB, only checking the rest of the file on a match. I have the source (for VC++ 6), but I was wondering if there is a Linux/OS X equivalent before I try to port it, although I suspect it may be better to write a new program in a higher-level language.
I've found fdupes to be helpful for me.
If you are looking to write your own quick script, I would suggest looping over files and using cmp, as it lets you stop the comparison at the first mismatched byte.
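A minimal sketch of that idea in bash (the directory is a placeholder, and a real tool would group candidates by size first rather than compare every pair):
#!/usr/bin/env bash
# naive duplicate finder: compare every unordered pair of files with cmp
shopt -s nullglob
files=( ./* )
for (( i = 0; i < ${#files[@]}; i++ )); do
  for (( j = i + 1; j < ${#files[@]}; j++ )); do
    [[ -f ${files[i]} && -f ${files[j]} ]] || continue
    # cmp -s is silent, exits 0 only for byte-identical files,
    # and stops reading at the first differing byte
    if cmp -s -- "${files[i]}" "${files[j]}"; then
      echo "duplicate: ${files[i]} == ${files[j]}"
    fi
  done
done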
There are many similar tools. See here
They may not be part of a standard distribution.
I have used fslint before and found it to be sufficient for my needs.
I have many files, including code. I need to search for functions as well as various keywords. Currently I am using grep -i -r 'keyword to search' on Linux, but I need the search to run faster.
I have also heard about the Boyer-Moore algorithm, which is supposed to be a fast string-search algorithm, but the results I obtained were not what I expected.
I am looking forward to hearing your comments and solutions.
You should try ack (homepage). It is designed as a fast search tool for code.
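A typical invocation, just as a sketch (the pattern and directory are placeholders):
# recursive, case-insensitive, whole-word search of a source tree
ack -i -w 'my_function' src/
ack is recursive by default and skips VCS directories and most non-source files, which is a big part of why it feels faster than plain grep -r.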
If you want to be able to search your code while taking language syntax into account (distinguishing between functions and other symbols), you should try Exuberant Ctags.
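A minimal sketch of the usual workflow with Vim (the function name is a placeholder):
# index the whole source tree (writes a 'tags' file)
ctags -R .
# open Vim directly at the definition of a function
vim -t my_function
# (inside a running Vim session, :tag my_function does the same)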
There are several other tools worth considering besides the popular ack. The first one that came to mind for me was The Silver Searcher.
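Its usage is much like ack; a sketch with a placeholder pattern and path:
# recursive, case-insensitive search; honours .gitignore by default
ag -i 'my_function' src/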
In my initial days of using Linux I usually had to search Google to find the command for doing a particular task. Once I have the command name, I can view its usage with man command-name.
Similarly, I was wondering whether there is some utility that, given a description of the task as an argument, tells me which command does it and opens the man page for that command.
e.g.:
findUtility "find all files in a directory"
output:
ls
find
I want to know whether a utility of that kind exists; if so, it would be very handy, especially for newbies. If not, I might implement it myself.
Thanks,
Not as nice as what you are asking for, but
apropos <keyword>
and
man -k <keyword>
can be very useful.
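For example, a rough attempt at the task in the question (the keywords are only guesses; results vary by system):
# search the short descriptions of the installed man pages
apropos directory | grep -i list
# man -k is the same thing under another name
man -k directory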
Parsing natural language is hard, because there are thousands of ways to rephrase one sentence. Google does it best as far as I know, so no, there is no such tool. There are handy and practical manuals that make it easy to find the right tool for the job. Also, there is a huge community behind coreutils (and Linux in general), so try both forums and IRC. Often the latter is the fastest, and people tend to parse natural language as expected :)
apropos will do something like you suggest.
I guess what you are looking for is the List of Unix utilities page on Wikipedia.
On Debian (and presumably derived systems) this is also useful:
sudo apt-cache search <keyword>
A few years ago NetBSD decided to rewrite its apropos. The new implementation does a full-text search, with results ranked in order of relevance. It comes close to what you have asked for. See the output here:
https://man-k.org/search?q=find+all+files+in+directory
I have a patch which I'd like to split into two patches. I need to split the patch with per-line granularity -- I can't just split the hunks up into two separate files.
I could use Emacs diff mode, but I'm a Vim user, and I don't want to learn Emacs. I'm managing this patch in Mercurial Queues, and I've been using the crecord plugin, but it's pretty cumbersome for large patches, and the UI is really slow.
Ideally, I'd like to use Vim for editing my patch, but I haven't been able to find a suitable plugin. Otherwise, anything other than Emacs which is better than crecord would be helpful to me. Does such a thing exist?
There seem to be two plausibly acceptable answers:
Is there some reason vimdiff isn't good enough? You could edit a copy of the original and a patched copy, moving the changes you want onto the original copy, save it, diff it against the true original to create the first patch, and diff the fully patched version against it to create the second patch. – Jefromi 2009-08-25 20:49
I've been using VCSVimdiff with Mercurial for a long time, it works very nicely. – tonfa 2009-12-24 13:55
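Jefromi's suggestion, spelled out as a rough shell sketch (file and patch names are placeholders, and it assumes a single-file unified diff):
cp foo.c foo.orig.c                # pristine copy of the original
cp foo.c foo.mid.c                 # will hold only the changes you want in the first patch
cp foo.c foo.full.c
patch foo.full.c < full.patch      # the fully patched version
vimdiff foo.mid.c foo.full.c       # pull over (or hand-edit) just the lines you want, then save
diff -u foo.orig.c foo.mid.c > part1.patch
diff -u foo.mid.c foo.full.c > part2.patch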
If someone up-votes this, it will move the question off the unanswered list; it's Community Wiki so there's no benefit to me.
If you add the Mercurial tag to this question, it might get seen by some Mercurial experts around...
For me, the ideal tool for manually splitting patches is git add -i.
You can also try filterdiff, but it depends on whether it allows the manipulations you want to do.
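A rough sketch of that git route, using git add -p (the patch-only shortcut for the interactive mode) and assuming you can apply the full patch in a throwaway git working tree (patch names are placeholders):
git apply full.patch              # apply the whole change to a clean tree
git add -p                        # stage only the hunks you want; 'e' edits a hunk line by line
git diff --cached > part1.patch   # the staged half
git diff > part2.patch            # whatever is left unstaged
git reset                         # unstage again when done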