Can TortoiseSVN provide a cross-repository view of user activity?

Is there a way I can see my commit history for a given time period across multiple repositories using TortoiseSVN? It would be nice to be able to see this, as it's a little cumbersome to get my complete commit history when I'm working in multiple repositories.

If you haven't ruled out the svn.exe command-line client, you could do:
svn log <path_to_repo> -r1:head -q | find "william_leara" >> c:\my_commits.txt
Do this for every repository, and my_commits.txt will contain your commits from all of them. If you don't have an obscene number of repositories, it's not a big deal. Further example:
:: dump my commits
svn log http://<server>/<path1> -r1:head -q | find "william_leara" >> c:\my_commits.txt
svn log http://<server>/<path2> -r1:head -q | find "william_leara" >> c:\my_commits.txt
svn log file:///c:/src/myrepo -r1:head -q | find "william_leara" >> c:\my_commits.txt
I think you get the idea. Of course you can edit the range as necessary, or write a batch file that accepts arguments to specify repository/range/user, whatever (see the sketch below).
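For example, a minimal batch-file sketch along those lines (the script name is a placeholder, and the output path stays hard-coded as in the answer):

@echo off
:: my_commits.cmd -- append one user's commits from one repository
:: usage: my_commits <repo-url-or-path> <start-rev> <end-rev> <username>
svn log %1 -r%2:%3 -q | find "%4" >> c:\my_commits.txt

Call it once per repository, e.g. my_commits http://<server>/<path1> 1 head william_leara.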

The only way to get something like a cross-repository view is via the Settings menu, then Log Caching->Cached Repositories. This lets you view repository statistics (actually, statistics about your local usage of the particular repository) under Details, and export the repository data as a set of files: [filename].changes.csv, [filename].merges.csv, [filename].paths.csv, [filename].revisions.csv, etc. The last is most probably the one you are interested in. It could easily be processed, for example with perl, to produce a commit history for a given period in the form you need.
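For example, a hypothetical perl one-liner in that spirit; the comma separator and the position of the timestamp column are assumptions, so check the header line of your own export and adjust the field index:

perl -F',' -ane 'print if $F[2] ge "2012-01-01" && $F[2] lt "2012-04-01"' myrepo.revisions.csv

Since ISO timestamps sort lexically, a plain string comparison is enough to select a date range.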

Related

How to identify/verify full integration points in the perforce history

In an analysis of the history in our perforce depot, I need to identify those changes where a full integration from another branch was performed and also find out the exact change number of the source of that integration. Unfortunately, even though such a full integration is a common operation, I could not find any easy and reliable way to detect it in the history. Please let me know if I missed something.
In any case: via a set of scripts using 'p4 filelog' and matching up revision numbers, I managed to find all candidate changes and their respective integration source change. What I am missing is a means to distinguish full integrations from cherry-picks or partial integrations limited to a subdirectory. For this, the closest I could find is the 'p4 interchanges' command, which does exactly the thing I need, except for the problem that the 'toFile' argument cannot have an '#' revision specification.
I would have hoped that
p4 interchanges //depot/sourcebranch#123400 //depot/targetbranch#123499
would tell me whether any changes were missing in the integration point I found, but it only gives the error 'A revision specification (# or @) cannot be used here.' - which matches the documentation.
Is there any other means to examine integration points in the p4 history to distinguish cherry-picks from full merges?
Use the undoc p4 integ -C changelist command:
p4 integrate -1 -2 -C changelist# -Rlou -Znnn
... The -C changelist# flag considers only integration history from changelists at or below the given number, allowing you to ignore credit from subsequent integrations. ...
Hence:
p4 integ -n -C 123499 //depot/sourcebranch/...#123400 //depot/targetbranch/...
should tell you whether sourcebranch#123400 was fully integrated into targetbranch#123499. If you use -C 123498, in theory the difference in the output will show you which files were integrated.
There are probably some edge cases around deleted files -- e.g. if you integrate a deleted file into a file that's deleted at the head revision, it will report "up to date" regardless of integration history, so I can imagine that a file that was skipped for that reason but then subsequently re-added might give a false positive with the above method. (Or it might not -- I have vague memories of possibly fixing that scenario, but undoc bug fixes don't show up in the relnotes...)
Here's an example where foo#2 was integrated into bar#3:
sams-mbp:test samwise$ p4 filelog ...
//stream/main/test/bar
... #2 change 4 delete on 2019/07/21 by samwise@samwise-dvcs-1517552832 (text) 'delete'
... #1 change 3 branch on 2019/07/21 by samwise@samwise-dvcs-1517552832 (text) 'branch'
... ... branch from //stream/main/test/foo#1
//stream/main/test/foo
... #2 change 6 edit on 2019/07/21 by samwise@samwise-dvcs-1517552832 (text) 'edit'
... #1 change 2 add on 2019/07/21 by samwise@samwise-dvcs-1517552832 (text) 'add'
... ... branch into //stream/main/test/bar#1
sams-mbp:test samwise$ p4 integ -n -C 2 foo#2 bar
//stream/main/test/bar#2 - branch/sync from //stream/main/test/foo#1
sams-mbp:test samwise$ p4 integ -n -C 3 foo#2 bar
foo#2 - all revision(s) already integrated.
With -C 2 (before the branch), we see a replay of the integration as it would have happened as of that point in time. With -C 3 (the changelist of the branch), we see "all revisions integrated" because by that point in time it had already happened.
After continuing to try out all combinations of arguments, I finally found a solution that works for me:
p4 interchanges -b SOURCE_to_TARGET //depot/targetbranch/...#targetchange
will print out all changes on the source branch that are missing on targetbranch#targetchange. It will return only changes older than targetchange. If this list contains no changes older than sourcechange, the targetchange was a full integration.
The command may take considerable time to complete when the returned list is long. Unfortunately, I can't find a way to truncate this search, but I can live with that.
It seems some of this functionality was buggy up to server version 2018.2, which might have caused the difficulties in my earlier attempts.
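For example, a rough shell wrapper around that check (the branch spec name, change numbers and depot path are placeholders, and the output is assumed to follow the usual "Change N on ..." format of p4 changes):

SOURCE_CHANGE=123400
TARGET_CHANGE=123499
# list changes still missing on the target, then look for any at or below the source change
if p4 interchanges -b SOURCE_to_TARGET "//depot/targetbranch/...#$TARGET_CHANGE" |
   awk -v src="$SOURCE_CHANGE" '$1 == "Change" && ($2+0) <= (src+0) { found=1 } END { exit !found }'
then
    echo "partial: changes at or below $SOURCE_CHANGE are still missing"
else
    echo "full integration up to $SOURCE_CHANGE"
fi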

Sync two VOBs' files (by clearfsimport) without checking in the updated files

I am using the following command to sync the B VOB's files from the A VOB:
clearfsimport -master -follow -nsetevent -comment $2 /vobs/A/xxx/*.h /vobs/B/xxx/
It works fine, but it checks in all the changes automatically. Is there a way to do the same task but leave the updated files in a checked-out state?
I want to update the files in B from A, build my program, and then recover the branch. If the updated files were left checked out, I could do an uncheckout (unco) later. But with the command above everything is checked in, so I can't recover my branch.
Thanks.
As VonC said, it's impossible to prevent clearfsimport from doing the check-in, and he suggested using a label to recover afterwards.
In my case, the branch where I did the clearfsimport was branched from a label; let's call it LABEL_01. So I guess I can use that label for recovery. Is there an easy way (one command) to recover the files under /vobs/B/xxx/ to label LABEL_01? I want to do it in my bash script, so the fewer and simpler the commands, the better.
Thanks.
After having a look at the man page for clearfsimport, no, it isn't possible to prevent the checkins.
I would set a label before the clearfsimport, and modify the config spec so that the new versions are created in a branch (similar to the config spec sketched below).
That way, recovering the initial branch is easy: none of the new versions will have been created in it.
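For example, a minimal sketch of such a config spec, assuming the imported versions should go to a branch named import_branch (a hypothetical name) starting from the label LABEL_01:

element * CHECKEDOUT
element * .../import_branch/LATEST
element * LABEL_01 -mkbranch import_branch
element * /main/LATEST -mkbranch import_branch

With this config spec in the view used for the import, clearfsimport creates all new versions on import_branch, leaving the versions labeled LABEL_01 untouched on their original branch.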

Layering projects on top of each other with git

Let there be:
There are different repositories repoA, repoB and repoC, each respecting the same directory layout principles, which are to be merged into a third repoM's working directory (the "master" project).
repoM has an atypical setup (--work-tree and --git-dir are separate). repo[A-C] are cloned as bare, and they are set to core.bare = false and core.worktree=<work-dir-of-repoM>.
The requirements:
I need to always have an overview of the history of all files in repoM's work-dir which could have stemmed from repo[A-C]. With this approach, I lose all that information.
Alternative:
I've been thinking about using git-subtree instead (git version 1.7.11.2, so it's already built-in), leaving repo[A-C] bare, and then
git pull -s subtree, or
git subtree ...
With the subtree pull strategy, I lose the history on a merge conflict (git blame says so).
I've never used subtree before, but from my understanding it's not possible to merge files from repo[A-C] into the top level of repoM's work-dir; each repo's files must be put into a subdirectory. This is definitely not what I need. Why? Because of the following ...
Problem statement:
You have different git repositories, each containing different sets of files, usually configuration files and some shell scripts. You want to put everything from all those repositories into the $HOME directory (which is <work-dir-of-repoM>). You should be able to see at all times where each file comes from, and to edit, commit and push changes to each one's origin. You've guessed it: it's something like vundle, but generalized for any kind of configuration of any program, not just vim bundles. If a conflict occurs, one should be able to track down which two authors of the same file need to get in touch with each other and make a deal (if one needs to be made).
This is for an open-source project for which I'm trying to get a prototype working, so any help is highly appreciated. Ideas about already existing projects which do this in a similar manner are also highly appreciated.
Note: the "master directory" does not necessarily have to be $HOME, I've used it as a possible hint on the kind of problem this could solve.
Why not simply use Git Submodules in your "master project"?
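A minimal sketch of the submodule approach (URLs and directory names are placeholders):

git clone <url-of-repoM> master && cd master
git submodule add <url-of-repoA> repoA
git submodule add <url-of-repoB> repoB
git submodule add <url-of-repoC> repoC
git commit -m "Track repoA, repoB and repoC as submodules"

# on a fresh clone, fetch the submodules' contents:
git submodule update --init --recursive

Each submodule keeps its own history and origin, so you can edit, commit and push from inside it; note, however, that each one lives in its own subdirectory rather than being merged into repoM's top-level work-dir.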

How to let git check for updates on the master server?

I have very poor knowledge about git and would like to ask for help.
I have a Linux(-only) application which shall only be "downloaded" (i.e. cloned) with git. On startup, the app shall ask the git "master server" (GitHub) whether there are updates.
Does git offer a command to check whether there is an update (without really updating, only checking)? Furthermore, can my app read the return value of that command?
If you do not want to merge, you can just fetch: git fetch yourremote yourbranch, the remote/branch pair usually being origin master. You could then parse the output of the command to see if new commits are actually present. You can refer to the latest fetched commit as either yourremote/yourbranch or possibly by the symref FETCH_HEAD.
Note: I was reminded that FETCH_HEAD refers to the last branch that was fetched, so in general you cannot rely on git fetch yourremote together with FETCH_HEAD: the former fetches all tracked branches, so the latter may not refer to yourbranch. Additionally:
- you end up fetching more than strictly necessary;
- see also Jefromi's answer on how to view changes without actually downloading them;
- the following are not necessarily the most compact formats, just readable examples.
That being said, here are some options for checking for updates of a remote branch, which we will denote with yourremote/yourbranch:
0. Handling errors in the following operations:
0.1 If you attempt to git fetch yourremote, and git gives you an error like
conq: repository does not exist.
that probably means you don't have that remote-string defined. Check your defined remote-strings with git remote --verbose, then git remote add yourremote yourremoteURI as needed.
0.2 If git gives you an error like
fatal: ambiguous argument 'yourremote/yourbranch': unknown revision or path not in the working tree.
that probably means you don't have yourremote/yourbranch locally. I'll leave it to someone more knowledgeable to explain what it means to have something remote locally :-) but will say here only that you should be able to fix that error with
git fetch yourremote
after which you should be able to repeat your desired command successfully. (Provided you have defined git remote yourremote correctly: see previous item.)
1. If you need detailed information, run git show yourremote/yourbranch and compare it to the current git show yourbranch.
2. If you only want to see the differences, use git diff yourbranch yourremote/yourbranch.
3. If you prefer to compare hashes only, compare git rev-parse yourremote/yourbranch to git rev-parse yourbranch (see the sketch after this list).
4. If you want to use the log to backtrack what happened, you can do something like git log --pretty=oneline yourremote/yourbranch...yourbranch (note the use of three dots).
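A minimal sketch of option 3 as a shell script (yourremote and yourbranch are placeholders as above); the application can read the exit status:

#!/bin/sh
# exits 0 if the local branch is up to date, 1 if the hashes differ
# note: any divergence (including local-only commits) counts as "not up to date" here
git fetch yourremote || exit 2   # network/remote failure
local_rev=$(git rev-parse yourbranch)
remote_rev=$(git rev-parse yourremote/yourbranch)
if [ "$local_rev" = "$remote_rev" ]; then
    echo "up to date"
    exit 0
else
    echo "update available"
    exit 1
fi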
If you really don't want to actually use bandwidth and fetch new commits, but just check whether there is anything to fetch, you can use:
git fetch --dry-run [remote]
where [remote] defaults to origin. You'll have to parse the output, though, which looks something like this:
From git://git.kernel.org/pub/scm/git/git
2e49dab..7f41b6b master -> origin/master
so it's really much easier to just fetch everything (git fetch [remote]), and then look at the diff/log e.g. between master and [remote]/master.
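A rough sketch of that check (note that git fetch writes this summary to stderr, hence the redirect; empty output is assumed to mean nothing new):

if [ -n "$(git fetch --dry-run 2>&1)" ]; then
    echo "updates available"
else
    echo "nothing to fetch"
fi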
I'd say git fetch is a potential solution. It only updates the remote-tracking branches, not your working code. In cases of large commit sets, this would involve a download of compressed files/info, so it may be more than you want, but it is the most useful download you can do.

svn clear projects with all revisions

Is there a command to delete a project from SVN with all its revisions (total cleanup)?
cheers
The answer is in the Subversion FAQ:
There are special cases where you might want to destroy all evidence of a file or commit. (Perhaps somebody accidentally committed a confidential document.) This isn't so easy, because Subversion is deliberately designed to never lose information. Revisions are immutable trees which build upon one another. Removing a revision from history would cause a domino effect, creating chaos in all subsequent revisions and possibly invalidating all working copies.
The project has plans, however, to someday implement an svnadmin obliterate command which would accomplish the task of permanently deleting information. (See issue 516.)
In the meantime, your only recourse is to svnadmin dump your repository, then pipe the dumpfile through svndumpfilter (excluding the bad path) into an svnadmin load command. See chapter 5 of the Subversion book for details about this.
No, I don't believe there is.
If you really need to remove files completely from SVN history, I think the only way to do it is something like dumping the repository, filtering out the files you don't want with svndumpfilter, and then recreating the repository from the dump.
Why do you want to do this?
rm -rf on the repository usually works fine.
This is how to do it on Linux:

svnadmin dump /path/to/repos > proj.dump
cat proj.dump | svndumpfilter exclude somefolder > cleanproj.dump
service svn stop
# back up /path/to/repos/conf and /path/to/repos/hooks (all custom configuration for this repository)
rm -rf /path/to/repos
svnadmin create /path/to/repos
# restore the conf and hooks directories backed up above
svnadmin load /path/to/repos < cleanproj.dump
service svn start
done
I'm assuming that you are talking about multiple projects under the same repository:
myrepo/
project1/
project2/
If you want a project to 'disappear' without screwing with the repository history, you can simply hide the path, provided you are using an authentication mechanism that utilizes authz (in other words, you are not using svn+ssh to access the repository).
Let's say I already have a group in my authz called 'everyone'. Then in my authz I will set something like:
[/project1]
@everyone =
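For completeness, a hypothetical authz file putting it together (the user names and the default access rule are assumptions):

[groups]
everyone = alice, bob, carol

[/]
@everyone = rw

[/project1]
@everyone =

Note that in authz files a group is referenced with a leading @; a leading # would start a comment. With an empty right-hand side, the group has no access to /project1, so that path simply disappears from their view of the repository.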
