Is there a command to delete a project from SVN with all its revisions (total cleanup)?
cheers
The answer is in the Subversion FAQ:
There are special cases where you might want to destroy all evidence of a file or commit. (Perhaps somebody accidentally committed a confidential document.) This isn't so easy, because Subversion is deliberately designed to never lose information. Revisions are immutable trees which build upon one another. Removing a revision from history would cause a domino effect, creating chaos in all subsequent revisions and possibly invalidating all working copies.
The project has plans, however, to someday implement an svnadmin obliterate command which would accomplish the task of permanently deleting information. (See issue 516.) In the meantime, your only recourse is to svnadmin dump your repository, then pipe the dumpfile through svndumpfilter (excluding the bad path) into an svnadmin load command. See chapter 5 of the Subversion book for details about this.
No, I don't believe there is.
If you really need to remove files completely from SVN history, I think the only way to do it is to dump the repository, filter out the files you don't want with svndumpfilter, and then re-create the repository from the dump.
Why do you want to do this?
rm -rf on the repository usually works fine.
This is how to do it on Linux:
svnadmin dump /path/to/repos > proj.dump
svndumpfilter exclude somefolder < proj.dump > cleanproj.dump
service svn stop
mkdir -p /tmp/repos-backup
cp -a /path/to/repos/conf /path/to/repos/hooks /tmp/repos-backup/   # back up all custom configuration and hooks for this repository
rm -rf /path/to/repos
svnadmin create /path/to/repos
cp -a /tmp/repos-backup/conf /tmp/repos-backup/hooks /path/to/repos/   # restore the saved configuration and hooks
svnadmin load /path/to/repos < cleanproj.dump
service svn start
done
I'm assuming that you are talking about multiple projects under the same repository:
myrepo/
project1/
project2/
If you simply want a project to 'disappear' without touching the repository history, you can hide the path, provided you are using an authentication mechanism that utilizes authz. In other words, you are not using 'svn+ssh' to access the repository.
Let's say I already have a group in my authz called 'everyone'. Then in my authz I will set something like:
[/project1]
@everyone =
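For completeness, a fuller sketch of what the authz file could look like; the user names and the blanket read-write rule at the root are placeholders:
[groups]
everyone = alice, bob, carol
[/]
@everyone = rw
[/project1]
@everyone =
Because the most specific matching path rule wins, members of the group keep their access everywhere except /project1, which simply stops showing up for them.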
The part I'm working on is kernel-devsrc, which lives under recipes-kernel.
I want to change one of the source .c files in drivers/usb/serial in kernel-devsrc. From some of the online materials, I need to:
Have my own layer
In the layer, have a directory with the same name as recipes-kernel (and furthermore, recipes-kernel/linux)
Add the .bbappend file and patch file.
The problem is: to create a patch file I need to know the two git SHAs from before and after the change, but I don't have access to the third-party recipes-kernel, so how do I get the SHA?
OR, if that is the wrong way to do this, could you point out the right way to do it? Thanks!
NOTE: This problem is not like this one: How patching works in yocto, where the author has access to the source code (.c and .h files). I DON'T have access to the source code; the Yocto kernel I'm working on is from a public git repo, and I am not able to git commit to get the SHA, which is necessary to create the patch file.
So, the way I do it is to use Quilt; follow the steps there and you're good to go:
https://www.yoctoproject.org/docs/1.8/dev-manual/dev-manual.html#using-a-quilt-workflow
I don't need to know the SHA (though I still don't know why others in my organization end up writing SHAs in the patch files, or how they knew the SHAs).
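For reference, a rough sketch of the Quilt steps from that guide, run inside the unpacked kernel source tree; the patch name and the file being edited are just placeholders:
quilt new fix-usb-serial.patch          # start a new patch
quilt add drivers/usb/serial/option.c   # register the file before editing it
# edit drivers/usb/serial/option.c as needed
quilt refresh                           # write the diff into patches/fix-usb-serial.patch
The generated file under patches/ is the patch you then carry in your own layer and reference from a .bbappend via SRC_URI, without ever needing a git SHA.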
The power of Yocto is precisely that it makes it relatively straightforward to patch any existing recipes, without requiring write access to the upstream project source code or Yocto layer.
As a prerequisite, the project needs to have its own layer to track the patches. Then, the easiest way is to use devtool. The general idea is to:
Create a local sandbox to patch the project: devtool modify RECIPE_NAME (use the name of the target recipe here). This command will create a temporary workspace and print the path to this workspace.
Move to the temporary workspace, apply the needed patches and commit them one by one.
Once all the desired patches have been applied, use devtool finish RECIPE_NAME CUSTOM_LAYER_NAME to save the changes as clean patch files in a .bbappend in the custom layer.
Under the hood, devtool modify initializes a (writable) git repository in the sandbox. When devtool finish is invoked, devtool checks the list of extra commits and saves them as patch files in a .bbappend in the target layer.
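A minimal sketch of that workflow, assuming the target recipe is linux-yocto and the custom layer is meta-custom (both placeholders; substitute the real recipe and layer names):
devtool modify linux-yocto                 # creates a temporary workspace and prints its path
cd workspace/sources/linux-yocto           # the path printed by the previous command
# edit drivers/usb/serial/<file>.c as needed
git add drivers/usb/serial/<file>.c
git commit -s -m "usb: serial: fix ..."
devtool finish linux-yocto meta-custom     # saves the commits as patch files plus a .bbappend in meta-custom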
I have an SVN repository with many tags.
Due to capacity issues, I would like to create a new SVN repository with just the 10 latest tags.
I know I can manually check out the latest 10 tags and commit them to the newly created repository.
But isn't there an easier way than this?
Tags live in a separate tree in SVN, so with your approach you would lose all the intermediate changes between tags (because they happened in trunk or a branch).
A more bulletproof and safe way is to truncate the repository history, starting from the 10th tag back from HEAD.
In this case you would:
Find the revision at the starting point of the dump
Dump that revision range to a dump file with svnadmin dump (use the svnadmin dump ... -r FROM:HEAD form)
Load the dump from the above into the new repository, using svnadmin load, as sketched below
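A minimal sketch, assuming the 10th tag back from HEAD was created in revision 5000 (a placeholder; look it up with svn log on the tags directory) and the repository lives at /path/to/repos:
svnadmin dump /path/to/repos -r 5000:HEAD > recent.dump
svnadmin create /path/to/newrepos
svnadmin load /path/to/newrepos < recent.dump
Note that the loaded revisions are renumbered starting from 1 in the new repository, and the first revision in the dumped range is written out as a full snapshot rather than a delta.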
I am using the following command to sync B VOB files from the A VOB:
clearfsimport -master -follow -nsetevent -comment $2 /vobs/A/xxx/*.h /vobs/B/xxx/
It works fine. But it checks in all the changes automatically. Is there a way to do the same task but leave the updated files in a checked-out state?
I want to update the files in B from A, build my program, and then recover the branch. If the updated files were in a checked-out state, I could do an unco later. But with my command above, everything is checked in, so I can't recover my branch that way.
Thanks.
As VonC said, it's impossible to prevent clearfsimport from doing the check-in. He suggested using a label to recover afterwards.
For me, the branch where I did the clearfsimport is branched from a label. Let's call it LABEL_01. So I guess I can use that label for recovery. Is there an easy way (one command) to recover the files under /vobs/B/xxx/ to the label LABEL_01? I want to do it in my bash script, so the simpler the command, the better.
Thanks.
After having a look at the man page for clearfsimport: no, it isn't possible to prevent the check-ins.
I would set a label before the clearfsimport, and modify the config spec so that the new versions are created in a branch (similar to this config spec).
That way, recovering the initial branch would be easy: none of the new versions would have been created in it.
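A rough sketch of what that could look like; LABEL_01 comes from the question, while the branch type import_branch and the config spec lines are illustrative assumptions:
cleartool mklbtype -nc LABEL_01                  # create the label type (once per VOB)
cleartool mklabel -recurse LABEL_01 /vobs/B/xxx  # label the current versions before importing
cleartool mkbrtype -nc import_branch             # branch type for the imported versions
Then a config spec along these lines, so that clearfsimport creates its new versions on import_branch and leaves the original branch untouched:
element * CHECKEDOUT
element * .../import_branch/LATEST
element * LABEL_01 -mkbranch import_branch
element * /main/LATEST -mkbranch import_branch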
Let there be:
There are different repositories repoA, repoB and repoC, each respecting the same directory-layout principles, which are to be merged onto a third repository repoM's working directory (the "master" project).
repoM has an atypical setup (--work-dir and --git-dir are separate). repo[A-C] are cloned as bare, and they are set with core.bare = false and core.worktree=<--work-dir-of-repoM>.
The requirements:
I need to always have an overview of the history of all files in repoM's work-dir which could have stemmed from repo[A-C]. With this approach, I lose all that information.
Alternative:
I've been thinking about using git-subtree instead (git version 1.7.11.2, so it's already built-in), leaving repo[A-C] bare, and then
git pull -s subtree, or
git subtree ...
With the subtree pull strategy, I lose the history on a merge conflict (git blame says so).
I've never used subtree before, but from my understanding it's not possible to merge files from repo[A-C] into repoM's work-dir; those files must be put into a subdirectory of repo[A-C]. This is definitely not what I need. Why? Because of the following ...
Problem statement:
You have different git repositories, each containing different sets of files, usually configuration files and some shell scripts. You want to put everything from all those repositories into the $HOME directory (which is <--work-dir-of-repoM>). You should be able to see at all times where each file comes from, and edit, commit and push changes to each one's origin. You've guessed it, it's something like vundle, but generalized for any kind of configuration of any program, not just vim bundles. If a conflict occurs, one should be able to track down which two authors of the same file need to get in touch with each other and work out a deal (if one needs to be made).
This is for an open-source project for which I'm trying to get a prototype working, so any help is highly appreciated. Ideas about existing projects which do this in a similar manner are also highly appreciated.
Note: the "master directory" does not necessarily have to be $HOME, I've used it as a possible hint on the kind of problem this could solve.
Why not simply use Git Submodules in your "master project"?
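If submodules fit, a minimal sketch of how repoM could track the other repositories; the URLs are placeholders, and note that each submodule lives in its own subdirectory of the superproject rather than being merged into its root:
git submodule add <url-of-repoA> repoA
git submodule add <url-of-repoB> repoB
git submodule add <url-of-repoC> repoC
git commit -m "Track repoA, repoB and repoC as submodules"
git submodule update --init --recursive   # needed once after a fresh clone of repoM
Each submodule keeps its own full history and origin, so you can commit and push changes back to repoA/B/C individually, which covers the "see where each file comes from" requirement at directory granularity.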
Is there a way I can see my commit history for a given time period across multiple repositories using TortoiseSVN? It would be nice to be able to see this, and it's a little cumbersome to get my complete commit history if I'm working in multiple repositories.
If you're not going to rule out the svn.exe client, you could do:
svn log <path_to_repo> -r1:head -q | find "william_leara" >> c:\my_commits.txt
Do this for every repository, and "my_commits.txt" will contain your commits from every repository. If you don't have an obscene number of repositories, it's not a big deal. Further example:
:: dump my commits
svn log http://<server>/<path1> -r1:head -q | find "william_leara" >> c:\my_commits.txt
svn log http://<server>/<path2> -r1:head -q | find "william_leara" >> c:\my_commits.txt
svn log file:///c:/src/myrepo -r1:head -q | find "william_leara" >> c:\my_commits.txt
. . . I think you get the idea. Of course you can edit the range as necessary, or write a batch file that accepts arguments to specify repository/range/user, whatever.
The only way to get something like a cross-repository view is to use the Settings menu and then Log Caching -> Cached Repositories. This lets you view Details (statistics which actually relate to your local usage of the particular repository) and export the repository data as a set of files: [filename].changes.csv, [filename].merges.csv, [filename].paths.csv, [filename].revisions.csv, etc. The last one is most likely the one you are interested in. It could easily be processed, for example with Perl, to produce a commit history for a given period in the form you need.
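As a very rough sketch of that post-processing idea, assuming the exported [filename].revisions.csv has one line per revision with the commit date in the third column (the exact column layout may differ between TortoiseSVN versions), a Perl one-liner could filter a date range like this:
perl -F, -ane 'print if $F[2] ge "2012-01-01" and $F[2] lt "2012-02-01"' myrepo.revisions.csv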