TL;DR: My version of the SVN repo differs from my teammates', even though the repo UUID is identical and we are using the same branch (exactly the same repo URLs).
My SVN was working fine before I went home for vacation.
I came back, updated my repo, and committed some changes. Everything seemed to work fine, but it turns out that my team cannot see my commits, and when I update, my commit still shows up as the latest one. Looking through their clients, the revision numbers I have checked in collide with other commits.
I'm using Ubuntu 18.04.1 LTS with
svn, version 1.9.7 (r1800392)
I tried removing the working copy and checking it out again (removing the .svn dir as well), and when I do, my latest commit is HEAD (instead of the real HEAD, which has a much higher revision number).
When I browse the repo from the web browser, my commit is HEAD.
When they browse the very same link, their commit is HEAD.
Restarting the computer does nothing. Reinstalling Subversion (removing ~/.subversion) did nothing.
We tried checking out the repo using a different user - still my commit is HEAD.
If I use my account on a different machine, it works fine.
The repo-UUID is the same for me and my colleagues.
I'm thinking this might be some kind of cache issue, but what cache would be used by both svn and the browser?
Additional thoughts:
I am also using a VPN to access the corporate network. Could there be a cache there? But the SVN traffic uses TLS - what could possibly cache TLS data?
Simion pointed out that it might be a good idea to make sure the hostname of the repo resolves to the same IP.
Turns out that was the problem. The CM-Center had moved the repo to another server (and for unknown reasons kept a copy of the repo on the old server, which caused all this confusion), and the Infra department had changed the IP of the repo hostname.
Flushing my local DNS cache fixed the problem!
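For anyone who hits the same thing, checking and flushing is quick. This is a sketch assuming Ubuntu 18.04's default systemd-resolved setup, with a placeholder hostname:

# See which IP the repo hostname currently resolves to (placeholder hostname):
dig +short svn.example.com

# Flush the local DNS cache (systemd-resolved on Ubuntu 18.04):
sudo systemd-resolve --flush-caches

On newer systems the equivalent is resolvectl flush-caches.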
Related
I have a Cloudflare page that uses python-sphinx to build docs. For some of my commits, it downloads a different version of sphinx than for others and fails to build the docs correctly.
What I tried:
Adding a dummy commit on top of a failing build seems to fix the issue and force the Cloudflare builder to download the correct sphinx version
re-running deployments doesn't fix the issue
creating a new branch with the same head (the failing commit) and running another deployment doesn't fix the issue
changing between preview/production deployments has no impact on this issue
Here is a dummy commit I added to make the docs build correctly
Commit one result vs. Commit two result
The diff of the deployment logs for the two commits: the left is Commit one (not working), and the right, Commit two, correctly builds all three tasks and the releases.
https://www.diffchecker.com/ZpV8vE9D
I have tried making different branches and re-running deployments to check whether the sphinx version changes, but it seems to be bound to the "old commit". This is also an issue for other Cloudflare Pages, and using preview/production deployments has no impact on this problem.
The issue in this case was actually not with the sphinx version but with the fact that I was using:
git fetch --all
which does not guarantee pulling the tags with it.
The --all flag pulls from all remotes, not "everything" as I thought.
Using git fetch --tags instead fixed the issue.
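For reference, a minimal comparison of the two invocations (behaviour as described in the git-fetch documentation):

# Fetches from every configured remote; by default, only tags that point
# into the fetched branch history come along:
git fetch --all

# Explicitly fetches all tags from the remote:
git fetch --tags

# The two flags can also be combined:
git fetch --all --tags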
Indeed, it seems to be a bug. In this case I recommend consulting directly with Cloudflare support; sometimes there are errors that remain internal to your account and they can clear them for you. Cloudflare Pages is still improving its system, and there are details left to be corrected.
I'm currently in the process of moving a gitolite (3) installation between two
servers. Thankfully, this process is pretty well
documented on the main
project website. However, my repositories make pretty active use of
git-annex, which stores data in various
remotes as well as on the server itself.
Now, I'm not an expert on git-annex, but I know it works a bit differently from
"regular" git, so is there anything one should keep in mind when moving this
kind of installation or does it work just as outlined in the gitolite
documentation above?
After quite a bit of research, I couldn't find any details on how this should
be done on a git-annex-enabled repository, so I decided to simply try it
out. Apparently, the steps as they are written work just fine, even for
git-annex content. That said, be cautious as you're moving things: once the new
server is ready to take over, make sure the old one is disabled, as I don't
think git-annex likes to find two identical remotes.
As a minor anecdote: I accidentally forgot to chown/chmod the repositories, but
re-running step 6 and onwards fixed that without any issues whatsoever.
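For reference, the ownership fix amounts to something like the following; the user, group, and path are assumptions based on a default gitolite install, so adjust them for yours:

# Give the gitolite hosting user ownership of all repositories again:
sudo chown -R git:git /home/git/repositories

# Make sure the hosting user can read and write everything
# (capital X keeps directories traversable without making files executable):
sudo chmod -R u+rwX /home/git/repositories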
I started running GitLab CE inside of an x86 Debian VM locally about two years ago, and last year I decided to migrate the GitLab CE instance to a dedicated Intel NUC server. Everything appeared to go well with no issues, and my GitLab CE instance is up-to-date as of today (running 13.4.2).
I discovered recently though, that some repos that were moved give a "NO REPOSITORY!" error when visiting their project pages, and if they had any issue boards, merge requests, etc, that these were also gone. But you wouldn't suspect it since the broken repos appear in the repo lists along with working repos that I use all the time.
If I had to guess at what the broken repos have in common, it's that their last activity was over a year ago: either no pushes were ever made to them beyond the initial one, or any changes, issues, or merge requests date back literally more than a year.
Some of these broken repos are rather large with a lot of history, whereas others are super tiny (literally just tracking changes to a shell script), so I don't think repo size itself has anything to do with it.
If I run the GitLab diagnostic check sudo gitlab-rake gitlab:check, everything looks good except for "hashed storage":
All projects are in hashed storage? ... no
Try fixing it:
Please migrate all projects to hashed storage
But then running sudo gitlab-rake gitlab:storage:migrate_to_hashed doesn't appear to complete (with something like six failed jobs in the dashboard), and running the "gitlab:check" again still indicates this "hashed storage" problem. I've also tried running sudo gitlab-rake gitlab:git:fsck and sudo gitlab-rake cache:clear but these commands don't seem to make a difference.
Luckily I have the latest versions of all the missing repos on my machine, and in fact, I still have the original VM running GitLab CE 12.8.5 (with slightly out-of-date copies of the repos).
So my questions are:
Is it possible to "repair" the broken repos on my current instance? I suspect I could just "re-push" my local copies of these repos back up to my server, but I really don't want to lose any metadata like issues / merge requests and such.
Is there any way to resolve the "not all projects are in hashed storage" issue? (Again the migrate_to_hashed task fails to complete.)
Would I be able to do something like "backup", "inspect / tweak backup", "restore backup" kind of thing to fix the broken repos, or at least the metadata?
Thanks in advance.
Okay, so I think I figured out what happened.
I found this thread on the GitLab User Forums.
Apparently the scenario here is:
Have a GitLab instance that has repos not in "hashed storage"
Backup your repo
Restore your repo (either to the same server or migrating to another server)
Either automatically or manually, attempt to update your repos to "hashed storage"
You'll find that any repo with a CI runner (continuous integration runner) is now listed as "NO REPOSITORY!" and is completely unavailable, since the "hashed storage" migration process fails for it
The fix is to:
Reset runner registration tokens as listed in this article in the GitLab documentation
Re-run the sudo gitlab-rake gitlab:storage:migrate_to_hashed process
Once the background jobs are completed, run sudo gitlab-rake gitlab:check to ensure the output contains the message:
All projects are in hashed storage? ... yes
If successful, the projects that stated "NO REPOSITORY!" should now be fully restored.
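Condensed into commands, the recovery sequence looks like this (the token reset itself follows the linked documentation):

# 1. Reset the runner registration tokens per the linked GitLab docs.

# 2. Re-run the hashed-storage migration:
sudo gitlab-rake gitlab:storage:migrate_to_hashed

# 3. Once the background jobs have drained, verify:
sudo gitlab-rake gitlab:check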
A key indicator that you need to run this process is if you:
Log in to your GitLab CE instance as an admin
Go to the Admin Area
Look under Monitoring->Background Jobs->Dead
and see a job with the name
hashed_storage:hashed_storage_project_migrate
with the error
OpenSSL::Cipher::CipherError:
I am facing a problem with TortoiseSVN (my client version is 1.6.16 and the SVN server version is 1.4.6.28521).
The projectA project has the classical architecture, with three folders: trunk, branches, and tags.
I have read and write access to all projectA folders (tags, branches, and trunk).
While working in the trunk there is no issue; everything works fine. The only problem is that when release time (or branching time) comes and I want to create a tag (or a branch), I want to use the TortoiseSVN "branch/tag" dialog. I choose the origin from the trunk, or the revision on the trunk I need, and set the "To URL" to something like "http://..../projectA/tags/v2.0".
After clicking "OK", it tells me that access to "http://...../projectA/" is forbidden.
The only solution right now is to check out the "projectA/tags" folder to a local folder. Then, in this "projectA/tags" folder, I create a new folder with the name of the tag I want to create, and I am able to commit it without any problem.
I don't want to manually create the tag/branch folder like this, and would rather use the "branch/tag" feature of TortoiseSVN.
Does anyone have an idea about this issue?
There is a recommendation for Subversion (at least on Windows with TortoiseSVN) to use the same major version as the server. You are allowed to ignore that recommendation, and most of the time it does not hurt, but here you may have a case where it makes a difference. You should at least check whether
the server could be upgraded to 1.6.x XOR
the client (your installation) could be downgraded to 1.4.x
However, in the second case, your client will no longer work with your existing checkout directories. Branching has changed a lot between 1.4.x and 1.6.x, so you will face a hard time if you have to use a 1.6.x client with a 1.4.x server.
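As a side note: the same operation the dialog performs can be run with the command-line client, which at least helps isolate whether the client or the server is refusing the copy. The URLs below are placeholders:

# Server-side copy from trunk to a new tag; -m supplies the log message:
svn copy http://server/svn/projectA/trunk http://server/svn/projectA/tags/v2.0 -m "Create tag v2.0 from trunk"

If this also fails with "forbidden", the restriction is on the server side rather than in TortoiseSVN.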
I just logged onto http://www.ezsvn.com, which hosts my SVN repository. I have been paying monthly for hundreds of commits.
They're shutting down, and their support is nonexistent.
Can I get a backup of my repository from my machine? I’m using Windows.
If you have shell access:
http://wiki.archlinux.org/index.php/Subversion_backup_and_restore
If you don't have shell access (look at both the original answer and also the comments re: svnsync):
http://moelhave.dk/2006/07/remote-mirroring-a-subversion-svn-repository/
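Since you're on Windows without shell access, the svnsync route from the second link boils down to roughly this; the URLs and local paths are placeholders, and you need the Subversion command-line tools installed:

# Create an empty local mirror repository:
svnadmin create C:\backup\mirror

# svnsync needs a pre-revprop-change hook that exits successfully; on Windows,
# create C:\backup\mirror\hooks\pre-revprop-change.bat containing just "exit 0".

# Point the mirror at the hosted repository and pull down every revision:
svnsync init file:///C:/backup/mirror http://ezsvn.example.com/myrepo
svnsync sync file:///C:/backup/mirror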
If you have access to run svnadmin on their server, it'll be no problem, and I see Dav has already linked to instructions for that.
Now, if you don't have access to run svnadmin, as far as I know it's not possible to use the SVN client itself (maybe TortoiseSVN in your case) to copy the entire repository. (EDIT: never mind, I guess that was wrong. I'll leave the git info here just for the fun of it, though.)

But you can convert a whole Subversion repository to git, and here are instructions for doing that: http://pauldowman.com/2008/07/26/how-to-convert-from-subversion-to-git/ From there, you might be able to convert the git repository back into an SVN repository on another server.

I know it's not really the answer you were looking for, but if nothing else works, it will at least let you save your project's history in some form. (And hey, you could take it as an excuse to switch to distributed version control, which is all the rage these days.)
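If you do go the git conversion route, the core of it is a single command; the URL is a placeholder and git-svn must be installed:

# Pull the full SVN history into a local git repository;
# --stdlayout assumes the conventional trunk/branches/tags structure:
git svn clone http://ezsvn.example.com/myrepo --stdlayout myrepo-git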
If you really want/need the full history of your repository, you'll have to either get a dumpfile from the provider or get it yourself - some of the responses so far have addressed this already.
Another option: if you are not concerned with past revisions but want your repo in its latest state, just check out the head revision and export it to a separate location on your computer. That way, you have all your work up to this point. You could then keep that as a backup, or possibly create an SVN account elsewhere and import the exported copy into a fresh repo; then you would be back in business.
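That latest-state-only approach is just two commands; the URL and paths are placeholders:

# Check out the head revision, then export a clean copy without .svn metadata:
svn checkout http://ezsvn.example.com/myrepo/trunk C:\work\myrepo-wc
svn export C:\work\myrepo-wc C:\backup\myrepo-latest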