How to back up the package registry in GitLab

I am using self-managed GitLab to manage many Java applications. I also use the GitLab package registry to store the artifacts (JAR files), with AWS S3 as the storage path. My company wants to set up a plan for GitLab backups. I reviewed the GitLab documentation at https://docs.gitlab.com/ee/raketasks/backup_restore.html, but I don't see any mention of how to back up the packages in the package registry.
When I restore GitLab onto a new instance, will the new package registry recognize my packages in S3?
Has anyone had experience with this? Please advise. Thanks a lot!

Since you are storing your artifacts on S3, I believe they should just be available when you restore from backup. The new instance would still be pointing at the same S3 bucket. You should make sure the S3 retention policies are appropriate for your backup needs.
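For illustration, a minimal sketch of the relevant gitlab.rb settings on the restored instance, assuming the storage-specific configuration style; the bucket name and region are placeholders, and the point is simply that the new install must reference the same bucket the restored metadata expects:

    gitlab_rails['packages_enabled'] = true
    gitlab_rails['packages_object_store_enabled'] = true
    gitlab_rails['packages_object_store_remote_directory'] = "my-gitlab-packages"  # placeholder bucket
    gitlab_rails['packages_object_store_connection'] = {
      'provider' => 'AWS',
      'region' => 'us-east-1',       # placeholder region
      'use_iam_profile' => true      # or supply access keys instead
    }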
If you are storing your packages on the local filesystem, the GitLab backup process doesn't currently include those files, though it does include the package metadata. In that case, you'll need to manually copy the packages directory at /var/opt/gitlab/gitlab-rails/shared/packages/ to the new server after restoring the metadata using the normal backup/restore process.
There is an open ticket for this in the GitLab issue tracker, which is where I found the above workaround.
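A minimal sketch of that workaround, assuming an Omnibus install with default paths and GitLab 12.1 or later (older versions use gitlab-rake gitlab:backup:create); new-server is a placeholder hostname:

    # Standard backup: database, repositories, attachments -- but not local packages
    sudo gitlab-backup create

    # Copy the locally stored packages separately; restoring the archive on the
    # new server brings back the metadata, and this brings back the files
    sudo rsync -a /var/opt/gitlab/gitlab-rails/shared/packages/ \
        new-server:/var/opt/gitlab/gitlab-rails/shared/packages/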

Related

How to delete old artifacts in self-hosted GitLab?

We have a self-hosted GitLab running on one instance, but every now and then we face space issues because large artifacts fill up the disk.
We have to go and delete the older artifact folders manually.
Is there a way to automate this? Maybe a script which runs overnight and deletes artifact folders older than, say, 7 days?
The default expiration is set to 5 days in the GitLab Admin Area, but that does not mean they are deleted from the box.
When artifacts expire, they should be deleted from disk. If your artifacts are not deleted from your physical storage, there is a configuration issue with your storage; ensure GitLab has write and delete permissions on the configured storage.
Artifacts that were created before the default expiration was set will still need to be deleted manually, but only once. All new artifacts will respect the artifact expiration.
However, you should do this through the API, not directly on the filesystem. Otherwise there will be a mismatch between what GitLab's database thinks exists and what actually exists on disk.
For an example script, see this answer; a rough sketch is also included below.
Also note there are several circumstances under which artifacts are kept, such as the latest artifacts. New pipelines must run for old artifacts to expire. See documentation for more information.
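As an illustration of the API-based approach, here is a hedged sketch that deletes job artifacts older than 7 days. The URL and token are placeholders, it assumes jq and GNU date are available, and it only walks the first page of results, so real use needs pagination:

    #!/bin/bash
    GITLAB_URL="https://gitlab.example.com"   # placeholder instance URL
    TOKEN="<api-token>"                       # placeholder token with API scope
    CUTOFF=$(date -d '7 days ago' +%s)        # GNU date

    for project in $(curl -s --header "PRIVATE-TOKEN: $TOKEN" \
        "$GITLAB_URL/api/v4/projects?per_page=100" | jq -r '.[].id'); do
      # Find jobs whose artifacts were created before the cutoff...
      curl -s --header "PRIVATE-TOKEN: $TOKEN" \
          "$GITLAB_URL/api/v4/projects/$project/jobs?per_page=100" |
      jq -r --argjson cutoff "$CUTOFF" '.[]
          | select(.artifacts_file != null)
          | select((.created_at | sub("\\..*"; "Z") | fromdateiso8601) < $cutoff)
          | .id' |
      # ...and delete those artifacts through the API
      while read -r job_id; do
        curl -s -X DELETE --header "PRIVATE-TOKEN: $TOKEN" \
            "$GITLAB_URL/api/v4/projects/$project/jobs/$job_id/artifacts"
      done
    done

Run it overnight from cron; because deletions go through the API, GitLab's database stays consistent with what is on disk.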

Terraform state migration

I started working with Terraform and realized that the state files were created and saved locally. After some searching, I found that it is not recommended to commit Terraform state files to Git.
So I added a backend configuration using S3 as the backend. Then I ran the following command:
terraform init -reconfigure
I realize now that this set the backend to S3 but didn't copy any files.
Now when I run terraform plan, it plans to recreate the entire infrastructure that already exists.
I don't want to destroy and recreate the existing infrastructure. I just want terraform to recognize the local state files and copy them to S3.
Any suggestions on what I might do now?
State files are basically JSON files containing information about the current setup. You can manually copy files from the local to the remote (S3) backend and use them without issues. You can read more about state files here: https://learn.hashicorp.com/tutorials/terraform/state-cli
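A minimal sketch of both routes; the bucket name and key are placeholders and must match the backend block:

    # Option 1: let Terraform migrate the existing local state into the
    # newly configured S3 backend (it prompts before copying)
    terraform init -migrate-state

    # Option 2: copy the state file into the bucket by hand, at the exact
    # "key" configured in the backend block, then re-initialize
    aws s3 cp terraform.tfstate s3://my-tf-state-bucket/prod/terraform.tfstate
    terraform init -reconfigure

After either route, terraform plan should again report no changes for the existing infrastructure.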
I also maintain a package for handling remote state in S3/Blob/GCS, if you want to try it: https://github.com/tomarv2/tfremote

Azure DevOps extension caches wrong node_modules

General: I develop an Azure DevOps extension with tasks and pipeline decorators, testing on a local Azure DevOps Server instance. The extension is loaded through Manage Extensions from the local hard drive. Let's say that I installed the extension the first time with version 1.0.0 and a node_modules dependency "3rdPartyDep" at version 2.0.0, which has transitive dependencies with vulnerabilities.
Scenario:
Upgrade "3rdPartyDep" to version 3.0.0 with fixed vulnerabilities. Build new version of my extension, say 1.0.1. Create the .vsix, update the extension in the Azure DevOps Server.
Run a pipeline, which fails because I did not check the "3rdPartyDep" changes and there are breaking changes and the extension fails to run.
Rollback the "3rdPartyDep" library to 2.0.0 because I have no time now to check what is broken in there right now as I have other things to debug and implement, repackage the extension, increase version to 1.0.2, update extension in Azure DevOps Server.
Run the pipeline. It fails with the same exception, as if I didn't rollback. I look into the agent taks folder and I see that the node_modules with the "3rdPartyDep" library is pointing to 3.0.0, which is wrong because I rolled back the version.
I open the generated .vsix archive and check that the node_modules inside contains the correct 2.0.0 version, so no problems of packaging or building from my side.
I make a conclusion that Azure DevOps stores somewhere a cached version of the extension with the node_modules including the wrong version of the "3rdPartyDep". I search that cache folder over internet to find out where it is, and I also search with a search tool all my machine, including words in file. Nowhere to be found. There is no location on my machine with such node_modules containing the 3.0.0 version. It might be stored in some encrypted DB?
I uninstall completely the extension, and install it back. I see that Azure DevOps has a history for the extension, and the cache is not cleared. Any pipeline fails, even if my .vsix does not contain this dependency.
I'm stuck.
Questions:
Where extensions are actually cached inside Azure DevOps Server?
Why updating, uninstalling and installing does not fix the problem?
Is there any way to fix this? What can I do? I do not want to reinstall the server completely. Moreover, this raises concerns about how node_modules are managed and cached, and what happens at the clients and in the cloud.
You could try the following items:
Try to clean the browser cache, and check whether you have increased the version number in the task.json.
Try the Delete task -- Save definition -- Add task again process.
Delete the Azure DevOps Server cache, which can be followed in this link.
Uninstall the extension from Collection Settings and remove it from the local Manage Extensions, then upload the extension again and install it in the collection. (See also the agent-side cache note below.)
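One more place worth checking, as a hedged suggestion: build agents keep their own copy of each task under the agent's _work/_tasks directory, keyed by task name and version, so a task whose version in task.json was never bumped can keep being served from that cache. The agent path and task folder name below are placeholders for a self-hosted Linux agent:

    # On each build agent: remove the cached copies of the task so the
    # agent re-downloads the current version on the next run
    rm -rf /opt/azagent/_work/_tasks/My3rdPartyTask_*

On Windows agents the same folder lives under the agent install directory, for example C:\agent\_work\_tasks.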

Omnibus or source - can't decide which one to use for GitLab backup/restore

I am using an Ubuntu server running a GitLab server.
I need to perform daily backup/restore of my GitLab.
Which method should I prefer: Omnibus or from source?
How can I check whether GitLab is installed via Omnibus or from source?
Source or Omnibus, you will have access to the same backup procedure, which will create an archive file that contains the database, all repositories, and all attachments.
That means you are saving the data itself, not the whole system.
For the system, note the version of the Omnibus package you are installing, and you will be able to re-install it in minutes.
How can I check whether GitLab is installed via Omnibus or from source?
See if your GitLab root folder has a .git directory in it: that would mean it is a clone from the sources.
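A small sketch of both checks plus a daily backup, assuming default install paths (/home/git/gitlab is the conventional source location) and noting that gitlab-backup exists on GitLab 12.1+, while older Omnibus versions use gitlab-rake gitlab:backup:create:

    # Source install: the GitLab root is a Git clone
    ls -d /home/git/gitlab/.git 2>/dev/null && echo "installed from source"

    # Omnibus install ships the gitlab-ctl management command
    command -v gitlab-ctl >/dev/null && echo "installed via Omnibus"

    # Daily backup at 02:00 via root's crontab (Omnibus); CRON=1 silences progress output
    0 2 * * * /opt/gitlab/bin/gitlab-backup create CRON=1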

How to migrate GitLab to a new server?

I am trying to migrate a GitLab setup from 7.8.2 to 7.12.2. I am not really sure how to go about this. I have installed a new box on Ubuntu 14.04.2.
Now I would really like to just export the old user/group database and import it on the new server, then copy all the repositories from the old server to the new one, and tell the users to start using the new one.
I do not know which database my new GitLab installation uses, nor which one the old one uses.
I have been up and down the GitLab documentation, but cannot find sufficient information on how to migrate from one server to another.
I followed the instructions at https://about.gitlab.com/downloads/ for Ubuntu, and all seems to work fine. I am looking for a way to export the users/groups from the old GitLab box and import them on the new GitLab box, and then just copy all the repositories from the old box to the new one.
Any assistance? I know next to nothing about GitLab :(
I would take the following steps:
Find out if GitLab is installed by hand or with GitLab Omnibus. You need to know this for the exact backup and update steps.
Do a backup of the old version, just to be safe.
Update the current 7.8.2 instance to 7.12.2 by following the update guideline.
Back up the newly updated GitLab system.
Restore that backup on the new system. A backup can only be restored into the same GitLab version, which is why the old instance is updated first; the commands are sketched below.
Backup & restore documentation can be found here
