GitLab backup: no space left on device

I have to migrate our old GitLab server to another one. Therefore I tried to create a backup using the following command:
sudo gitlab-rake gitlab:backup:create SKIP=uploads,artifacts,builds
While backing up the repositories I get the following error:
Error No space left on device
rake aborted!
Backup::Error: Backup operation failed:
gzip: stdout: No space left on device
Does someone know a way to create a backup if the server doesn't have enough space left? Is there a way to exclude specific repositories from a backup so I could split it into several backups?
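One possible approach, sketched here under the assumption that a larger disk can be mounted somewhere (the /mnt/backups path below is only an example), is to point the backup location at that disk in /etc/gitlab/gitlab.rb and split the work by skipping the repositories in one run, then copying the repository storage separately with rsync:
gitlab_rails['backup_path'] = "/mnt/backups"
sudo gitlab-ctl reconfigure
sudo gitlab-rake gitlab:backup:create SKIP=repositories,uploads,artifacts,builds
This is only a sketch of one way to work around the space limit, not an official migration procedure.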

Related

How do I make this Perforce Swarm error go away in P4V?

Could not connect to Swarm. 'Host swarm.mygame.com not found' - See P4V's log file for more information.
The above error prints every time I load P4V. It is an artifact from a previous project I was working on. How can I turn it off?
Run "p4 property -l P4.Swarm.URL".
Do you have Swarm? If not, delete the property P4.Swarm.URL.
If yes, what is your Swarm URL, and does it match the output from "p4 property -l P4.Swarm.URL"?
If the property is correct (swarm.mygame.com) then your machine can't access that hostname. Run "nslookup swarm.mygame.com".
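If Swarm really is gone, a minimal sketch of removing the property (assuming you have permission to modify server properties) would be:
p4 property -d -n P4.Swarm.URL
After that, restarting P4V should stop the connection attempts.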

Delete old backups from GitLab

I have configured the gitlab.rb file and reconfigured the GitLab server with gitlab-ctl reconfigure to apply the configuration changes.
I generated a gitlab backup with the following command:
gitlab-backup create
On the first try, 6 old backups were deleted. However, I have more backups in the /etc/gitlab/config_backup folder. I made a second attempt with the backup creation command and it did not delete any old backups.
In the /etc/gitlab/config_backup folder, a lot of old backups still remain.
By the way, the date configuration of the server is correct.
What can I do in order to delete all the old backups? Do I need to remove them manually?
It appears your backup name is different: note how your "Creating backup archive: XXXXX" line does not match any of your gitlab_config_XXX.tar backup names.
I would hazard that you have some other backup task that is backing up your /etc/gitlab folder (which is never backed up by gitlab-backup, as you can see in your first screen capture).
It would also help if you checked your gitlab_rails['backup_path'] = "/path/here" setting and verified your backup location, which most likely is not, and should not be, /etc/gitlab.
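For the Rails backup archives stored in that backup_path, the purge window is controlled by backup_keep_time in /etc/gitlab/gitlab.rb; a minimal sketch (the path and retention value below are only examples):
gitlab_rails['backup_path'] = "/var/opt/gitlab/backups"
gitlab_rails['backup_keep_time'] = 604800
sudo gitlab-ctl reconfigure
With that set, gitlab-backup create removes archives in backup_path that are older than the configured number of seconds.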
I have found a similar issue and had to pass the "--delete-old-backups" parameter/argument to get the old backups to purge.
gitlab-ctl backup-etc --delete-old-backups
This wasn't required with the main "gitlab-backup create" call, just with the "gitlab-ctl backup-etc" command in my case.

Unable to provision OpenEBS volume on RancherOS

I am using Rancher v2 as the k8s management platform and running RancherOS nodes on VMware vSphere. I manually installed open-iscsi and mounted a 50GB volume on each worker node for use by OpenEBS (I will have to figure out how to automate that on node creation). I also created a cStor storage class, and that all looks good. However, I have not been able to get a container to provision a PV using a PVC.
Warning FailedMount Unable to mount volumes for pod "web-test-54d9845456-bc8fc_infra-test(10f856c1-6882-11e9-87a2-0050568eb63d)": timeout expired waiting for volumes to attach or mount for pod "infra-test"/"web-test-54d9845456-bc8fc". list of unmounted volumes=[cstor-vol-01]. list of unattached volumes=[web-test-kube-pvc vol1 man-volmnt-01 cstor-vol-01 default-token-lxffz]
Warning FailedMount MountVolume.WaitForAttach failed for volume "pvc-b59c9b5d-6857-11e9-87a2-0050568eb63d" : failed to get any path for iscsi disk, last err seen: iscsi: failed to sendtargets to portal 10.43.48.95:3260 output: iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not add new discovery record. , err exit status
I followed the steps below to enable iSCSI on RancherOS, from the Prerequisites section for RancherOS in the OpenEBS documentation.
sudo ros s up open-iscsi
sudo ros config set rancher.services.user-volumes.volumes [/home:/home,/opt:/opt,/var/lib/kubelet:/var/lib/kubelet,/etc/kubernetes:/etc/kubernetes,/var/openebs]
sudo system-docker rm all-volumes
sudo reboot
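To confirm that the iSCSI service actually came back up after the reboot, a quick check along these lines may help (a sketch only; the system container name can differ between RancherOS versions):
sudo system-docker ps | grep -i iscsi
sudo iscsiadm --version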
From the RancherOS GitHub repository, I found that we need to create a lock directory, and make sure this directory is created on every boot, in the following way:
$ mkdir /run/lock
# update cloud-config
#cloud-config
runcmd:
- [mkdir, /run/lock]
Reference: the rancher/os GitHub repository, issue #2435.
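As an immediate check that this clears the error, one could create the directory by hand and retry the discovery against the portal shown in the event (a sketch; the portal address 10.43.48.95:3260 is taken from the error message above):
sudo mkdir -p /run/lock
sudo iscsiadm -m discovery -t sendtargets -p 10.43.48.95:3260
The cloud-config runcmd entry then just makes the directory survive reboots.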

Kubernetes unable to mount NFS filesystem on Google Container Engine

I am following the basic NFS server tutorial here; however, when I try to create the test busybox replication controller, I get an error indicating that the mount has failed.
Can someone point out what I am doing wrong?
MountVolume.SetUp failed for volume "kubernetes.io/nfs/4e247b33-a82d-11e6-bd41-42010a840113-nfs" (spec.Name: "nfs") pod "4e247b33-a82d-11e6-bd41-42010a840113" (UID: "4e247b33-a82d-11e6-bd41-42010a840113") with: mount failed: exit status 32
Mounting arguments: 10.63.243.192:/exports /var/lib/kubelet/pods/4e247b33-a82d-11e6-bd41-42010a840113/volumes/kubernetes.io~nfs/nfs nfs []
Output: mount: wrong fs type, bad option, bad superblock on 10.63.243.192:/exports, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount. helper program) In some cases useful info is found in syslog - try dmesg | tail or so
I have tried using an Ubuntu VM as well, just to see if I could mitigate a possibly missing /sbin/mount.nfs dependency by running apt-get install nfs-common, but that too fails with the same error.
Which container image are you using? On the 18th of October, Google announced a new container image which doesn't support NFS yet. Since Kubernetes 1.4, this image (called GCI) is the default. See also https://cloud.google.com/container-engine/docs/node-image-migration#known_limitations
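To check which image type the cluster's nodes are using, something like the following gcloud call should work (CLUSTER-NAME and ZONE are placeholders):
gcloud container clusters describe CLUSTER-NAME --zone ZONE --format='value(nodeConfig.imageType)'
If it reports GCI, switching the nodes back to the older container_vm image type (as described in the linked migration page) was the usual workaround at the time.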

Cassandra: moving data_file_directories

Regarding the location of Cassandra-created data files and system files: I need to move the "commitlog_directory", "data_file_directories" and "saved_caches_directory", which are set in the "cassandra.yaml" config file. Everything is currently at the default location "/var/lib/cassandra". The data is only some test data, plus the system-generated keyspaces, which are:
dse_perf
dse_system
OpsCenter
system
system_traces
There are also the commitlog and saved_caches.db to move.
I am thinking of moving the keyspace directories with Linux shell commands, but I'm very unsure whether they will become corrupted somehow. There is simply no space on the default drive, and we need to move everything to the secondary and tertiary mounted drives.
Right now I'm in the process of moving all the files and resetting the yaml settings.
I have two questions:
1. Regarding the cassandra.yaml file: are there any other files besides this one that depend on the locations of commitlog_directory, data_file_directories and saved_caches_directory, such that a 'wrong location' would cause a failure once I move all these files? I am also concerned that the files inside the tables themselves (like the db files) have references to their own location and would cause a failure once they are moved.
2. If I just change the three settings commitlog_directory, data_file_directories and saved_caches_directory, will DSE/Cassandra actually create all the system keyspaces (system_traces, dse_perf, system, OpsCenter, dse_system), plus the commitlog and the saved_caches.db, and will any other upstream config files be out of sync with that (same as the first part of question 1)?
It is a very new installation, so reinstalling would not be the end of the world, but I really don't want to, because we have Kerberos and all kinds of other things on top of this cluster now.
The OS is Ubuntu 14.04 and the DSE version is 4.7.
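For reference, the three settings in question look like this in cassandra.yaml (the /new_data paths are only illustrative, matching the layout used in the answer below):
data_file_directories:
    - /new_data/lib/cassandra/data
commitlog_directory: /new_data/lib/cassandra/commitlog
saved_caches_directory: /new_data/lib/cassandra/saved_caches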
I just finished doing this. My instances are in AWS EC2 so your process may vary, but in essence:
create a new volume and attach it to the instance. My new device was /dev/xvdg.
create a new mount point: sudo mkdir /new_data
format the new volume: sudo mkfs -t ext4 /dev/xvdg
edit /etc/fstab so that your mount will survive reboots; add this line: /dev/xvdg /new_data ext4 defaults,nofail,nobootwait 0 2
mount the new volume: sudo mount -a
make the new directories: sudo mkdir -p /new_data/lib/cassandra/commitlog
chown the ownership: sudo chown -R cassandra:cassandra /new_data/lib/cassandra
change cassandra.yaml to point to the new dirs
drain the node. If you're moving the data dir, copy the data over from the old location to the new location. If you're moving the commitlog only, just restart Cassandra.
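Put together, the whole move might look roughly like this on a packaged DSE install (a sketch, assuming the service is named dse and the new mount is /new_data as above):
nodetool drain
sudo service dse stop
sudo rsync -a /var/lib/cassandra/ /new_data/lib/cassandra/
sudo chown -R cassandra:cassandra /new_data/lib/cassandra
(update the three directory settings in cassandra.yaml, e.g. /etc/dse/cassandra/cassandra.yaml on DSE package installs)
sudo service dse start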
I was able to move all the files and the commitlog as well. I changed the yaml and pointed it to where I wanted it to go. Remember to run the following command afterward -
chown -R cassandra:cassandra
And voila! Everything is reading/writing as it should. Cassandra is neato.
