How to make a backup of GitLab without it running? - gitlab

I've been using the GitLab Omnibus installation, but my PC is broken and won't boot anymore.
So I can't run GitLab and have to make the backup from this state.
The GitLab documentation describes how to make a backup while GitLab is running, but there is no description of how to make one while it is not running.
(https://docs.gitlab.com/ee/raketasks/backup_restore.html)
The repositories are already backed up; what I really want to back up is the GitLab support data (e.g. issues, merge requests, etc.).
How can I do this?

If possible, you would need to back up the data mounted by your GitLab Omnibus installation and copy that data to a working PC, in order to run GitLab there.
Once you have GitLab running on the new workstation, you can make a regular backup there.
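As a rough sketch (assuming you can attach the broken PC's disk to another machine and mount it at /mnt/old-disk, which is an assumption on my part), copying the two Omnibus directories would look something like:
# /etc/gitlab holds the Omnibus config, /var/opt/gitlab holds repositories and data.
# /mnt/old-disk and "newpc" are placeholders for your own mount point and target host.
rsync -aAX /mnt/old-disk/etc/gitlab/     newpc:/etc/gitlab/
rsync -aAX /mnt/old-disk/var/opt/gitlab/ newpc:/var/opt/gitlab/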

This is my self-answer.
There was no way to make the backup without running GitLab, because all of the database data lives in PostgreSQL.
So I installed another GitLab in Docker on my PC and attached everything from the old installation to it (config, repositories, database data).
Below is what I did:
1. Install GitLab in Docker (you MUST install the specific version matching your original installation):
https://docs.gitlab.com/omnibus/docker/
2. Modify the docker run command so that it mounts your original data into the GitLab container, e.g.:
sudo docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--restart always \
--volume [USER_DIR]/gitlab/config:/etc/gitlab \
--volume [USER_DIR]/gitlab/logs:/var/log/gitlab \
--volume [USER_DIR]/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
3. Run GitLab in Docker.
4. Run the backup inside the container through the Omnibus backup Rake task (https://docs.gitlab.com/ee/raketasks/backup_restore.html#restore-for-omnibus-installations), e.g.:
docker exec -t gitlab gitlab-rake gitlab:backup:create
5. After the backup is done, find the backup file at the location configured in your gitlab.rb; with the volumes above, gitlab.rb lives at [USER_DIR]/gitlab/config/gitlab.rb on the host (/etc/gitlab/gitlab.rb inside the container).
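For reference, unless gitlab_rails['backup_path'] has been changed, Omnibus writes backups to /var/opt/gitlab/backups inside the container, which with the volume mapping above should appear on the host roughly as follows (a sketch, assuming the defaults):
# Assumes the default backup path; adjust if gitlab_rails['backup_path'] is customized.
ls [USER_DIR]/gitlab/data/backups/
# archives are named <timestamp>_<date>_<gitlab version>_gitlab_backup.tar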

I don't agree with all of your conclusions, even though they hold as a solution. It all depends on your setup, and if you have all the data on the same machine there is room for improvement.
My own setup uses external PostgreSQL 9.x and Redis 5.x servers. The benefit of external servers combined with Docker is that backup / restore can be done using only the external servers and root access to a Docker volume on the Docker host; that approach involves fewer steps, since the DBs are external.
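A minimal sketch of that variant (hostnames, DB user and database name are assumptions, not taken from the question):
# With external databases, the DB side can be dumped without GitLab running at all.
pg_dump -h postgres.example.com -U gitlab -Fc gitlabhq_production > gitlab-db.dump
# redis-cli --rdb pulls an RDB snapshot from the external Redis server.
redis-cli -h redis.example.com --rdb gitlab-redis.rdb
# The repositories and uploads still have to be copied from the Docker volume on the host.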
I have done this a number of times and it works, but it should only be used if you know what you're doing. Some parts are the same as what you discovered, like reinstalling the same version, etc.
I just wanted to point out that more than one solution exists for this problem. However, one thing that would be even more beneficial is if the GitLab team focused on PostgreSQL 11.x compatibility instead of only 10.x. I have already tested 11.x successfully in a build from source, but I'm waiting for a release by the GitLab team.
I am happy you made it work!

Related

create an app in a docker container (confused about task order)

I have to build a simple app which reads a text file and processes its content (e.g. removing multiple spaces, processing words, etc.), but I am confused about the first part of my homework:
"Initialize a git repository in a docker container then implement an app...."
I use Debian; I installed Docker and Git and read up on them. From what I read, I have to create a Dockerfile which will contain some instructions, then I build the image and then run the container, right?
But I am still confused about the order of these things. Can I first write the app in IntelliJ and then create that Dockerfile? Or do I have to create the container first and then code the app? And how do I build the container? I have read a lot about this; can you give me some advice? I should mention that after every app "task" (read text file, process text, etc.) I have to execute git add, git commit and git push (in case that matters for the answer).
If the instruction says to "Initialize a Git repository in a docker container", then you are expected to:
run e.g. a Debian container
install Git if it is not present
initialize the repo
write your app
submit the homework
You could:
docker run \
--interactive --tty --rm \
--name=homework \
--volume=${PWD}/homework:/homework \
--workdir=/homework \
debian:buster-slim
This will run a Debian "buster" image as a container and should (!) give you a shell prompt in the container.
A directory /homework in the container will be mapped to your host machine's ${PWD}/homework and you will be in the /homework directory when the container starts. This means that you won't lose your work if you exit the container.
From within the container's prompt:
# pwd
/homework
# git
bash: git: command not found
# apt update && apt install -y git
...
done.
# git
usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
<command> [<args>]
# git init
Initialized empty Git repository in /homework/.git/
Notes
If you exit the container, you can rerun the docker run ... command to return to it.
When you exit the container, you can ls -la ${PWD}/homework to see the .git directory (at least) persisted on your host.
Ensure you run it from the same directory in which ${PWD}/homework was created, or revise the --volume=... accordingly.
I'd recommend an overall workflow of
Build the application, without Docker; then
Package it in a Docker image (if it makes sense to).
You should be able to build the application totally normally. Whatever language you're using to build the application, make sure to use its normal packaging tools. For example, your package.json/Gemfile/requirements.txt/go.mod should list out all of the library dependencies your application needs to run. Run it locally, write appropriate unit tests for it, and generally build something that works.
Once it works, then push it into Docker. You'll need to write a Dockerfile that builds the image. A generic recipe for this is
FROM language-base-image # python:3.9, node:14, ...
WORKDIR /app
COPY dependencies-file . # requirements.txt, package.json, ...
RUN install the dependencies # pip install, npm install, ...
COPY . .
RUN build the application # npm run build, ...
CMD ./the_application # npm run start, ...
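For instance, a hypothetical Node.js instance of that recipe (the project layout and the "build"/"start" scripts are assumptions, not taken from your assignment) could look like:
# Assumes a standard Node.js project with package.json/package-lock.json
# and "build" and "start" scripts defined.
FROM node:14
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
CMD ["npm", "run", "start"]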
You should then be able to docker build an image, and docker run a container from the resulting image. The Docker documentation includes a sample application that runs through this sequence.
Note in particular that the problem task of "read a text file" is substantially harder in Docker than without. You need to use a bind mount to give access to the host filesystem to the container, and then refer to the container-side path. For example,
docker run --rm -v $PWD/data:/data my-image \
./the_application --input /data/file.txt
I would not bother trying to use Docker as my primary development environment, especially for an introductory project. Docker is designed as an isolation system, and it's intentionally tricky to work with host files from a container and vice versa. Especially if you're using a fairly routine programming language that you can easily install with apt-get or brew, and you don't have tricky host-library dependencies, it's substantially easier to do most of your development in an ordinary host build environment and use Docker only at a late stage.

How to list all files accessed after running docker?

I have to deal with some very large vendor support packages for embedded development. I've used Docker successfully just as a means of keeping their installs segmented away from the rest of my system and for the sake of environment reproducibility. That works great, but often these installs are monoliths, including a ton of files and functionality I don't need, especially in a CI environment. And moving giant, slow-to-recreate Docker images around is a pain.
So, in the interest of teasing out just the features I need and porting them to a much smaller image, I'm wondering:
Can I run a Docker image, perform some CI-relevant task, and then find all the files that were accessed while the image was running?
The plan after that would be to copy all those files into a tarfile or similar, then use that for specialized images in the future. So, as an alternative question... is that plan worth pursuing?
Thanks :) -Chloë
This may not answer your question exactly, but it may help.
You can check what is happening in the container by:
checking its logs through the docker container logs command;
checking the modifications performed on its filesystem through the docker diff command.
Here is an example:
# run an ubuntu container
$ docker run -it --rm --name focal ubuntu:focal
# run a command at the container's prompt
root@aa86b4988bfe:/# echo "test" > test.txt
# watch the session in the logs (from another terminal on the host)
$ docker container logs --follow --details focal
root@aa86b4988bfe:/# echo "test" > test.txt
# check the filesystem differences
$ docker diff focal
A /test.txt
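If the goal is to carry those changed files over into a smaller image, a rough follow-up sketch could look like this (the container name "focal" is assumed, and note that docker diff only reports writes, not reads):
# Turn the docker diff output into a tarball of the added/changed paths.
# Caveat: changed directories ("C") are archived with their whole contents.
docker diff focal |               # list A(dded)/C(hanged)/D(eleted) paths
  awk '$1 != "D" {print $2}' |    # drop deletions, keep the paths
  docker exec -i focal tar -czf - -T - > focal-changed-files.tar.gz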

Docker compose file for linux Debian and Arch

Hope I'm doing this correctly...
First of all, we are using docker-compose with a yml file, invoked like this:
sudo docker-compose -f docker-compose.yml up -d
In the yml file we have something similar to:
version: '3.4'
services:
  MyContainer:
    image: "MyContainer:latest"
    container_name: MyContainer
    restart: always
    environment:
      - DISPLAY=unix$DISPLAY
      - QT_X11_NO_MITSHM=1
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:rw
      - /dev/dri:/dev/dri
      - /usr/lib/x86_64-linux-gnu/libXv.so.1:/usr/lib/x86_64-linux-gnu/libXv.so.1:rw
      - ~/MyFiles/:/root/Myfiles
      - ~:/root/home
Now the problem starts. The team uses two families of operating systems: Ubuntu on the one hand, Arch and Manjaro on the other. As an experienced Linux user might know, this will not work on Arch, because /usr/lib/x86_64-linux-gnu is a very Debian/Ubuntu-specific directory. The equivalent on Arch/Manjaro, and on nearly every other Linux distro, is /usr/lib or /usr/lib64.
Of course a hack would be to symlink that folder to lib, but I don't want to do that on every new team member's machine that doesn't run Ubuntu.
So that is all the upfront information.
My question is:
What is the best approach, in your opinion, to solve this problem?
I did a Google search, but either I used the wrong keywords or people don't have this problem because they design their containers smarter.
I know that Docker volumes can be created and then used in the docker-compose file, but for that we would need to rerun the setup on all the PCs, laptops and servers we have, which I would like to avoid if possible...
I have a lot to learn, so if you have more experience and knowledge, please be so kind and point out my mistakes.
Regards,
Stefan
If you're trying to use the host display, host libraries, host filesystem, and host hardware devices, then the only thing you're getting out of Docker is an inconvenient packaging mechanism that requires root privileges to run. It'd be significantly easier to build a binary and run the application directly on the host.
If you must run this in Docker, the image should be self-contained: all of the code and libraries necessary to run the application needs to be in the image and copied in the Dockerfile. Most images start FROM some Linux distribution (maybe indirectly through a language runtime) and so you need to install the required libraries using its package manager.
FROM ubuntu:18.04
RUN apt-get update \
&& apt-get install --no-install-recommends --assume-yes \
libxv1
...
Bind-mounting binaries or libraries into containers leads not only to filesystem inconsistencies like the one you describe, but in some cases also to binary-compatibility issues. The bind mount won't work properly on a MacOS host, for instance. (Earlier recipes for using the Docker socket inside a Docker container recommended bind-mounting /usr/bin/docker into the container, but this could hit problems if a CentOS host Docker was built against different shared libraries than an Ubuntu container Docker.)
The volumes section in docker-compose supports environment variable substitution. You can make use of that to keep the machine-specific path out of the file.
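A sketch of what that could look like (LIBXV_DIR is a made-up variable name; the ${VAR:-default} form is supported by Compose file format 3.x):
# docker-compose.yml excerpt: default to the Ubuntu path, override it per machine
    volumes:
      - ${LIBXV_DIR:-/usr/lib/x86_64-linux-gnu}/libXv.so.1:/usr/lib/x86_64-linux-gnu/libXv.so.1:rw
# on an Arch/Manjaro machine, put the override in a .env file next to docker-compose.yml
echo "LIBXV_DIR=/usr/lib" > .env
sudo docker-compose -f docker-compose.yml up -d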

Cannot install inside docker container

I'm quite new to Docker, and I'm facing a problem that I have no idea how to solve.
I have a Jenkins (Docker) image running and everything was fine. A few days ago I created a job so I can run my Node.js tests every time a pull request is made. One of the job's build steps is to run npm install, and the job is constantly failing with this error:
tar (child): bzip2: Cannot exec: No such file or directory
So I know that I have to install bzip2 inside the Jenkins container, but how do I do that? I've already tried to run docker run jenkins bash -c "sudo apt-get bzip2", but I got: bash: sudo: command not found.
With that said, how can I do this?
Thanks in advance.
The answer to this lies in the philosophy of Docker containers. Docker containers are/should be immutable. So, this is what you can try to fix the issue:
1. Treat your base image, i.e. jenkins, as the starting point.
2. Log in to this base image and install bzip2.
3. Commit the changes; this should result in a new image.
4. Now use the image from step 3 to install any other packages, such as npm.
5. Commit that image as well.
Note: to execute commands in a more controlled way, I always prefer to use something like this:
docker exec -it jenkins bash
In a nutshell, the answer to both of your current issues lies in the fact that images are immutable, so the way to make a change that propagates is to commit it and use the newly created image for further changes. I hope this helps.
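A rough sketch of that commit-based flow (the container name jenkins and the resulting tag are assumptions; the Dockerfile approach in the next answer is generally preferable):
# Install bzip2 inside the running container as root...
docker exec -u root -it jenkins bash -c "apt-get update && apt-get install -y bzip2"
# ...then freeze the modified container into a new image.
docker commit jenkins my-jenkins:with-bzip2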
Lots of issues here, but the biggest one is that you need to build your images with the tools you need rather than installing them inside a running container. As techtrainer mentions, images are immutable and don't change (at least not from your running container), and containers are disposable (so any changes you make inside them are lost when you restart them, unless your data is stored outside the container in a volume).
I do disagree with techtrainer on making your changes in a container and committing them to an image with docker commit. That will work, but it's a hand-built method that is error-prone and not easily reproduced. Instead, you should leverage a Dockerfile and use docker build. You can either modify the jenkins image you're using by directly modifying its Dockerfile, or you can create a child image that is FROM jenkins:latest.
When modifying this image, keep in mind that the Jenkins image is configured to run as the user "jenkins", so you'll need to switch to root to perform your application installs. The "sudo" app is not included in most images, but from outside the container you can run docker commands as any user; from the CLI, that's as easy as docker run -u root .... And inside your Dockerfile, you just need a USER root at the top and then a USER jenkins at the end.
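For example, a minimal child-image Dockerfile along those lines might be (a sketch; bzip2 chosen to match the question):
# Child image of the Jenkins image that adds bzip2, then drops back to the jenkins user.
FROM jenkins:latest
USER root
RUN apt-get update \
 && apt-get install -y --no-install-recommends bzip2 \
 && rm -rf /var/lib/apt/lists/*
USER jenkins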
One last piece of advice: don't run your builds directly on the Jenkins container, but rather run agents with the build tools you need, which you can upgrade independently of the Jenkins container. It's much more flexible, it allows you to have multiple environments with only the tools needed for each, and if you scale this up, you can use a plugin to spin up agents on demand, so you could have hundreds of possible agents while only running a handful of them concurrently.

Uploading AOSP Source to private Gitlab Server

I have a GitLab server set up on my system. It is working fine with single Git repositories. Now I want to push the complete Android source, with all its .git projects, to this server. How do I do that? Do I have to push every project individually?
I have the same problem, managing different AOSP releases for our hardware.
Please note that I chose not to have ALL AOSP repositories in our GitLab instance, but only the ones that need customization. The others are cloned directly from Google's git servers (or from a local repo mirror, to speed up cloning).
What I did is have a group (aosp) for general-purpose repositories that might apply to different projects, and a custom group for a given AOSP customization, where I usually place only the device/xxx sources and the repo manifest.
The most annoying task here is setting up the aosp group with its usually ~50 repositories. Here is what I did (a rough sketch follows this list):
1. start from the standard AOSP source (repo init / repo sync)
2. apply patches from the silicon vendor and add any new repos (usually you have at least some device/yourbranch/yourdevice); add these patches as new branches (so repo list works with my scripts)
3. with a couple of grep/awk invocations, parse the repo list output to get the changed repos
4. for those repos, with a couple of other scripts and a bit of python-gitlab, create the projects on your server
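A rough sketch of steps 3-4 using the plain REST API instead of python-gitlab (the token, URL, group ID and the repo list flag are my assumptions, not Andrea's actual scripts):
# Create one GitLab project per repo path known to repo.
# Filtering down to only the changed repos is omitted here.
repo list --path-only | while read -r path; do
  name=$(echo "$path" | tr '/' '_')   # GitLab project names cannot contain slashes
  curl --request POST \
       --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
       --data "name=${name}&namespace_id=${AOSP_GROUP_ID}" \
       "${GITLAB_URL}/api/v4/projects"
done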
My scripts can be found in my GitLab project. You might need to adapt them to your own AOSP version.
HTH,
Andrea
You can try (3 years later) the latest GitLab 11.2 (August 22nd, 2018).
See "Support for Android project import":
Until now, importing complex project structures with multiple sub-structures was a tedious, time-consuming task.
With this release, we introduce support for manifest files for project imports.
A manifest XML file contains metadata for groups of repositories, allowing you to import larger project structures with multiple repositories in one go.
When creating a new project, there is a new option to choose a “Manifest file” as source of your project import on the “Import project” tab.
In addition, you can select from the list of individual projects in a subsequent step if you don’t want to import the complete project structure.
This improvement allows you to import the Android OS code from the Android Open Source Project (AOSP), as one exciting use case. You can also import other projects that use manifest files which meet our format requirements.
See issue.
See documentation.
Here's what I've found. In short, I don't think it's viable to use GitLab to help host an AOSP mirror.
My test was to use pre-made Docker containers and try the website out
(from: https://github.com/sameersbn/docker-gitlab ).
What I found was that, just like Bitbucket or GitHub, you create a project that is tied to a single Git repository, so you would have to create a project for every repository in the source tree.
Step 1. Launch a postgresql container
docker run --name gitlab-postgresql -d \
--env 'DB_NAME=gitlabhq_production' \
--env 'DB_USER=gitlab' --env 'DB_PASS=password' \
--volume /srv/docker/gitlab/postgresql:/var/lib/postgresql \
quay.io/sameersbn/postgresql:9.4-5
Step 2. Launch a redis container
docker run --name gitlab-redis -d \
--volume /srv/docker/gitlab/redis:/var/lib/redis \
quay.io/sameersbn/redis:latest
Step 3. Launch the gitlab container
docker run --name gitlab -d \
--link gitlab-postgresql:postgresql --link gitlab-redis:redisio \
--publish 10022:22 --publish 10080:80 \
--env 'GITLAB_PORT=10080' --env 'GITLAB_SSH_PORT=10022' \
--env 'GITLAB_SECRETS_DB_KEY_BASE=long-and-random-alpha-numeric-string' \
--volume /srv/docker/gitlab/gitlab:/home/git/data \
quay.io/sameersbn/gitlab:8.0.5
