I have a GitLab server set up on my system. It works fine with single Git repositories. Now I want to push the complete Android source, with all of its .git projects, to this server. How can I do that? Do I have to push every project individually?
I have the same problem, managing different AOSP releases for our hardware.
Please note that I chose not to have ALL AOSP repositories in our GitLab instance, but only the ones that need customization. The others are cloned directly from the Google git servers (or from a local repo mirror to speed up cloning).
What I did is have a group (aosp) for general-purpose repositories that might apply to different projects, and a custom group for a given AOSP customization, where I usually place only the device/xxx sources and the repo manifest.
The most annoying task here is setting up the aosp group, with usually around 50 repositories. Here is what I did:
start from the standard AOSP source (repo init / repo sync)
apply the patches from the silicon vendor and add any new repos (you usually have at least some device/yourbranch/yourdevice); add these patches as new branches (so repo list works with my scripts)
with a couple of grep/awk commands, parse the repo list output to get the changed repos
for those repos, with a couple of other scripts and a bit of python-gitlab, create the projects on your server (a rough sketch is shown below)
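For illustration, here is a rough sketch of those last two steps in plain shell against the GitLab REST API. The server URL, token, group id, and the decision to flatten nested project names with basename are all placeholders/assumptions, not my actual scripts:

GITLAB=https://gitlab.example.com
TOKEN=your-private-token
GROUP_ID=42          # numeric id of the "aosp" group on your server

# "repo list" prints "<path> : <project-name>"; take the project-name column
repo list | awk '{print $3}' | while read -r name; do
    # GitLab project names cannot contain slashes, hence basename
    curl --request POST \
         --header "PRIVATE-TOKEN: ${TOKEN}" \
         --data "name=$(basename "$name")&namespace_id=${GROUP_ID}" \
         "${GITLAB}/api/v4/projects"
done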
My scripts can be found in my GitLab project. You might need to adapt them to your own AOSP version.
HTH,
Andrea
You can try (3 years later) the latest GitLab 11.2 (August 22nd, 2018).
See "Support for Android project import":
Until now, importing complex project structures with multiple sub-structures was a tedious, time-consuming task.
With this release, we introduce support for manifest files for project imports.
A manifest XML file contains metadata for groups of repositories, allowing you to import larger project structures with multiple repositories in one go.
When creating a new project, there is a new option to choose a “Manifest file” as source of your project import on the “Import project” tab.
In addition, you can select from the list of individual projects in a subsequent step if you don’t want to import the complete project structure.
This improvement allows you to import the Android OS code from the Android Open Source Project (AOSP), as one exciting use case. You can also import other projects that use manifest files which meet our format requirements.
See issue.
See documentation.
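For reference, the manifest file is the same kind of XML used by AOSP's repo tool; a minimal one looks roughly like this (the remote URL and project list are only illustrative, and the GitLab documentation above lists the exact attributes its importer requires):

<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <remote name="aosp" fetch="https://android.googlesource.com/" />
  <default remote="aosp" revision="master" />
  <project path="build/make" name="platform/build" />
  <project path="frameworks/base" name="platform/frameworks/base" />
</manifest>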
Here's what I've found. In short, I don't think it's viable to use GitLab to help host an AOSP mirror.
My test was to use premade docker containers and try the website out.
(from: https://github.com/sameersbn/docker-gitlab )
What I found was that, just like Bitbucket or GitHub, you create a project that is tied to a single Git repository -- you would have to create a project for every repository.
Step 1. Launch a postgresql container
docker run --name gitlab-postgresql -d \
--env 'DB_NAME=gitlabhq_production' \
--env 'DB_USER=gitlab' --env 'DB_PASS=password' \
--volume /srv/docker/gitlab/postgresql:/var/lib/postgresql \
quay.io/sameersbn/postgresql:9.4-5
Step 2. Launch a redis container
docker run --name gitlab-redis -d \
--volume /srv/docker/gitlab/redis:/var/lib/redis \
quay.io/sameersbn/redis:latest
Step 3. Launch the gitlab container
docker run --name gitlab -d \
--link gitlab-postgresql:postgresql --link gitlab-redis:redisio \
--publish 10022:22 --publish 10080:80 \
--env 'GITLAB_PORT=10080' --env 'GITLAB_SSH_PORT=10022' \
--env 'GITLAB_SECRETS_DB_KEY_BASE=long-and-random-alpha-numeric-string' \
--volume /srv/docker/gitlab/gitlab:/home/git/data \
quay.io/sameersbn/gitlab:8.0.5
Related
I have to build a simple app which reads a text file and processes its content (like removing multiple spaces, processing words, etc.), but I am confused about the first part of my homework.
"Initialize a git repository in a docker container then implement an app...."
I use Debian; I installed Docker and Git and studied them. From what I read, I have to create a Dockerfile which will contain some instructions, then build the image and then run the container, right?
But I am still confused: what is the order of these things? Can I first write the app in IntelliJ and then create that Dockerfile? Or do I have to create the container first and then code the app? And how do I build the container? I have read a lot about this; can you give me some advice? I'll mention that after every app "task" (read text file, process text, etc.) I have to execute git add, git commit and git push (if that helps for the answer).
If the instruction says to "Initialize a Git repository in a docker container" then you are expected to:
run e.g. a Debian container
if Git is not present install it
initialize the repo
write your app
submit homework
You could:
docker run \
--interactive --tty --rm \
--name=homework \
--volume=${PWD}/homework:/homework \
--workdir=/homework \
debian:buster-slim
This will run a Debian "buster" image as a container and should (!) give you a shell prompt in the container.
A directory /homework in the container will be mapped to your host machine's ${PWD}/homework and you will be in the /homework directory when the container starts. This means that you won't lose your work if you exit the container.
From within the container's prompt:
# pwd
/homework
# git
bash: git: command not found
# apt update && apt install -y git
...
done.
# git
usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
<command> [<args>]
# git init
Initialized empty Git repository in /homework/.git/
Notes
If you exit the container, you can rerun the docker run ... command to return to it.
When you exit the container, you can ls -la ${PWD}/homework to see the .git directory (at least) persisted on your host.
Ensure you run it from the same directory where it created ${PWD}/homework. Or revise the --volume=...
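Once the repository is initialized, the per-task Git workflow the assignment mentions would look roughly like this from inside the container (the remote URL is a placeholder for wherever you host the homework):

# identify yourself once so commits have an author
git config user.name  "Your Name"
git config user.email "you@example.com"

# after each task: stage, commit, and (optionally) push
git add .
git commit -m "Task 1: read the text file"
git remote add origin https://example.com/your/homework.git   # only needed once
git push -u origin master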
I'd recommend an overall workflow of
Build the application, without Docker; then
Package it in a Docker image (if it makes sense to).
You should be able to build the application totally normally. Whatever language you're using to build the application, make sure to use its normal packaging tools. For example, your package.json/Gemfile/requirements.txt/go.mod should list out all of the library dependencies your application needs to run. Run it locally, write appropriate unit tests for it, and generally build something that works.
Once it works, then push it into Docker. You'll need to write a Dockerfile that builds the image. A generic recipe for this is
FROM language-base-image # python:3.9, node:14, ...
WORKDIR /app
COPY dependencies-file . # requirements.txt, package.json, ...
RUN install the dependencies # pip install, npm install, ...
COPY . .
RUN build the application # npm run build, ...
CMD ./the_application # npm run start, ...
You should then be able to docker build an image, and docker run a container from the resulting image. The Docker documentation includes a sample application that runs through this sequence.
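For instance, a concrete version of that recipe for a hypothetical Python application (assuming a requirements.txt and an app.py; adjust the names to your project) could be:

FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]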
Note in particular that the problem task of "read a text file" is substantially harder in Docker than without. You need to use a bind mount to give access to the host filesystem to the container, and then refer to the container-side path. For example,
docker run --rm -v $PWD/data:/data my-image \
./the_application --input /data/file.txt
I would not bother trying to use Docker as my primary development environment, especially for an introductory project. Docker is designed as an isolation system, and it's intentionally tricky to work with host files from a container, and vice versa. Especially if you can use a fairly routine programming language that you can easily install with apt-get or brew, and you don't have tricky host-library dependencies, it's substantially easier to do most of your development in an ordinary host build environment and use Docker only at a late stage.
I am still relatively new to Docker. I have two Git repos: one is a Next.js application and the other a Node.js app. I need to create a Docker container, but when building, I need to build the Next.js code and move the build folder into the Node app before creating an image. Not sure if this is possible.
I am not sure if this is the best route to take either. The end goal is to push the docker containers to AWS ECS.
For background, Next.js is a server-rendered React framework, so in QA and PROD the Node app serves the content.
By issuing RUN directives, the executed commands are committed as new layers on top of the current image. The concept behind Docker is to keep your convergence steps under source control, so that containers can be created from any point in an image's history. In this scenario, using a set of RUN instructions will commit each step as an individual layer.
# assumes a base image that already provides git and npm; node is one example
FROM node:14
WORKDIR /project
RUN git clone https://github.com/foo/next-js.git
RUN git clone https://github.com/baz/nodejs.git
WORKDIR /project/next-js
# install dependencies before building
RUN npm install
RUN npm run build
# -r is needed because build is a directory
RUN cp -r ./build ../nodejs/
You could instead incorporate the above into a bash script and bypass Docker's layering mechanism:
COPY ./setup.sh /
RUN chmod u+x /setup.sh
RUN /setup.sh
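where a hypothetical setup.sh could collapse the same steps into a single layer, roughly:

#!/bin/sh
# same steps as the RUN-based example above, executed in one layer
set -e
mkdir -p /project && cd /project
git clone https://github.com/foo/next-js.git
git clone https://github.com/baz/nodejs.git
cd next-js
npm install
npm run build
cp -r ./build ../nodejs/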
However, doing so would defeat the purpose of using Docker to begin with, unless for some reason you need to ensure your container receives a set of instructions in the form of one layer.
I've been using the GitLab Omnibus installation, but my PC has broken and won't boot now.
So I can't run GitLab, and I have to make the backup in this state.
The GitLab documentation describes how to make a backup while GitLab is running, but there is no description of how to make a backup while it is not running.
(https://docs.gitlab.com/ee/raketasks/backup_restore.html)
The repositories are already backed up; what I really want to back up are the GitLab support functions (e.g. issues, merge requests, etc.).
How can I do this?
If possible, you would need to back up the data mounted by your GitLab Omnibus image and copy that data onto a working PC, in order to run GitLab there.
Once you have a running GitLab on a new workstation, you can make a backup there.
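For example, if the broken disk is still readable from another machine, copying the mounted directories could be as simple as this (the paths below are common defaults and will differ depending on how you ran the container):

# archive the directories that were mounted into the GitLab container
tar czf gitlab-data.tar.gz /srv/gitlab/config /srv/gitlab/logs /srv/gitlab/data

# move the archive to the workstation that will run GitLab
scp gitlab-data.tar.gz user@new-workstation:/tmp/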
This is my self-answer.
There was no way to make a backup without running GitLab, because all of the database data lives in PostgreSQL.
So I installed another GitLab in Docker on my PC and attached all of the original data to it (config, repositories, database data).
Below is what I did:
install GitLab in Docker (you MUST install the specific version matching the original one)
https://docs.gitlab.com/omnibus/docker/
modify the docker run script to connect your original data to the GitLab running in Docker.
e.g.)
sudo docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--restart always \
--volume [USER_DIR]/gitlab/config:/etc/gitlab \
--volume [USER_DIR]/gitlab/logs:/var/log/gitlab \
--volume [USER_DIR]/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
run gitlab in docker
run the backup in Docker via the Omnibus package's backup method
https://docs.gitlab.com/ee/raketasks/backup_restore.html#restore-for-omnibus-installations
e.g.)
docker exec -t gitlab gitlab-rake gitlab:backup:create
After the backup is done, find your backup file at the location specified in your GitLab config file
e.g.)
[USER_DIR]/etc/gitlab/gitlab.rb
I don't agree with all of your conclusions, even though it works as a solution. It all depends on your setup, and if you have all the data on the same machine it is a setup with room for improvement.
My own setup provides both external PostgreSQL 9.x and Redis 5.x servers. The benefit of external servers combined with Docker is that backup/restore is possible using only the external servers and root access to a Docker volume on the Docker host. This solution involves fewer steps since the DBs are external.
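With that layout, the database side of a backup can be taken directly from the external servers, roughly like this (hostnames, user, and database name are placeholders):

# dump the GitLab database from the external PostgreSQL server
pg_dump -h postgres.example.com -U gitlab -Fc gitlabhq_production > gitlab-db.dump

# ask the external Redis server to persist its dataset to its RDB file
redis-cli -h redis.example.com SAVE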
I have done this a number of times and it works, but it should only be done if you know what you're doing. Some parts are the same as you discovered, like reinstalling the same version, etc.
I just wanted to point out that more than one solution exists for this problem. However, one thing that would be even more beneficial is if the GitLab team focused on PostgreSQL 11.x compatibility instead of only 10.x. I have already tested 11.x successfully in a build from source, but I am waiting for a release by the GitLab team.
I am happy you made it work!
In my team we use GitBlit as our version control system, and we are interested in the GitLab CI/CD features.
If convinced, we could import all projects in the future, but for the moment I would like to use GitLab CI while keeping GitBlit as the source control.
Is it possible?
I tried the "Import project" functionality, but it creates a new repository in GitLab and cuts the relation with GitBlit.
Thanks in advance
Your CI definition (.gitlab-ci.yml) lives in a GitLab repository and is executed when you commit / push to that repository, so in order to really test and experience the full potential of GitLab's (in my opinion awesome) CI abilities, you should just migrate a repo.
On the other hand, I do understand that is not always so easy, so I also have an alternative for you (this is, however, advanced usage):
Create a GitLab repo with only a .gitlab-ci.yml file that defines the build steps you want to execute
Add a git clone <your GitBlit repo url> . command to your .gitlab-ci.yml's before_script to get your GitBlit code into the GitLab CI job (see the sketch after this list)
Use triggers to run the pipeline whenever something is pushed to GitBlit, by adding a hook in GitBlit that sends a POST request to GitLab
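A minimal .gitlab-ci.yml for that setup might look roughly like this (the GitBlit URL and the build command are placeholders, and the code is cloned into a subdirectory to keep the job workspace clean):

before_script:
  # pull the real sources from GitBlit into the GitLab CI job
  - git clone https://gitblit.example.com/r/my-project.git src

build:
  script:
    - cd src && ./build.sh   # whatever builds/tests your project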
EDIT based on comment:
A POST-request will look like this:
curl --request POST \
--form token=TOKEN \
--form ref=master \
https://myGitlab/api/v4/projects/1/trigger/pipeline
I'm quite new to Docker, and I'm facing a problem that I have no idea how to solve.
I have a Jenkins (Docker) image running and everything was fine. A few days ago I created a job so I can run my Node.js tests every time a pull request is made. One of the job's build steps is to run npm install, and the job is constantly failing with this error:
tar (child): bzip2: Cannot exec: No such file or directory
So, I know that I have to install bzip2 inside the Jenkins container, but how do I do that? I've already tried to run docker run jenkins bash -c "sudo apt-get bzip2", but I got: bash: sudo: command not found.
With that said, how can I do that?
Thanks in advance.
The answer to this lies in the philosophy of Docker containers. Docker containers are/should be immutable. So, this is what you can try to fix this issue:
Treat your base image, i.e. jenkins, as the starting point.
Log in to this base image and install bzip2.
Commit these changes; this should result in a new image.
Now use the image from step 3 to install any other packages, like npm.
Now commit the above image.
Note: To execute commands in a more controlled way, I always prefer to use something like this:
docker exec -it jenkins bash
In a nutshell, the answer to both of your current issues lies in the fact that images are immutable, so the way to make a change that propagates is to commit it and use the newly created image for further changes. I hope this helps.
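As a rough sketch of that commit-based approach (the new image name is just an example):

# install bzip2 inside the running container, as root
docker exec -u root -it jenkins bash -c "apt-get update && apt-get install -y bzip2"

# freeze the change into a new image that future containers can use
docker commit jenkins jenkins-with-bzip2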
Lots of issues here, but the biggest one is that you need to build your images with the tools you need rather than installing inside of a running container. As techtrainer mentions, images are immutable and don't change (at least from your running container), and containers are disposable (so any changes you make inside them are lost when you restart them unless your data is stored outside the container in a volume).
I do disagree with techtrainer on making your changes in a container and committing them to an image with docker commit. This will work, but it's a hand-built method that is very error-prone and not easily reproduced. Instead, you should leverage a Dockerfile and use docker build. You can either modify the jenkins image you're using by directly modifying its Dockerfile, or you can create a child image that is FROM jenkins:latest.
When modifying this image, the Jenkins image is configured to run as the user "jenkins", so you'll need to switch to root to perform your application installs. The "sudo" app is not included in most images, but external to the container, you can run docker commands as any user. From the cli, that's as easy as docker run -u root .... And inside your Dockerfile, you just need a USER root at the top and then USER jenkins at the end.
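For example, a minimal child image along those lines might be (a sketch; the only extra package here is bzip2):

FROM jenkins:latest
USER root
RUN apt-get update && \
    apt-get install -y bzip2 && \
    rm -rf /var/lib/apt/lists/*
USER jenkins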
One last piece of advice is to not run your builds directly on the jenkins container, but rather run agents with your needed build tools that you can upgrade independently from the jenkins container. It's much more flexible, allows you to have multiple environments with only the tools needed for that environment, and if you scale this up, you can use a plugin to spin up agents on demand so you could have hundreds of possible agents to use and only be running a handful of them concurrently.