I installed GitLab on my own server (CentOS 7) with Docker and Portainer, and I created one user.
Everything seems okay, but when I click on the user URL it goes to http://0de09c2e3bc1/Parisa_hr. I don't know why it generates 0de09c2e3bc1; it looks random.
On the other hand, when I want to clone my repo I have problems too. The URLs it generates are git@0de09c2e3bc1:groupname/projectname.git and http://0de09c2e3bc1/groupname/projectname.git.
I get this error when I try to clone it:
ssh: Could not resolve hostname 0de09c2e3bc1: Temporary failure in name resolution
fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists.
I don't know what makes it use 0de09c2e3bc1; I think I should be seeing my server's IP address instead.
I noticed that 0de09c2e3bc1 is the container's hostname, because when I check its console in Portainer I see:
root@0de09c2e3bc1:/#
Now, how can I fix it?
I also changed external_url to https://IP:port of my server, but it didn't work.
Double-check your external_url, which is part of the generated URLs on each request.
This gist about installing Portainer and GitLab shows a docker run command like:
docker run --detach \
--name gitlab \
--publish 8001:80 \
--publish 44301:443 \
--publish 2201:22 \
--hostname gitlab.c2a-system.dev \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.c2a-system.dev/'; gitlab_rails['gitlab_shell_ssh_port'] = 2201;" \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
--restart unless-stopped \
gitlab/gitlab-ce:latest
See Pre-configure Docker container, using the environment variable GITLAB_OMNIBUS_CONFIG.
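In your case, a sketch of fixing the already-running container would be to point external_url at your server's address and reconfigure (203.0.113.10 and the container name gitlab are placeholders; note that when external_url contains a port, the bundled nginx listens on that port inside the container, so the published port mapping has to match):
# edit the Omnibus config inside the container (or in the mounted config volume)
docker exec -it gitlab editor /etc/gitlab/gitlab.rb
# set, for example:
#   external_url 'http://203.0.113.10'
#   gitlab_rails['gitlab_shell_ssh_port'] = 2201
# then apply the change
docker exec gitlab gitlab-ctl reconfigure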
Then, to access a private repository like http://gitlab.c2a-system.dev/groupname/projectname.git, you will need to define a credential helper and store your personal access token (PAT):
git config --global credential.helper cache
printf "host=gitlab.c2a-system.dev\nprotocol=http\nusername=YourGitLabAccount\npassword=YourGitLabToken"|\
git credential-cache store
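With the token cached, a clone over HTTP should then work without prompting (group and project names as in your example):
git clone http://gitlab.c2a-system.dev/groupname/projectname.git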
I am confused.
For now, I just want to self-host GitLab in my local home network without exposing it to the internet. Is this possible? If so, can I do this without installing ca-certificates?
Why does GitLab (seemingly) force me to expose my GitLab server to the internet?
Nothing else I've installed locally on my NAS/server requires CA certificates for me to connect to its web service: I can just go to xyz.456.abc.123:port in Chrome.
For example, in this article a public URL is referenced: https://www.cloudsavvyit.com/2234/how-to-set-up-a-personal-gitlab-server/
You don't need to install certificates to use GitLab and you do not have to have GitLab exposed to the internet to have TLS security.
You can also opt to not use TLS/SSL at all if you really want. In fact, GitLab does not use HTTPS by default.
Using docker is probably the easiest way to demonstrate it's possible:
mkdir -p /opt/gitlab
export GITLAB_HOME=/opt/gitlab
docker run --detach \
--hostname localhost \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
-e GITLAB_OMNIBUS_CONFIG='external_url "http://localhost"' \
gitlab/gitlab-ee:latest
# give it 15 or 20 minutes to start up
curl http://localhost
You can replace http://localhost in the external_url configuration with the computer hostname you want to use for your local server or even an IP address.
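For example, a sketch using a LAN IP address instead (192.168.1.50 is a placeholder for your machine's address; everything else is unchanged):
docker run --detach \
--hostname 192.168.1.50 \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
-e GITLAB_OMNIBUS_CONFIG='external_url "http://192.168.1.50"' \
gitlab/gitlab-ee:latest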
I am using Sematext to monitor a small composition of Docker containers, plus the Logsene feature to gather the web traffic logs from one container running a Node Express web application.
It all works fine until I update and restart the web server container to pull in a new code build. At this point, Sematext Logsene seems to get detached from the container, so I lose the HTTP log trail in the monitoring. I still see the Docker events, so it seems only the logs part is broken.
I am running Sematext "manually" (i.e. it's not in my Docker Compose) like this:
sudo docker run -d --name sematext-agent --restart=always -e SPM_TOKEN=$SPM_TOKEN \
-e LOGSENE_TOKEN=$LOGSENE_TOKEN -v /:/rootfs:ro -v /var/run/docker.sock:/var/run/docker.sock \
sematext/sematext-agent-docker
And I update my application simply like this:
docker-compose pull web && docker-compose up -d
where web is the web application service name (amongst database, memcached, etc.); this recreates and restarts the web container.
At this point Sematext stops forwarding HTTP logs.
To fix it I can restart Sematext agent like this:
docker restart sematext-agent
And the HTTP logs start arriving in their dashboard again.
So, I know I could just append the agent restart command to my release script (sketched below), but I am wondering if there's a way to prevent it from becoming detached in the first place? I guess it's something to do with how it monitors the run files.
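For reference, that stop-gap would just be the two commands above chained in the deploy script:
#!/bin/sh
# redeploy the web service, then kick the agent so the HTTP logs keep flowing
docker-compose pull web && docker-compose up -d
docker restart sematext-agent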
I have searched their documentation and FAQs, but not found anything specific about this effect.
I seem to have fixed it, but not in the way I'd expected.
While looking through the documentation I found that the sematext-agent-docker package, with the Logsene integration built in, has been deprecated and replaced by two separate packages:
"This image is deprecated.
Please use sematext/agent for monitoring and sematext/logagent for log collection."
https://hub.docker.com/r/sematext/sematext-agent-docker/
You now have to use both Logagent (https://sematext.com/docs/logagent/installation-docker/) and the new Sematext Agent (https://sematext.com/docs/agents/sematext-agent/containers/installation/).
With these both installed, I did a quick test by pulling a new container image, and it seems that the logs still arrive in their web dashboard.
So perhaps the problem was specific to the previous package, and this new agent can "follow" the container rebuilds better somehow.
So my new commands are (copied from the documentation, but using env vars for the keys):
docker run -d --name st-logagent --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-e LOGS_TOKEN=$SEMATEXT_LOGS_TOKEN \
-e REGION=US \
sematext/logagent:latest
docker run -d --restart always --privileged -P --name st-agent \
-v /:/hostfs:ro \
-v /sys/:/hostfs/sys:ro \
-v /var/run/:/var/run/ \
-v /sys/kernel/debug:/sys/kernel/debug \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-e INFRA_TOKEN=$SEMATEXT_INFRA_TOKEN \
-e CONTAINER_TOKEN=$SEMATEXT_CONTAINER_TOKEN \
-e REGION=US \
sematext/agent:latest
where:
CONTAINER_TOKEN replaces the old SPM_TOKEN
LOGS_TOKEN replaces the old LOGSENE_TOKEN
INFRA_TOKEN is new to me
I will see if this works in the long run (not just the quick test).
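For reference, the quick test was just (container names as above; web is the compose service from my question):
docker-compose pull web && docker-compose up -d
# both agents should still show as "Up"
docker ps --filter name=st-agent --filter name=st-logagent
# recent output here suggests logs are still being shipped
docker logs --tail 20 st-logagent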
I am new to Docker. I'm trying to get the atmoz/sftp container to work with Azure Storage.
My goal is to have multiple SFTP users who will upload files to their own folders which I can then find on Azure Storage.
I used the following command:
az container create \
--resource-group test \
--name testsftpcontainer \
--image atmoz/sftp \
--dns-name-label testsftpcontainer \
--ports 22 \
--location "East US" \
--environment-variables SFTP_USERS="ftpuser1:yyyy:::incoming ftpuser2:xxx:::incoming" \
--azure-file-volume-share-name test-sftp-file-share \
--azure-file-volume-account-name storagetest \
--azure-file-volume-account-key "zzzzzz" \
--azure-file-volume-mount-path /home
The container is created and runs, but when I try to connect via FileZilla it fails and I see this in the log:
Accepted password for ftpuser2 from 10.240.xxx.xxx port 64982 ssh2
bad ownership or modes for chroot directory component "/home/"
If I use /home/ftpuser1/incoming it works, but only for one of the users.
Do I need to change permissions on the /home directory first? If so, how?
Of course, you can mount the Azure file share to the container directory /home, and it works perfectly on my side.
I also made a test with the image atmoz/sftp, and it also works fine. The command is here:
az container create -g myResourceGroup \
-n azuresftp \
--image atmoz/sftp \
--ports 22 \
--ip-address Public \
-l eastus \
--environment-variables SFTP_USERS="ftpuser1:yyyy:::incoming ftpuser2:xxx:::incoming" \
--azure-file-volume-share-name fileshare \
--azure-file-volume-mount-path /home \
--azure-file-volume-account-name xxxxxx \
--azure-file-volume-account-key xxxxxx
Update:
Given your requirements: the error shows bad ownership, and right now it is not possible to control the permissions when you mount the Azure file share to the path /home or /home/user. So I recommend you mount the Azure file share to each user's upload directory (e.g. /home/ftpuser1/incoming in your setup), and it will give you the same result you need.
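For example, a sketch of that per-user approach for ftpuser1, reusing the values from the question (as far as I know these CLI flags describe only one Azure file volume, so with several users you would repeat the mount via a YAML deployment file instead):
az container create \
--resource-group test \
--name testsftpcontainer \
--image atmoz/sftp \
--dns-name-label testsftpcontainer \
--ports 22 \
--location "East US" \
--environment-variables SFTP_USERS="ftpuser1:yyyy:::incoming" \
--azure-file-volume-share-name test-sftp-file-share \
--azure-file-volume-account-name storagetest \
--azure-file-volume-account-key "zzzzzz" \
--azure-file-volume-mount-path /home/ftpuser1/incoming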
I could not find a solution to the problem. In the end I used another approach:
- I mounted the Azure storage into another, unrelated folder, /mnt/sftpfiles
- After the container was up, I ran these commands in it:
apt update
apt-get -y install lsyncd
lsyncd -rsync /home /mnt/sftpfiles
These install a tool called lsyncd, which watches for file system changes and copies files to the other folder when a change occurs.
This solves my requirement but it has a side effect of duplicating all files (that's not a problem for me).
I'm still open to other suggestions that would help me make this cleaner.
I need all information from the Tor browser to be deleted.
Is it possible to create a Docker image with a normal browser behind a Tor proxy and use it through ssh -X?
Running it with --rm=true would automatically delete the container data and always use the same configuration.
Is it possible to use this container in the cloud, for example on AWS, Azure, etc.?
Is it possible to mount the download directory on my host machine?
If you're on Linux or Mac you can do this.
See item 9 in Jess' blog post: Docker Containers on the Desktop:
# -v mounts the X11 socket, -e DISPLAY passes the display, --device adds sound
docker run -it \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
--device /dev/snd \
--name tor-browser \
jess/tor-browser
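To cover the clean-up part of the question: adding --rm makes Docker remove the container (and its writable layer) as soon as it exits, so nothing from the browsing session is kept. A sketch based on the command above:
docker run --rm -it \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
--device /dev/snd \
--name tor-browser \
jess/tor-browser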
I know how to run a registry mirror
docker run -p 5000:5000 \
-e STANDALONE=false \
-e MIRROR_SOURCE=https://registry-1.docker.io \
-e MIRROR_SOURCE_INDEX=https://index.docker.io \
registry
and how to use it
docker --registry-mirror=http://10.0.0.2:5000 -d
But how can I use multiple registry mirrors?
This is what I need:
- a Docker Hub mirror
- a Google Container Registry mirror for k8s
- a private registry
So I have to run two registry mirrors and a private registry: one docker run for the 1st registry mirror, one for the 2nd, and one more docker run for the registry holding my private images. The client will use all three of these registries.
I have no clue how to do this. I think this is a common use case; please help, thanks.
You can use an imagePullSecret to tell Kubernetes which registry to get your containers from. Please see:
http://releases.k8s.io/release-1.0/docs/user-guide/images.md#specifying-imagepullsecrets-on-a-pod
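A minimal sketch with current kubectl syntax (the linked doc targets an older release; the registry address, credentials, and image name below are placeholders):
# store the private registry credentials as a pull secret
kubectl create secret docker-registry my-registry-key \
--docker-server=10.0.0.2:5000 \
--docker-username=myuser \
--docker-password=mypassword \
--docker-email=me@example.com

# reference the secret from the pod so the kubelet can pull from that registry
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: 10.0.0.2:5000/myteam/myapp:latest
  imagePullSecrets:
  - name: my-registry-key
EOF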