New user to GitLab and trying to set my Project up for the first time.
I've set up GitLab with Docker, and I think that means I've set up a local server for it using Docker(?).
I've then gone and created my project and added an SSH key, but when I try to use the command ssh -T git@gitlab.com it fails.
I think it's because I have a different domain instance name.
My problem is: what is my domain instance name, and how do I find it out?
To access GitLab I just type localhost in the browser, and besides that I think it's linked to one of my emails, but neither works in the command.
If you reach the web GUI at localhost, then your hostname/domain for the SSH connection is the same: localhost.
You should make sure to publish the SSH port (22) of the container when you run it.
Add -p 2222:22 to your docker run command; this maps container port 22 (SSH) to host port 2222, because port 22 on the host is already taken by its own SSH daemon.
Edit
Just tested it on my computer.
After you open the port, the docker run command for the gitlab container looks something like this:
docker run -dit --name gitlab -p 2222:22 -p 8080:80 gitlab/gitlab-ce
Note the -p 2222:22; that is probably what you are missing.
You should then be able to connect using ssh with this command:
ssh -T git@localhost -p 2222
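If you don't want to pass the port every time, you can also add an entry to ~/.ssh/config on your machine; a small sketch (the gitlab-local alias is just an illustration):
Host gitlab-local
    HostName localhost
    Port 2222
    User git
After that, ssh -T gitlab-local should work, and you can clone with git clone gitlab-local:<group>/<project>.git.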
Good luck.
I am having problems using SSH to connect into a Docker container (from this image) running Alpine Linux 3.10.
SSH must be used for this connection, as I am using the backup software Barman, which requires an SSH connection to the PostgreSQL 11 database running inside the Docker container.
First I connected to the Docker container using
docker exec -it <container_name> /bin/bash
then tried to reinstall and start sshd
bash-5.0# apk add openssh --no-cache
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
OK: 117 MiB in 42 packages
bash-5.0# apk add openrc --no-cache
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
OK: 117 MiB in 42 packages
bash-5.0# rc-update add sshd
* rc-update: sshd already installed in runlevel `sysinit'; skipping
bash-5.0# /etc/init.d/sshd start
* WARNING: sshd is already starting
However, I am unable to connect to the local sshd server from inside the Docker container:
# ssh root@127.0.0.1
ssh: connect to host 127.0.0.1 port 22: Connection refused
Similarly, connecting into the Docker container via SSH from the Ubuntu host machine fails as well.
$ ssh postgres@172.26.0.4
ssh: connect to host 172.26.0.4 port 22: Connection refused
where 172.26.0.4 is the IP address shown from running ifconfig inside the Docker container.
Any ideas how we can solve this?
I didn't download the image you're referring to, but I worked with the default Docker alpine image. To get ssh to run inside the container, a few extra steps were required; there's a good chance that you need to take the same ones:
ssh-keygen -A                  # generate the missing SSH host keys
rc-status                      # check/initialise OpenRC's service state
touch /run/openrc/softlevel    # OpenRC expects this file to exist
/etc/init.d/sshd start         # sshd can now actually start
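Once sshd is up, it's worth verifying that it is actually listening before you retry the connection (netstat here is the BusyBox applet that ships with Alpine):
netstat -tln | grep :22    # sshd should show up bound to 0.0.0.0:22
Note that logging in as root additionally requires PermitRootLogin to be enabled in /etc/ssh/sshd_config.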
This might be relevant to your issue if you used --net=host with your docker run command:
We were facing similar issues when running an SSH server inside our Ubuntu container.
We realized that the SSH server on the container's host was running, and it used the same port that the container's SSH server wanted to use.
We changed the port used by the SSH server to solve this issue.
Please note, our Docker container used --net=host, and hence both container and host had the same IP address. Hence, the two SSH servers were fighting over a single port, and that didn't allow the server inside the container to start properly.
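For example, a minimal sketch of such a change (the port number 2222 is arbitrary, and user/host_ip are placeholders):
# in the container's /etc/ssh/sshd_config:
Port 2222
# restart sshd, then connect with the new port:
ssh -p 2222 user@<host_ip>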
Is there a way to bind ports to containers without passing an argument via the run command? I do not like starting my containers with the 'docker run' command, so using the -p argument is not an option for me. I like to start my containers with the 'docker start containername' command.

I would like to specify the hostname of the docker-server with the port number (http://dockerserver:8081), and this should then be forwarded to my container's app, which is listening on port 8081. My setup is on Azure but is pretty basic, so the Azure docker plugin looks a bit like overkill.

I read up about the expose command, but it seems like you still need to use the 'docker run -p' command to get access to the container from the outside. Any suggestions would be very much appreciated.
docker run is just a shortcut for docker create + docker start. Ports need to be published when a container is created, so the -p option is available in docker create:
docker create -p 80:80 --name web nginx:alpine
docker start web
Port publishing only maps ports, though.
If you want the hostname passed to the container, you'll need to do it with a command option or (more likely) an environment variable - defined with ENV in the Dockerfile and passed with -e in docker create.
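A minimal sketch of that pattern (the HOST_URL variable and the image/container names are placeholders):
# in the Dockerfile:
ENV HOST_URL=http://localhost:8081
# at create time, override it and publish the port:
docker create -p 8081:8081 -e HOST_URL=http://dockerserver:8081 --name myapp myimage
docker start myapp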
At the moment I'm running a node.js application inside a docker container which needs to connect to camunda, which runs in another container.
I start the containers with the following commands:
docker run -d --restart=always --name camunda -p 8000:8080 camunda/camunda-bpm-platform:tomcat-7.4.0
docker run -d --name app -p 3000:3000 app
Both applications are now running and I can access camunda by navigating to my host's IP on port 8000, and running wget http://localhost:8000 -q -O - also returns the camunda page. When I log in to my app container with docker exec -it app sh and type wget http://localhost:8000 -q -O -, I cannot access camunda. Instead I get the following error:
wget: can't connect to remote host (127.0.0.1): Connection refused
When I link my app container to the camunda container with --link camunda:camunda, and type wget http://camunda:8000 -q -O - in my app container, I get the following error:
wget: can't connect to remote host (172.17.0.4): Connection refused
I've seen this option, so I started my app container with --add-host camunda:my_hosts_ip and tried wget again, resulting in:
wget: can't connect to remote host (149.210.227.191): Operation timed out
When running wget http://149.210.227.191:5001 -q -O - on my host machine however, I get a correct response immediately.
Ideally I would like to just start my app container without the need to supply the external IP in any way, and let the app container just use the camunda service via localhost or by linking the camunda container to my app container. What would be the easiest way to achieve this?
Why does it not work?
Containers and the host do not share their local IP stack. Thus, when you are inside a container and point anything at localhost:port, that command will try to connect to the container's own local IP stack, not the other container's and not the host's.
How to make it work?
Hard way: find out the IP address of the other container and connect to that IP address directly.
Easier and cleaner way: link your containers.
--link=[]
Add link to another container in the form of <name or id>:alias or just <name or id> in which case the alias will match the name
So, assuming the camunda container is named camunda, you'll need to run:
docker run -d --name app -p 3000:3000 --link camunda app
Then, once you have docker exec-ed into the app container, you will be able to execute wget http://camunda:8080 -q -O - without error.
Note that while the linked-container graph cannot contain loops (e.g., camunda cannot in turn be linked to app, since a container needs to be running before you can link to it), you can still do whatever you want/need by working with IP addresses directly.
Note also that you can specify the IP address of a container using the --ip option (though it can only be used in conjunction with --net for user-defined networks).
Original answer below. Note that link has been deprecated and the recommended replacement is network. That is explained in the answer to this question: docker-compose: difference between network and link
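For reference, the same setup on a user-defined network might look like this (the network name camunda-net is arbitrary; Docker's embedded DNS then resolves container names on that network):
docker network create camunda-net
docker run -d --restart=always --name camunda --network camunda-net -p 8000:8080 camunda/camunda-bpm-platform:tomcat-7.4.0
docker run -d --name app --network camunda-net -p 3000:3000 app
# inside the app container, camunda now resolves by name:
wget http://camunda:8080 -q -O -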
--
Use the --link camunda:camunda option for your app container. Then you can access camunda via http://camunda:8080/.... The link option adds an entry to the /etc/hosts file of the app container with the IP address of the camunda container. This also means you have to restart your app container if you restart the camunda container.
I have an Eclipse instance running on Ubuntu Linux in a Docker container. This container runs on a CentOS host with no physical display, and I would like to forward X11 from the Docker container to my laptop (running Windows) through the CentOS host.
The Docker container runs with:
docker run --name docker-eclipse -p 5000:5000/tcp -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix
While I can forward X11 from the host to my laptop with no problems, I'm not able to start Eclipse inside the container, because it dies with "Cannot open display:".
What I'd like is
laptop --> remote host --> docker container running eclipse
What is the best way to do that?
This might work (server is assumed to be the remote host running Docker, laptop is assumed to be the local host from which you want the GUI):
Connect to the server.
Mount the laptop's X11 socket on the server through sshfs: user@server:$ sshfs laptop:/tmp/.X11-unix /tmp/.X11-unix.
Start the container with something like user@laptop:$ ssh -X server docker run --name docker-eclipse -p 5000:5000/tcp -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix.
I'm not sure this would work, and it does not feel like the cleanest way of doing it, but what you want to perform is quite... unusual (though it would be something really great!).
Comment your feedback!
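Whichever route you take, a quick sanity check from inside the container tells you whether the display forwarding actually arrives there (xeyes comes from the x11-apps package on Ubuntu and may need installing first):
docker exec -it docker-eclipse bash
echo $DISPLAY    # should print a display such as :0 or localhost:10.0
xeyes            # any tiny X client will do; if it shows up on the laptop, Eclipse should too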
I am completely stuck on the following.
Trying to set up an Express app in Docker on an Azure VM.
1) The VM is all good after using docker-machine create --driver azure ...
2) Building the image is all good with:
# Dockerfile
FROM iojs:onbuild
ADD package.json package.json
ADD src src
RUN npm install
EXPOSE 8080
CMD ["node", "src/server.js"]
Here's where I'm stuck:
I have tried all of the following plus many more:
• docker run -P (then adding endpoints in Azure)
• docker run -p 80:8080
• docker run -p 80:2756 (2756, the port created during docker-machine create)
• docker run -p 8080:80
It would help if someone could explain Azure's setup with the VIP vs. the internal IP vs. Docker's EXPOSE.
So at the end of all this, for every port that I try to hit with Azure's:
AzureVirtualIP:ALL_THE_PORT
I just always get back ERR_CONNECTION_REFUSED.
The Express app is definitely running, because I get the console log info.
Any ideas?
Thanks
Starting from the outside and working your way in, debugging:
Outside Azure
<start your container on the Azure VM, then>
$ curl $yourhost:80
On the VM
$ docker run -p 80:8080 -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64f4d98b9c75 laslo:latest node src/server.js 5 seconds ago up 5 seconds 0.0.0.0:80->8080 something_funny
$ curl localhost:80
That 0.0.0.0:80->8080 shows you that your port forwarding is in effect. If you're running other containers, don't have the right privileges, or have other networking problems, Docker might give you a container without the port forwarding in place.
If this works but the first test didn't, then you didn't open the ports to your VM correctly. It could be that you need to set up the Azure endpoint, or that you've got a firewall running on the VM.
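A quick way to tell those two cases apart from the VM itself (assuming standard Linux tooling is available there):
sudo netstat -tlnp | grep :80    # is anything listening on the VM's port 80?
sudo iptables -L -n              # any REJECT/DROP rules that would block the traffic?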
In the container
$ docker run -p 80:8080 --name=test -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker exec -it test bash
# curl localhost:8080
In this last one, we get inside the container itself. Curl might not be installed, so maybe you have to apt-get install curl first.
If this doesn't work, then your Express server isn't listening on port 8080 inside the container, and you need to check the setup.
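A common culprit at this stage is the server binding only to 127.0.0.1 inside the container; the port mapping only works if the app listens on all interfaces. You can check this from inside the container (netstat may need installing via apt-get install net-tools first):
# netstat -tln | grep 8080
An address of 0.0.0.0:8080 means it listens on all interfaces (good); 127.0.0.1:8080 means loopback only, and the mapped port will refuse connections from outside.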