This project was scheduled for deletion, but failed with the following message: Failed to open TCP connection to <host>:5000 - gitlab

The project has this message:
This project was scheduled for deletion, but failed with the following
message: Failed to open TCP connection to host.ru:5000 (Connection
refused - connect(2) for "host.ru" port 5000)
Can you tell me what this might be related to? Why does GitLab use a different port for deletion?
(default port is 30443)
How do I delete this message?
A lot of questions, but I really don't understand what this message means; clearly it is an error :)
GitLab is running in Docker.
P.S. I am now checking whether the port is open.
UPDATE: If you don't need the Container Registry, disable it. This solves the problem.
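If you do go the route of disabling the Container Registry on an Omnibus-based GitLab, here is a minimal sketch; the registry['enable'] and gitlab_rails['registry_enabled'] keys are what I would expect for an Omnibus install, so verify them against your GitLab version's documentation.
# In /etc/gitlab/gitlab.rb, turn off the Container Registry integration (keys assumed; check your version)
registry['enable'] = false
gitlab_rails['registry_enabled'] = false
# Apply the change
gitlab-ctl reconfigure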

Issue:
GitLab runs in one Docker container and the registry runs in another. The GitLab container cannot resolve the DNS name of the registry and reports the error
"this project was scheduled for deletion, but failed with the following message: failed to open tcp connection to registry:5000 (getaddrinfo: name or service not known)"
Solution:
1. Inspect the registry container to get its IP address:
docker inspect (registry container name)
E.g. docker inspect registry
2. Log in to the GitLab container:
E.g. docker exec -it gitlab bash
3. Edit the hosts file:
vi /etc/hosts
4. Add the IP address and DNS name mapping for the registry container to the hosts file:
172.xx.x.1 registry
This resolves the issue. No restarts are required.
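A consolidated sketch of the same steps, assuming the containers are named registry and gitlab (adjust the names to your setup):
# Get the registry container's IP address on its Docker network
REGISTRY_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' registry)
# Append a hosts entry inside the GitLab container so the name "registry" resolves
docker exec gitlab sh -c "echo '$REGISTRY_IP registry' >> /etc/hosts"
# Verify name resolution from inside the GitLab container
docker exec gitlab getent hosts registry
# Note: an entry added this way does not persist if the container is recreated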

Related

Docker: Error starting userland proxy: Bind for 0.0.0.0:50000: unexpected error Permission denied on Azure VM

I'm new to Docker, so please be kind, but I am testing it out on a Windows 10 image on Azure (I know I could run it directly, but I wanted to try it in a VM first).
I have a fresh Windows 10 image that I have installed Docker for Windows 2.0.0 on.
Note: I did not tick the option to use Windows containers instead of linux containers.
Once it installed (and rebooted) I was prompted to install Hyper-V and Containers features (causing restarts).
Once it was all installed, I opened an administrative PowerShell window to download Jenkins:
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
This gave me the error:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint goofy_lederberg (deaba2deeea0486c92ba8a1a32740295f03859b1b5829d39e39eff0b24613ebf): Error starting userland proxy: Bind for 0.0.0.0:50000: unexpected error Permission denied.
I thought this was strange, as 50000 wasn't a port I expected to be in use; changing it to a different port (50001) produced the same error.
Running:
netstat -a -n -o
Showed that the port was not in use.
If I remove -p 50000:50000 from the command it can bind and start Jenkins, but I assume it needs this port mapping to work correctly.
Previous posts have suggested stopping the World Wide Web Publishing service but that isn't installed.
There are no other running Docker containers.
I assume the port is in use or something is stopping the port mapping.
Assuming a user has permission to create a port binding from their terminal, are there any other techniques besides netstat to determine whether something is bound to a port, either something internal to Docker's own checks or something at the host OS level?
Rather embarrassingly, this worked this morning with no changes other than the VM being shut down over the weekend.
Maybe all it needed was a reboot?
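On the question of checking port bindings beyond netstat: on Windows hosts, a bind can also fail with a permission error when the port falls inside a reserved TCP port exclusion range (Hyper-V commonly reserves such ranges), even though nothing is listening on it. A sketch of checks to run from an administrative prompt; the port number is just the one from this question:
# Show whether any process is listening on the port
netstat -a -n -o | findstr :50000
# Show TCP port ranges reserved by Windows/Hyper-V; binding inside an excluded range is denied
netsh interface ipv4 show excludedportrange protocol=tcp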

Rancher - standard_init_linux.go:190: exec user process caused "permission denied"

I'm trying to deploy Kubernetes via Rancher on a single server.
I created a new Cluster and added a new node.
But after a while, an error occurred:
This cluster is currently Provisioning; areas that interact directly with it will not be available until the API is ready.
[controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Failed to check https://localhost:6443/healthz for service [kube-apiserver] on host [172.26.116.42]: Get https://localhost:6443/healthz: dial tcp [::1]:6443: connect: connection refused, log: standard_init_linux.go:190: exec user process caused "permission denied"
And when I check my Docker containers, one of them is always restarting: rancher/hyperkube:v1.11.3-rancher1.
I ran docker logs my_container_id
and it shows standard_init_linux.go:190: exec user process caused "permission denied"
On the cloud vm, the config is:
OS: Ubuntu 18.04.1 LTS
Docker Version: 18.06.1-ce
Rancher: Rancher v2
Do you have any ideas about this error?
Thanks a lot ;)
What type of architecture are you on?
Please run:
uname --all
or
docker info | grep -i "Architecture"
to check this.
Rancher is only supported on x86.
Finally, I called the VM sub-contractor: they had created the VM with a noexec /var partition.
After a remount, it worked.
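A sketch of how to check for and temporarily work around a noexec /var mount (the mount point and fstab layout may differ on your VM):
# Check whether /var is mounted with the noexec option
findmnt -no OPTIONS /var
# Temporarily remount with exec so binaries under /var/lib/docker can be executed
sudo mount -o remount,exec /var
# For a permanent fix, remove "noexec" from the corresponding entry in /etc/fstab and remount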

Connecting to Azure Container Services from Windows 8.1 Docker

I've been following this tutorial to set up an Azure container service. I can successfully connect to the master load balancer via putty. However, I'm having trouble connecting to the Azure container via docker.
~ docker -H 192.168.33.400:2375 ps -a
error during connect: Get https://192.168.33.400:2375/v1.30/containers/json?all=1: dial tcp 192.168.33.400:2375: connectex: No connection could be made because the target machine actively refused it.
I've also tried
~ docker -H 127.0.0.1:2375 ps -a
This causes the docker terminal to hang forever.
192.168.33.400 is my docker machine ip.
My guess is I haven't set up the tunneling correctly and this has something to do with how Docker runs on Windows 8.1 (via a VM).
I've created an environment variable called DOCKER_HOST with a value of 2375. I've also tried changing the value to 192.168.33.400:2375.
I've tried the following tunnels in putty,
1. L2375 192.168.33.400:2375
2. L2375 127.0.0.1:2375
3. L22375 192.168.33.400:2375
4. L22375 127.0.0.1:2375 (as shown in the video)
Does anyone have any ideas/suggestions?
Here are some screenshots of the commands I ran:
We can follow these steps to set up the tunnel:
1. Add the Azure Container Service FQDN to PuTTY.
2. Add the private key (PPK) to PuTTY.
3. Add the tunnel information to PuTTY.
Then we can test it from cmd:
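A sketch of the test, assuming the tunnel forwards local port 2375 to the Docker endpoint on the master (the user name and FQDN are placeholders; an equivalent tunnel with the OpenSSH client is shown for comparison):
# Equivalent tunnel with the OpenSSH client instead of PuTTY
ssh -fNL 2375:localhost:2375 azureuser@mymastermgmt.westus.cloudapp.azure.com
# Point the Docker CLI at the tunnel; note DOCKER_HOST needs a full address, not just a port number
set DOCKER_HOST=tcp://localhost:2375
docker ps -a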

What server URL should one provide for TeamCity agent in Docker?

The problem. I am trying to create a TeamCity infrastructure (a server and an agent) on Ubuntu Linux 16.04.1 LTS using Docker. I have run a Docker container with jetbrains/teamcity-server image as described on this page. It is possible to access the TeamCity server via web browser using the IP address of the server and port 8111.
Now I try to run a Docker container with an agent as described on this page. It is written: Note that "localhost" will generally not work, as that will refer to the "localhost" inside the container. Well, when I supply "http://localhost:8111", or "http://127.0.0.1:8111", or "http://my_server_ip:8111" to the run script for the agent container, I eventually get 1) "WARN - buildServer.AGENT.registration - Error registering on the server via URL http://localhost:8111 (sic! always localhost). Will continue repeating connection attempts.", or 2) "WARN - buildServer.AGENT.registration - Error while asking server for the communication protocols via URL http://localhost:8111/app/agents/protocols."
Also I have tried to reveal the IP address of the Docker container running the server and supply it for the agent running script. But the result was the same.
Question. What server URL should I provide? Are there any implicit steps in the TeamCity-with-Docker configuration that I am missing?
You can use the --link parameter to link containers:
Start your jetbrains/teamcity-server and use --name teamcity-server to give it a descriptive name
Start the agent container and use --link teamcity-server to enable connectivity to the teamcity-server container
Inside of your agent container you can now use teamcity-server as the hostname to connect to the teamcity-server container
Please also check out Docker container networking, which superseded the --link feature.
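A minimal sketch of both approaches, assuming the official JetBrains images and the SERVER_URL environment variable that the agent image reads (verify against the image documentation; the container names and port are examples):
# Option 1: legacy --link
docker run -d --name teamcity-server -p 8111:8111 jetbrains/teamcity-server
docker run -d --name teamcity-agent --link teamcity-server -e SERVER_URL=http://teamcity-server:8111 jetbrains/teamcity-agent
# Option 2: user-defined network (the approach that superseded --link)
docker network create teamcity-net
docker run -d --name teamcity-server --network teamcity-net -p 8111:8111 jetbrains/teamcity-server
docker run -d --name teamcity-agent --network teamcity-net -e SERVER_URL=http://teamcity-server:8111 jetbrains/teamcity-agent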

Proftpd directory listing error on Docker container

I have been using proftpd on Ubuntu inside a Docker container. It logs in successfully but fails to get a directory listing.
Here is the screenshot from FileZilla,
and a screenshot of the proftpd log file.
Any help?
The problem is that proftpd advertises the internal IP address (172....), so the client cannot connect to it in passive mode.
You can solve this by setting (in the proftpd.conf)
MasqueradeAddress externalIP
or by running the container using:
docker run --net=host .....
This option uses the host's network, so passive mode will work fine.
Make sure to expose the configured passive ports (e.g. PassivePorts 60000 65534) on the running container to allow incoming connections.
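A minimal sketch of the MasqueradeAddress/PassivePorts route (the external IP, passive-port range, and image name below are placeholders):
# proftpd.conf inside the container: advertise the public IP and pin the passive-mode port range
MasqueradeAddress 203.0.113.10
PassivePorts 60000 60100
# Run the container publishing the FTP control port and the same passive range
docker run -d -p 21:21 -p 60000-60100:60000-60100 my-proftpd-image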
Looks like the ftpd is having a permissions problem changing the running user of some sort.
Try setting the ftpd to run as the user you are logging in with, using Docker's USER instruction, e.g. USER userftp (https://docs.docker.com/reference/builder/#user), in your Dockerfile.
Remember that you can make it listen on a port > 1024 and use -p 21:2121 when starting the container to make it available on port 21 to the outside world.
It would be helpful if you posted the Dockerfile and configuration you are using so we can test this out ourselves.
