Summary
I've set up a registry container on a Debian host, running on the default port 5000.
The Debian host runs as a virtual machine on a VMware system. Port 5000 is open.
docker run -d -p 5000:5000 --name registry --restart=always registry:2
I then tagged an image for pushing to the registry
docker tag test-image localhost:5000/kp/testing:1.0.0
and tried pushing it
docker push localhost:5000/kp/testing:1.0.0
but it fails with Get http://localhost:5000/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers).
The registry container's log output comes up empty, as if the request never reaches it.
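(For reference, I'm checking the container output with the standard Docker logs command, using the container name from the run command above:)

docker logs -f registry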
What I tried
I then tried to cURL the _catalog endpoint, and it just gets stuck while receiving the response headers; the connection itself seems to be successful.
curl -v http://localhost:5000/v2/_catalog
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 5000 (#0)
> GET /v2/_catalog HTTP/1.1
> Host: localhost:5000
> User-Agent: curl/7.52.1
> Accept: */*
>
I also tried creating a hostname for the registry on the host machine and setting that as the registry connection address, but that ended with the same result.
In addition, I also tried adding the hostname to the insecure-registries array in /etc/docker/daemon.json, but it still ends with the same error.
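For reference, a minimal /etc/docker/daemon.json with such an entry looks like this (the hostname below is just a placeholder for the one I created), followed by a Docker restart for it to take effect:

{
  "insecure-registries": ["registry.local:5000"]
}

sudo systemctl restart docker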
I then tried setting it up with TLS using a self-signed certificate. Again, the connection seems to be established in cURL but no response headers are received.
Works remotely
Out of curiosity, I tried accessing it remotely, so I cURL'ed the same address using the Debian host's IP, and it works!
curl -v http://<host-ip>:5000/v2/_catalog
* Trying <host-ip>...
* TCP_NODELAY set
* Connected to <host-ip> (<host-ip>) port 5000 (#0)
> GET /v2/_catalog HTTP/1.1
> Host: <host-ip>:5000
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< X-Content-Type-Options: nosniff
< Date: Tue, 09 Jan 2018 07:30:30 GMT
< Content-Length: 20
<
{"repositories":[]}
To the question
It seems really unrealistic for it not to work locally on Debian, as I've set it up using localhost on both a macOS and an Arch Linux machine. I don't think the VMware system could be interfering with local connectivity, especially since it works remotely.
Have I missed something which is preventing the registry to be accessible locally?
Don't you need a -v mount point for storing files in a local directory, like:
docker run -d -p 5000:5000 -v $HOME/registry:/var/lib/registry registry:2
This way it mounts the registry folder from your home directory to /var/lib/registry inside the container, which is where the registry stores its files by default.
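A quick way to verify the mount is working (a sketch; it assumes at least one image has been pushed, reusing the tag from the question): after a push, the repository should show up under the host directory:

docker push localhost:5000/kp/testing:1.0.0
ls $HOME/registry/docker/registry/v2/repositories
# should list the pushed repository namespace, e.g. kp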
Related
What I mean to say is that I want to send GET/POST requests to an invalid URL (e.g. https://this-is-a-fake-url.com). I know it will give an error because the URL does not exist, but I want to know a way to make it return a 200 response code, so that if someone uses Wireshark or something to capture the API requests, they would see many requests all having return code 200, no matter whether the link is valid or not. Is it even possible? If so, please help :)
You can start a local server and add the fake hostname to your hosts file. More specifically, if you are on a Linux machine:
Run the following command; it will make this-is-a-fake-url.com resolve to your localhost:
echo '127.0.0.1 this-is-a-fake-url.com' | sudo tee -a /etc/hosts
# On Windows, append the entry to C:\Windows\System32\drivers\etc\hosts instead
Start a server on your local host
sudo python3 -m http.server
# Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
Access your fake host
curl http://this-is-a-fake-url.com:8000 -v
# * Trying 127.0.0.1:8000...
# * TCP_NODELAY set
# * Connected to this-is-a-fake-url.com (127.0.0.1) port 8000 (#0)
# > GET / HTTP/1.1
# > Host: this-is-a-fake-url.com:8000
# > User-Agent: curl/7.68.0
# > Accept: */*
# >
# * Mark bundle as not supporting multiuse
# * HTTP 1.0, assume close after body
# < HTTP/1.0 200 OK
# ...
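One caveat: python3 -m http.server returns 404 for paths that don't exist and doesn't handle POST, so not every captured request will show 200. A rough sketch of an always-200 responder using netcat instead (flag syntax differs between netcat variants; this assumes the OpenBSD one, which exits after each connection, hence the loop):

while true; do
  printf 'HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nOK' | nc -l 8000
done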
I wanted to learn Docker Swarm, but I can't get it working on a fresh Azure instance with Debian 10.1 (edit: I also tried Debian 9.11).
I've isolated the problem only to the following commands, which should give me a simple nginx welcome page on port 9000:
docker swarm init
docker service create --name nginx -p 9000:80 nginx
curl -vvv localhost:9000
But actually curl hangs and the service does not respond:
* Expire in 1 ms for 1 (transfer 0x5574dbd88f50)
* Expire in 1 ms for 1 (transfer 0x5574dbd88f50)
* Expire in 1 ms for 1 (transfer 0x5574dbd88f50)
* Trying ::1...
* TCP_NODELAY set
* Expire in 149998 ms for 3 (transfer 0x5574dbd88f50)
* Expire in 200 ms for 4 (transfer 0x5574dbd88f50)
* Connected to localhost (::1) port 9000 (#0)
> GET / HTTP/1.1
> Host: localhost:9000
> User-Agent: curl/7.64.0
> Accept: */*
>
Running nginx with docker run on the machine works.
Running the above commands on my Windows machine with Docker also works.
But as soon as I'm using docker stack deploy or docker service create I can't connect to the exposed ports.
Does this have something to do with Debian? My setup? Did I miss some configuration? What can I do to investigate this problem?
Docker version is 19.03.4
It may be that curl is using IPv6 and Nginx isn't configured for it. Try:
curl -vvv 127.0.0.1:9000
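You can also check whether your resolver returns ::1 for localhost first, which would explain curl preferring IPv6:

getent hosts localhost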
Use the command docker service ls to list the services; it has a column showing the state of your service. If the service is listed, the next step is to find out whether any task is failing: run docker service ps nginx and check the Current state column. If everything looks healthy there, check the nginx logs with docker service logs -f nginx. Show us that, and we can continue helping.
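In sequence, the checks look like this:

docker service ls              # is the service listed and are its replicas up?
docker service ps nginx        # per-task state; look at the Current state column
docker service logs -f nginx   # follow the service logs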
I ran into a similar problem with curl hanging for Docker Swarm. I initially thought it was an issue with Traefik; however, I could reproduce it with a simple httpd container as well, so it is probably related to IPv6 being selected by curl. Here are some options that worked:
Adding the -4 option to curl to force IPv4. This helped, but was intermittent.
Adding net.ipv6.conf.all.disable_ipv6 = 1 and net.ipv6.conf.default.disable_ipv6 = 1 to /etc/sysctl.conf (e.g. via sudo vi) and reloading it via sudo sysctl -p. Again intermittent.
Creating an overlay network with docker network create -d overlay dcm-net and, in the docker-compose.yml file, marking this network external as follows:
networks:
  dcm-net:
    external: true
Option 3 worked the best for now; a CLI sketch of it is shown below.
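For reference, a minimal sketch of option 3 without docker-compose, attaching a service to the overlay network directly via the CLI (the service name and image follow the nginx example from the question):

docker network create -d overlay dcm-net
docker service create --name nginx --network dcm-net -p 9000:80 nginx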
I have two VMs setup to learn Puppet - one running puppetserver as my master and another as just a Puppet agent for DNS.
The VMs are running in Hyper-V (Windows 10) and are on the same virtual switch.
After setting up the internal DNS server using this Puppet module - https://github.com/ajjahn/puppet-dns - my second (DNS) VM can no longer connect to the puppetserver. I receive this error on puppet agent -t runs:
Error: Could not request certificate: No route to host - connect(2) for "puppet.myname.homelab" port 8140
On the puppetserver I have reissued its own agent cert, which changed the cert from puppet <sha-omitted> to "puppet.myname.homelab" <sha omitted> (alt names: "DNS:puppet", "DNS:puppet.myname.homelab")
Running puppet agent -t on the puppetserver to update itself works fine post cert renewal.
I am able to successfully perform an nslookup on any of the hosts using the DNS server, and they do resolve with the new myname.homelab domain.
I still have DHCP enabled on my home router, but I have it set to be the second nameserver in /etc/resolv.conf on both VMs:
search myname.homelab
nameserver 192.168.1.107
nameserver 192.168.1.1
I am running Ubuntu 16.04 and Puppet 4 on both VMs. I have allowed port 8140 in UFW on both VMs, and have even tried disabling UFW with no luck.
I'm still learning Puppet and am a novice to networking, so any suggestions on what else to try and to point me in the right direction would be appreciated.
Thanks!
I slept on it and realized this morning that my router had reassigned my puppetserver to a new IP, so the DNS A record for it was wrong, even though it was manually assigned in the router's DHCP settings.
Correcting that did the trick and now everything is working.
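A quick way to confirm the A record matches the server's actual IP again (the hostname and DNS server IP are the ones from the question):

dig +short puppet.myname.homelab @192.168.1.107
# compare with the address reported on the puppetserver by: ip addr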
Same issue but another cause: the firewall on the Puppet server blocked port 8140. This can be checked on the client as follows:
$ curl -k -I https://puppet:8140
curl: (7) couldn't connect to host
After disabling the firewall on the server (e.g. systemctl stop firewalld):
$ curl -k -I https://puppet:8140
HTTP/1.1 404 Not Found
Date: Thu, 24 Oct 2019 11:27:26 GMT
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html; charset=ISO-8859-1
Content-Length: 278
Server: Jetty(9.2.z-SNAPSHOT)
which is the expected output; the puppet agent also runs as expected afterwards.
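Rather than disabling the firewall entirely, a narrower fix is to open just the Puppet port (a sketch, assuming firewalld):

sudo firewall-cmd --permanent --add-port=8140/tcp
sudo firewall-cmd --reload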
I am running CoreOS in EC2.
I have a Node.js API Docker image and am running it on a couple of ports (25001 and 25002). When I curl them, I see a proper response.
My intent is to have an HAProxy in front of these (running on port 25000) which will load balance between the two.
So here are the steps that I took:
Dockerfile for HAProxy:
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
haproxy.cfg :
global
    # daemon
    maxconn 256
    log /dev/log local0

defaults
    mode http
    log global
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:25000
    default_backend node_api

backend node_api
    mode http
    balance roundrobin
    server api1 localhost:25001
    server api2 localhost:25002
Result:
When I run curl against the individual services, they work:
curl -i localhost:25001/ping
HTTP/1.1 200 OK
X-Powered-By: Express
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=utf-8
Content-Length: 68
ETag: W/"44-351401c7"
Date: Sat, 06 Jun 2015 17:22:09 GMT
Connection: keep-alive
{"error":0,"msg":"loc receiver is alive and ready for data capture"}
The same works for 25002.
But when I run it against 25000, I get a timeout error like the one below:
curl -i localhost:25000/ping
HTTP/1.0 504 Gateway Time-out
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
I am wondering what I am doing wrong here. Any help would be appreciated...
When you tell HAProxy that the back-end server is located at
server api1 localhost:25001
you're giving an address relative to the HAProxy container. But your Node servers aren't running in that container, so there's nobody at localhost.
You've got a few options here.
You could use the --link option for docker run to connect HAProxy to your two back-ends.
You could use the --net=host option for docker run and then your servers can find each other at localhost
You could provide HAProxy the address of your host as the back-end address
The first option is the most container-y, but the performance of Docker's bridged network is poor at high loads. The second option is good as long as you don't mind that you're letting everything break out of its container when it comes to the network. The third is kludgey but doesn't have the other two problems.
Docker's article on networking has more details.
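For example, a rough sketch of the first (--link) option; the image and container names here are made up, and the haproxy.cfg backend lines would then point at the linked names rather than localhost:

docker run -d --name api1 my-node-api
docker run -d --name api2 my-node-api
docker run -d --name haproxy --link api1 --link api2 -p 25000:25000 my-haproxy

The backend would then look something like this, using whatever port the Node app listens on inside its container:

backend node_api
    mode http
    balance roundrobin
    server api1 api1:25001
    server api2 api2:25002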
I've installed CouchDB on my vagrant 0.9.0 box that is running CentOS 6.2.
In Vagrantfile I've added config.vm.forward_port 5984, 5985.
After reloading Vagrant, I attempted to curl the address (curl -v localhost:5985) with poor results.
* About to connect() to localhost port 5985 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 5985 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
> Host: localhost:5985
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
* Closing connection #0
I get the feeling that port forwarding isn't working properly. At first I thought it might have something to do with iptables, so I disabled it, but, alas, the results did not improve.
Been beating my head against this for days now. Would greatly appreciate some assistance.
It's quite likely that your CouchDB is listening on address 127.0.0.1 of the virtual machine (not of the physical machine). This is the default for CouchDB. Do you have the following in local.ini?
[httpd]
bind_address = 0.0.0.0
After restarting CouchDB, check with netstat on the virtual machine whether the change took effect:
sudo netstat -tlnp |grep :5984
Then check that CouchDB is running fine from the virtual machine:
curl http://127.0.0.1:5984/
If you don't see {"couchdb":"Welcome","version":"1.1.1"}, check the logs for error messages. It may be some permissions problem.
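For example (the log location varies by install; this path is a common default for package installs of CouchDB 1.x):

sudo tail -n 50 /var/log/couchdb/couch.log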
How have you installed CouchDB?
In my case, the solution to a very similar problem was much more obvious: coming from Ubuntu, I didn't expect a firewall to be running on the CentOS box.
This will disable it:
sudo service iptables stop
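Note that this only stops iptables until the next reboot; on CentOS 6 you can also keep it from starting again with:

sudo chkconfig iptables off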
Thanks to this blog!