I am running CoreOS in EC2.
I have a Node.js API Docker image and am running it on a couple of ports (25001 and 25002). When I curl them, I see the proper response.
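For context, the two instances are started along these lines (the image name and container-internal port are hypothetical):
docker run -d -p 25001:8080 my-node-api
docker run -d -p 25002:8080 my-node-api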
My intent is to have an HAProxy in front of these (running on port 25000) which will load balance between the two.
So here are the steps I took:
Dockerfile for HAProxy:
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
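The image is then built and started along these lines (tag name hypothetical), publishing the frontend port:
docker build -t my-haproxy .
docker run -d -p 25000:25000 my-haproxy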
haproxy.cfg:
global
    # daemon
    maxconn 256
    log /dev/log local0
defaults
    mode http
    log global
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
frontend http-in
    bind *:25000
    default_backend node_api
backend node_api
    mode http
    balance roundrobin
    server api1 localhost:25001
    server api2 localhost:25002
Result:
When I curl the individual services, they work:
curl -i localhost:25001/ping
HTTP/1.1 200 OK
X-Powered-By: Express
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=utf-8
Content-Length: 68
ETag: W/"44-351401c7"
Date: Sat, 06 Jun 2015 17:22:09 GMT
Connection: keep-alive
{"error":0,"msg":"loc receiver is alive and ready for data capture"}
The same works for 25002.
But when I curl 25000, I get a timeout error like the one below:
curl -i localhost:25000/ping
HTTP/1.0 504 Gateway Time-out
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
I am wondering: what am I doing wrong here? Any help would be appreciated.
When you tell HAProxy that the back-end server is located at
server api1 localhost:25001
you're giving an address relative to the HAProxy container. But your Node servers aren't running in that container, so there's nobody at localhost.
You've got a few options here.
You could use the --link option for docker run to connect HAProxy to your two back-ends.
You could use the --net=host option for docker run and then your servers can find each other at localhost
You could provide HAProxy the address of your host as the back-end address
The first option is the most container-y, but the performance of Docker's bridged network is poor at high loads. The second option is good as long as you don't mind that you're letting everything break out of its container when it comes to the network. The third is kludgey but doesn't have the other two problems.
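For instance, a sketch of the first option (image names hypothetical, assuming the Node app listens on port 8080 inside its container):
docker run -d --name api1 my-node-api
docker run -d --name api2 my-node-api
docker run -d --link api1:api1 --link api2:api2 -p 25000:25000 my-haproxy
with the backend lines in haproxy.cfg pointing at the link aliases instead of localhost:
server api1 api1:8080
server api2 api2:8080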
Docker's article on networking has more details.
Related
I have certain Docker images on a Docker server hosted in the corporate network. The Docker machine works fine and is able to execute all Docker commands.
I have created the ACR repository and now want to push these Docker images to ACR.
ACR is reachable from the Docker machine:
root@artifactory:/home/administrator# curl -Is https://fo25.azurecr.io/v2/
HTTP/1.1 200 Connection established
HTTP/1.1 401 Unauthorized
Server: openresty
Date: Sat, 04 Apr 2020 12:21:29 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 149
Connection: keep-alive
Access-Control-Expose-Headers: Docker-Content-Digest
Access-Control-Expose-Headers: WWW-Authenticate
Access-Control-Expose-Headers: Link
Access-Control-Expose-Headers: X-Ms-Correlation-Request-Id
Docker-Distribution-Api-Version: registry/2.0
Strict-Transport-Security: max-age=31536000; includeSubDomains
Www-Authenticate: Bearer realm="https://fo25.azurecr.io/oauth2/token",service="fo25.azurecr.io"
X-Content-Type-Options: nosniff
X-Ms-Correlation-Request-Id: 354950c2-a4d8-40ac-9b0b-d6f197572284
Strict-Transport-Security: max-age=31536000; includeSubDomains
Still, I am not able to push these images to ACR. Here is the command I ran and the error I faced:
root@artifactory:/home/administrator# cat pass | docker login --username fo25 --password-stdin https://fo25.azurecr.io/v2/
Error response from daemon: Get https://fo25.azurecr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I am not sure where the issue is. I have checked everything: username, access keys, etc.
The URL is also reachable, as we are getting a 200 response code.
Do I need to increase the timeout period?
Hope you have followed this link.
I have seen a similar problem in a corporate network; try a different machine or a different network.
It's resolved. It was solely an issue of proxy configuration: the proxy URL was pointing to the wrong port for the HTTPS connection.
After correcting it to the right port, docker login was successful.
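For anyone hitting the same thing: when the Docker daemon sits behind a corporate proxy, the proxy is typically configured via a systemd drop-in; a sketch, with the proxy host and port hypothetical:
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.corp.example:8080"
Environment="HTTPS_PROXY=http://proxy.corp.example:8080"
Environment="NO_PROXY=localhost,127.0.0.1"
followed by:
systemctl daemon-reload
systemctl restart docker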
I am deploying a Docker container running an IceCast 2 server to an Azure App Service.
If I run the container locally, I can connect a source (in my case a Pure Data patch) to a live.mp3 mountpoint without a problem. However, when I try to connect from the same source to the IceCast server running on Azure, I get a login failed message with an Azure-specific 404 error:
login failed!
server answered : HTTP/1.1 404 Not Found
Content-Type: text/html
Server: Microsoft-IIS/10.0
Date: Thu, 02 Apr 2020 18:02:05 GMT
Connection: close
Content-Length: 2778
I can reach the IceCast web GUI over a browser without a problem (http://...azurewebsites.net/).
Does anyone know what could be wrong? I am exposing port 8000 on my IceCast container, which is bound to port 80 of the Azure App Service.
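For reference, that binding is normally declared on App Service for Containers via the WEBSITES_PORT application setting; a sketch, with app and resource group names hypothetical:
az webapp config appsettings set --name my-icecast-app --resource-group my-rg --settings WEBSITES_PORT=8000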
I have a Python 3 Flask app running in an ECS cluster. The Flask app is configured to run in SSL mode. The app can't be accessed via the ALB CNAME; the connection is refused, as seen here:
curl -Il https://tek-app.example.com/health
curl: (7) Failed to connect to tek-app.example.com port 443: Connection refused
When the ALB is hit directly (ignoring the SSL cert mismatch with -k), it works, as seen here:
curl -Il -k https://tek-w-appli-1234.eu-west-1.elb.amazonaws.com/health
HTTP/2 200
date: Sun, 24 Feb 2019 14:49:27 GMT
content-type: text/html; charset=utf-8
content-length: 9
server: Werkzeug/0.14.1 Python/3.7.2
I understand the main recommendation is to run it behind an Nginx or Apache proxy and to set the X-Forwarded-* headers via their configs, but I feel this is over-engineering the solution.
I've also tried enabling the following in the app:
from flask import Flask
from werkzeug.contrib.fixers import ProxyFix  # moved to werkzeug.middleware.proxy_fix in Werkzeug 1.0+
...
app = Flask(__name__)
# trust the X-Forwarded-* headers set by the ALB
app.wsgi_app = ProxyFix(app.wsgi_app)
...
And this fix now produces the correct source IPs in the CloudWatch logs, but doesn't allow connections via the ALB CNAME.
Is there something simple that I'm missing here?
Reply to first answer
Thank you - the CNAME is pointing to the correct ALB. I ran into a similar issue two weeks back with an Apache server, and the fix was to ensure X-Forwarded-Proto was in use in the Apache vhosts.conf file. So I'm thinking this may be something similar.
I did it again: while developing locally I had edited my /etc/hosts file to add a local entry to play with. When the Flask app was later pushed to the cloud and tested from the same desktop, it was still resolving the local entry instead of the public one, hence the connection refused. With the local entry removed, all is now working.
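In other words, a leftover /etc/hosts line like this (the address is illustrative) was shadowing the public DNS record:
127.0.0.1   tek-app.example.com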
Summary
I've set up a registry container on a Debian host running on the default port 5000.
The Debian host runs as a virtual machine on top of a VMware system. Port 5000 is open.
docker run -d -p 5000:5000 --name registry --restart=always registry:2
I then tagged an image for pushing to the registry
docker tag test-image localhost:5000/kp/testing:1.0.0
and tried pushing it
docker push localhost:5000/kp/testing:1.0.0
but it fails with Get http://localhost:5000/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers).
The output from the registry container comes up empty, as if the request never reaches it.
What I tried
I then tried to cURL the _catalog endpoint; it gets stuck waiting for the response headers, while the connection itself seems to be successful.
curl -v http://localhost:5000/v2/_catalog
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 5000 (#0)
> GET /v2/_catalog HTTP/1.1
> Host: localhost:5000
> User-Agent: curl/7.52.1
> Accept: */*
>
I also tried creating a hostname for the registry on the host machine and setting that as the registry connection address, but that ended in the same result.
In addition, I tried adding the hostname to the insecure-registries array in /etc/docker/daemon.json, but that still ends with the same error.
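For reference, that daemon.json attempt looked along these lines (the hostname is hypothetical, and the Docker daemon has to be restarted afterwards):
{
  "insecure-registries": ["registry.local:5000"]
}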
I then tried setting it up with TLS using a self-signed certificate. Again, the connection seems to be established in cURL but no response headers are received.
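For the record, running the registry with a self-signed certificate usually looks like this, using the registry's documented TLS settings (certificate paths hypothetical):
docker run -d -p 5000:5000 --name registry \
  -v "$(pwd)/certs:/certs" \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2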
Works remotely
Out of curiosity, I tried accessing it remotely so I cURL'ed the same address with the Debian host IP and it works!
curl -v http://<host-ip>:5000/v2/_catalog
* Trying <host-ip>...
* TCP_NODELAY set
* Connected to <host-ip> (<host-ip>) port 5000 (#0)
> GET /v2/_catalog HTTP/1.1
> Host: <host-ip>:5000
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< X-Content-Type-Options: nosniff
< Date: Tue, 09 Jan 2018 07:30:30 GMT
< Content-Length: 20
<
{"repositories":[]}
To the question
It seems really odd for it not to work locally on Debian, as I've set it up using localhost on both a macOS and an Arch Linux machine. I don't think the VMware system could be interfering with local connectivity, especially if it works remotely?
Have I missed something which is preventing the registry to be accessible locally?
Don't you need a volume mount (-v) for storing files in a local directory, like:
docker run -d -p 5000:5000 -v $HOME/registry:/var/lib/registry registry:2
This way it maps the registry folder in your home directory to /var/lib/registry in the container, which is where the registry stores its files by default.
I have two VMs setup to learn Puppet - one running puppetserver as my master and another as just a Puppet agent for DNS.
The VMs are running in Hyper-V (Windows 10) and are on the same virtual switch.
After setting up the internal DNS server using this Puppet module - https://github.com/ajjahn/puppet-dns - my second (DNS) VM can no longer connect to the puppetserver. I receive this error on puppet agent -t runs:
Error: Could not request certificate: No route to host - connect(2) for "puppet.myname.homelab" port 8140
On the puppetserver I have reissued its own agent cert, which changed the cert from "puppet" <sha omitted> to "puppet.myname.homelab" <sha omitted> (alt names: "DNS:puppet", "DNS:puppet.myname.homelab")
Running puppet agent -t on the puppetserver to update itself works fine post cert renewal.
I am able to successfully perform an nslookup on any of the hosts using the DNS server, and they do resolve with the new myname.homelab domain.
I still have DHCP enabled on my home router, but I have it set to be the second nameserver in /etc/resolv.conf on both VMs:
search myname.homelab
nameserver 192.168.1.107
nameserver 192.168.1.1
I am running Ubuntu 16.04 and Puppet 4 on both VMs. I have allowed port 8140 in UFW on both VMs, and have even tried disabling UFW with no luck.
I'm still learning Puppet and am a novice to networking, so any suggestions on what else to try and to point me in the right direction would be appreciated.
Thanks!
I slept on it and realized this morning that my router had reassigned my Puppetserver to a new IP, so the DNS A record for it was wrong, even though it was manually assigned in the router's DHCP.
Correcting that did the trick and now everything is working.
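A quick way to catch this kind of drift is to query the A record directly against the DNS VM and compare it with the puppetserver's actual address, e.g.:
dig +short puppet.myname.homelab @192.168.1.107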
Same issue but another cause: the firewall on the Puppet server blocked port 8140. This can be checked on the client as follows:
$ curl -k -I https://puppet:8140
curl: (7) couldn't connect to host
After disabling the firewall on the server (e.g. systemctl stop firewalld):
$ curl -k -I https://puppet:8140
HTTP/1.1 404 Not Found
Date: Thu, 24 Oct 2019 11:27:26 GMT
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html; charset=ISO-8859-1
Content-Length: 278
Server: Jetty(9.2.z-SNAPSHOT)
which is the expected output, and also the puppet agent runs as expected.
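Rather than disabling the firewall outright, opening just the agent port is the more targeted fix; a sketch with firewalld:
# on the puppet server: allow the Puppet agent port permanently, then reload
firewall-cmd --permanent --add-port=8140/tcp
firewall-cmd --reload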