Unable to connect to the discovered orderer orderer0.example.com:7050 - hyperledger-fabric

I am unable to invoke a transaction. I am getting the error below:
Unable to connect to the discovered orderer orderer0.example.com:7050
66f6b9d9d7c0 hyperledger/fabric-orderer:2.1 "orderer" About an hour ago Up About an hour 0.0.0.0:7050->7050/tcp, :::7050->7050/tcp, 0.0.0.0:8443->8443/tcp, :::8443->8443/tcp orderer.example.com
cacd16bca285 hyperledger/fabric-orderer:2.1 "orderer" About an hour ago Up About an hour 7050/tcp, 0.0.0.0:8050->8050/tcp, :::8050->8050/tcp, 0.0.0.0:8444->8443/tcp, :::8444->8443/tcp orderer2.example.com
8ba79e9b4d95 hyperledger/fabric-orderer:2.1 "orderer" About an hour ago Up About an hour 7050/tcp, 0.0.0.0:9050->9050/tcp, :::9050->9050/tcp, 0.0.0.0:8445->8443/tcp, :::8445->8443/tcp orderer3.example.com
This is what my docker containers look like. What am I missing?
I can see port 7050 listed for all three orderers. I tried to change crypto-config.yaml, but the network crashed.
I tried to add ports below each host:
Specs:
  - Hostname: orderer
    SANS:
      - "localhost"
      - "127.0.0.1"
  - Hostname: orderer2
    SANS:
      - "localhost"
      - "127.0.0.1"
  - Hostname: orderer3
    SANS:
      - "localhost"
      - "127.0.0.1"
EDIT:
I saw a response to a similar issue:
What I suspect has happened is that, even though you have changed the port mappings between your local machine and the Docker network, the orderer is still listening on port 7050 within your Docker network.
The discovery.asLocalhost connection option is there to support the scenario where the blockchain network is running within a Docker network on the client's local machine, so it causes any discovered hostnames to be treated as localhost, but it leaves the discovered port numbers unchanged. So, when using the discovery.asLocalhost option, the port numbers that nodes are listening on within the Docker network must be mapped to the same port numbers on the local machine.
If you want to change the port numbers then you need to change them on the actual nodes themselves, not just in your Docker network mappings.
Since I am new to blockchain, I could not understand this response. Should I add orderer.example.com to /etc/hosts?

I feel that you can try the following steps:
First of all, there is no orderer0.example.com among your docker container names, so use the correct name.
Secondly, in configtx.yaml, try to use the following convention under OrdererOrg:
OrdererEndpoints:
  - orderer.example.com:7050
  - orderer2.example.com:8050
  - orderer3.example.com:9050
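For endpoints like these to be reachable, each orderer must actually listen on its advertised port inside the Docker network, not merely have 7050 remapped on the host. A minimal docker-compose sketch for the second orderer (service name and values are assumptions based on the question's naming, not taken from your files):

```yaml
services:
  orderer2.example.com:
    image: hyperledger/fabric-orderer:2.1
    environment:
      # assumption: make the orderer itself listen on 8050 inside the container
      - ORDERER_GENERAL_LISTENPORT=8050
    ports:
      - "8050:8050"   # host port equals the container listen port, as discovery.asLocalhost requires
```

With this, the discovered endpoint orderer2.example.com:8050 resolves both inside the Docker network and, via asLocalhost, from the client machine.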
If the above solutions don't work, we will need more info to solve the issue.

Related

Conflicting port 80 on nginx and apache Docker containers

Is there any way to stop two docker containers on the same machine from conflicting when they both use the same internal (container) port 80?
I have one nginx container that I reach on port 9000 from the browser.
916aa1f58ca3 nginx:1.21.6-alpine "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 0.0.0.0:9000->80/tcp
and a second Apache container that is mapped to external port 88.
4a3ba2c4847b apache-master_php "docker-php-entrypoi…" 3 hours ago Up 3 seconds 80/tcp, 0.0.0.0:88->88
They both run fine together, except when I send a request to the Apache container: accessing Apache's index.html on port 88 in the browser crashes the machine.
Is there a way around this?
docker-compose.yml looks like this:
apache:
  ports:
    - 88:88
  volumes:
    - ./src:/var/www/html/
and for nginx:
nginx:
  ports:
    - "$SENTRY_BIND:80/tcp"
  image: "nginx:1.21.6-alpine"
A conflict cannot occur between the containers themselves: docker assigns both containers to a private network created specifically for these services, and each container gets its own private IP.
The docker configuration should be as below, assuming Apache is running with its default configuration (listening on port 80 inside the container):
apache:
  ports:
    - 88:80
  volumes:
    - ./src:/var/www/html/
Your configuration tells Docker to map host port 88 to container port 88, where nothing is listening; the mapping must target container port 80.
I think you should read the docs
https://docs.docker.com/network/
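Putting it together, the whole compose file could look like this sketch (image names taken from the question's docker ps output; the key point is that each published host port targets the port the service actually listens on inside its container):

```yaml
services:
  nginx:
    image: "nginx:1.21.6-alpine"
    ports:
      - "9000:80"   # host 9000 -> nginx listening on 80 inside the container
  apache:
    image: apache-master_php   # image name as shown in the question
    ports:
      - "88:80"     # host 88 -> Apache listening on 80 inside the container
    volumes:
      - ./src:/var/www/html/
```

There is no conflict: both containers listen on port 80 internally, but each has its own IP on the compose network, and the published host ports differ.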

My express https server works locally but not in a docker container

I currently have two docker containers running:
ab1ae510f069 471b8de074c4 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:3001->3001/tcp hopeful_bassi
2d4797b77fbf 5985005576a6 "nginx -g 'daemon of…" 25 minutes ago Up 25 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp wizardly_cori
One is my client and the other (port 3001) is my server.
The issue I'm facing is that I've just added SSL to my site, but now I can't access the server. My theory is that the server needs both port 443 and port 3001 open, but I can't have port 443 open on both containers. I can run the server via HTTP locally, so I think that also points to this conclusion.
Is there anything I can do to have both using https? The client won't talk to the server if the server uses http (for obvious reasons).
Edit:
I'm now not sure if it is to do with port 443: I killed my client and tried to run just the server, but it still gave me connection refused:
docker run -dit --restart unless-stopped -p 3001:3001 -p 443:443 471b8de074c4
If you open port 443 for a docker container, a docker-managed port forwarder is started. This (in any case, highly sub-optimal) tool forwards the TCP requests arriving at your host's port 443 to the container.
If you wanted two containers to use port 443, docker would have to start this port forwarder twice on the same port. As your docker output shows, that can only happen once. By digging (deeply) into the (nearly non-existent) docker logs, you may also find the relevant error message.
The problem you've found is not docker-dependent; it is the same problem you would face in a container-less environment: you simply can't start multiple service processes listening on the same TCP port.
The solution also comes from the world before containers.
You need a central proxy service. It would listen on your port 443 and forward each request, depending on the requested virtual host, to the corresponding container.
Look around in the container ecosystem; it is nearly certain that such an https reverse proxy image already exists. This third container will forward the requests where you want; of course, you will need to configure it.
From that moment on, you don't even need https inside your containers (although you can keep it if you want), which helps a lot in production, correctly certified SSL environments. Only your proxy needs the certificates. So:
           /---> container A (tcp:80)
tcp:443 -- proxy
           \---> container B (tcp:80)
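As a concrete sketch of this layout, a compose file could run nginx as the TLS-terminating proxy in front of both containers (all service names, paths, and the upstream config are assumptions, not taken from the question):

```yaml
services:
  proxy:
    image: nginx:alpine
    ports:
      - "443:443"           # only the proxy binds 443 on the host
    volumes:
      # assumed nginx config: one server block per virtual host,
      # each proxy_pass-ing to client:80 or server:3001
      - ./proxy.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certs:/etc/nginx/certs:ro   # certificates live only here
  client:
    image: my-client        # hypothetical image name
  server:
    image: my-server        # hypothetical image name
```

The client and server containers no longer publish 443 at all; the proxy reaches them over the internal compose network.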

Connection timeout when installing chaincode using fabric-sdk-go

I have a problem where there is always a gRPCs timeout when installing the chaincode using fabric-sdk-go. The gRPCs request is made from the local machine to its docker containers.
ErrorMsg:
lscc.getinstalledchaincodes failed: SendProposal failed: Transaction processing for endorser [localhost:7051]: Endorser Client Status Code: (2) CONNECTION_FAILED. Description: dialing connection timed out [localhost:7051]
ENV:
Mac OSX
docker version: 18.03.1-ce
docker-compose version 1.21.1, build 5a3f1a3
fabric-sdk-go: master
The local fabric network is set up by the official fabric-ca example.
docker-compose.yaml: Gist
local network-config.yaml: Gist
client go app: Gist
Is there anything wrong with my network-config.yaml?
What I've tried:
Tried to disable CORE_PEER_TLS_CLIENTAUTHREQUIRED in docker-compose.yaml, failed.
Edited the /etc/hosts file with the line 127.0.0.1 peer1-xiaoyudian..., failed.
Increased peer.timeout.connections and other timeout options in network-config.yaml, failed.
Increased grpcOptions.keep-alive-time in network-config.yaml, failed.
Changed the host in peers.xxxx.url from localhost to the domain in network-config.yaml, failed.
Added the entityMatchers in network-config.yaml, failed.
Everything failed so far...
Answer:
Someone from the rocket.chat told me to:
run: export GRPC_GO_LOG_SEVERITY_LEVEL=error
run: export GRPC_GO_LOG_VERBOSITY_LEVEL=2
and, in the client code, add this line:
grpclog.SetLogger(logger)
The log then says it is a certificate issue when handshaking with the peers.
Refer to https://github.com/hyperledger/fabric-sdk-go/tree/master/test/fixtures/config/overrides
for how URLs are overridden to use localhost. In your case, you have to use local_entity_matchers.yaml and local_orderers_peers_ca.yaml combined, as in the samples provided.
One more thing I noticed in your network-config.yaml: the mapped host name is the same as the actual peer name, so the entity matcher doesn't kick in here. Refer to the entity matchers used in the sample given above.
You could try changing localhost to the domain in the network config, e.g. for the peer: localhost -> peer1-xiaoyudian..., for the orderer: localhost -> orderer1-themis..., and the same for the CA, and use an entity matcher to map the peer, orderer, and CA names to your IP address.
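For reference, an entityMatchers section in a fabric-sdk-go network config generally looks like the sketch below (the hostnames and ports here are placeholders, not the real ones from the question; note that mappedHost must name an entry that actually exists under peers/orderers in your config):

```yaml
entityMatchers:
  peer:
    - pattern: (\w+).org1.example.com:(\d+)          # matches the discovered peer URL
      urlSubstitutionExp: localhost:7051             # rewrite to the locally mapped port
      sslTargetOverrideUrlSubstitutionExp: peer0.org1.example.com
      mappedHost: peer0.org1.example.com             # config entry to take settings from
  orderer:
    - pattern: (\w+).example.com:(\d+)
      urlSubstitutionExp: localhost:7050
      sslTargetOverrideUrlSubstitutionExp: orderer.example.com
      mappedHost: orderer.example.com
```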

Resolving subdomain.localhost doesn't work with docker swarm

What I want to achieve:
docker swarm on localhost
dockerized reverse proxy which would forward subdomain.domain to container with app
container with app
What I have done:
changed /etc/hosts that now looks like:
127.0.0.1 localhost
127.0.0.1 subdomain.localhost
set up traefik to forward word.beluga to a specific container
What is the problem:
I can't get to the container via the subdomain. It works if I use the port, though.
curl gives different results for the subdomain and the port.
The question:
what is the problem, and why?
how can I debug it and find out whether the problem is docker- or network-based? (how do I check whether the request even reached my container?)
I'll add that I also tried this on docker-machine (VirtualBox), but it wasn't working, so I moved to localhost; as you can see, that didn't help much.
I am losing hope, so any hint would be appreciated. Thank you in advance
There’s no such thing as subdomains of localhost. By near-universal convention, localhost resolves to the IPv4 address 127.0.0.1 and the IPv6 address ::1.
You can still test virtual hosts with docker, but you will have to use the port:
curl -H "Host: sub.localhost" http://localhost:8000
Late to respond, but I was able to achieve this using Traefik 2.x routers, like so:
labels:
  - "traefik.http.routers.<unique name>.rule=Host(`subdomain.localhost`)"
in the docker-compose file:
version: '3.9'
services:
  app:
    image: myapp:latest
    labels:
      - "traefik.http.routers.myapp.rule=Host(`myapp.localhost`)"
  reverse-proxy:
    image: traefik:v2.4
    command: --api.insecure=true --providers.docker
    ports:
      - "80:80"
      # The Web UI (enabled by --api.insecure=true)
      - "9000:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
I think the reason why it works is that Traefik intercepts everything on localhost and only then applies the rules, so this is a Traefik-specific answer.

How to use port forwarding to connect to docker container using DNS name

I have two redis containers running on the same machine m1.
container1 has port mapping 6379 to 6400
docker run -d -p 6379:6400 myredisimage1
container2 has port mapping 6379 to 7500
docker run -d -p 6379:7500 myredisimage2
I am looking for a solution where another machine m2 can communicate with machine m1 using different DNS names but the same port number:
redis.container1.com:6379
redis.container2.com:6379
and I would like to redirect that request to proper containers inside machine m1.
Is it possible to achieve this?
This is possible, but hacky. First, ask yourself if you really need to do this, or if you can get away with just using different ports for the containers. Anyway, if you do absolutely need to do this, here's how:
Each docker container gets its own IP address, accessible from the host machine. These are assigned at run-time, but you can look them up with docker inspect $CONTAINER_ID, for example:
docker inspect e804af2472ca
[
{
"Id": "e804af2472ca605dec0035f45d3bd05c1fbccee31e6c09381b0c16657378932f",
"Created": "2016-02-02T21:34:12.49059198Z",
...
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.6",
"IPPrefixLen": 16,
"IPv6Gateway": "",
...
}
}
]
In this case, we know this container's IP address as seen from the host is 172.17.0.6 (the IPAddress field; 172.17.0.1 is the gateway). That IP address is fully usable from the host, so you can have something proxy redis.container1.com to it and redis.container2.com to your other container's IP. You'd need to reload the proxy configuration every time the box comes up, so this would definitely not be ideal, but it should work.
Again, my overall recommendation is: don't do this.
I'm not sure if I'm getting you right.
But how could you start two containers that both work on the same port?
It seems to me that this should be dealt with by using a load balancer. Try HAProxy and set up two ACLs, one for each domain name.
I would go with something like this: (Using docker-compose)
Docker Compose setup to deploy the docker images:
redis-1:
  container_name: redis-1
  image: myredis
  restart: always
  expose:
    - "6400"
redis-2:
  container_name: redis-2
  image: myredis
  restart: always
  expose:
    - "6400"
haproxy:
  container_name: haproxy
  image: million12/haproxy
  restart: always
  command: -n 500
  ports:
    - "6379:6379"
  links:
    - redis-1:redis.server.one
    - redis-2:redis.server.two
  volumes:
    - /path/to/my/haproxy.cfg:/etc/haproxy/haproxy.cfg
And then custom haproxy config:
global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  # Default SSL material locations
  ca-base /etc/ssl/certs
  crt-base /etc/ssl/private
  # Default ciphers to use on SSL-enabled listening sockets.
  ssl-default-bind-ciphers AES256+EECDH:AES256+EDH:AES128+EDH:EECDH:!aNULL:!eNULL:!LOW:!DES:!3DES:!RC4
  spread-checks 4
  tune.maxrewrite 1024
  tune.ssl.default-dh-param 2048

defaults
  mode http
  balance roundrobin
  option dontlognull
  option dontlog-normal
  option redispatch
  maxconn 5000
  timeout connect 10s
  timeout client 25s
  timeout server 25s
  timeout queue 30s
  timeout http-request 10s
  timeout http-keep-alive 30s
  # Stats
  stats enable
  stats refresh 30s
  stats hide-version

frontend http-in
  bind *:6379
  mode tcp
  acl is_redis1 hdr_end(host) -i redis.server.one
  acl is_redis2 hdr_end(host) -i redis.server.two
  use_backend redis1 if is_redis1
  use_backend redis2 if is_redis2
  default_backend redis1

backend redis1
  server r1 redis.server.one:6379

backend redis2
  server r2 redis.server.two:6379
