Minifabric problem with API server and Fabric 2.2.1: error joining the channel in a multi-host configuration - hyperledger-fabric

I had created a custom network with my organizations and my peers running 100% on a single host, which I interfaced with via an API server based on this: https://kctheservant.medium.com/rework-an-implementation-of-api-server-for-hyperledger-fabric-network-fabric-v2-2-a747884ce3dc The problem started when I switched to running the network on 2 hosts using Docker Swarm. When I join the channel from the second host I get the error: "unable to contact the endpoint".

So I switched to Minifabric, which promises easy use, and in fact the network was customized in a short time. There too it gave me an error when joining the channel from the second host, which I solved by setting the variable EXPOSE_ENDPOINT = true. The problem is that now I can no longer get my API server to work. What I did (as indicated in the README) was replace the contents of the "main.js" file with my server code and run the "apprun" command. This gives me an error if I leave the server listening on a port, while it succeeds if I comment out the last 2 lines of the code. But I have no way to query the server if there is no listening port.

Summarizing, my questions are:
How can I create an API server like that on Minifabric?
Alternatively, how can I solve the problem on plain Fabric (I can't find an EXPOSE_ENDPOINT variable to set)? The problem is probably the same as the one I had on Minifabric. Thanks to anyone who can help.
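For reference, in an Express-based API server like the one in the linked article, the "last 2 lines" are usually the listen call. Below is a minimal sketch only; the port number and the route are assumptions, not the original code:

// main.js - minimal sketch of an Express-based API server (structure assumed)
const express = require('express');
const app = express();

// example route that would eventually query the Fabric network (placeholder)
app.get('/health', (req, res) => res.json({ status: 'ok' }));

// the two lines typically commented out so that the apprun command completes:
const port = 3000; // assumed port
app.listen(port, () => console.log(`API server listening on port ${port}`));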

Related

Why is a Node.js 12 docker app connection to MongoDB 4 via the docker network giving a timeout while a connection via the public network works?

I'm seeing a problem I can't explain at all:
After upgrading a Meteor app to v1.9 and therefore Node.js 12, we also had to switch Docker containers to Node.js 12 based containers. In our case we use abernix/meteord:node-12-base (git).
After booting up the updated app we get a DB timeout in the app's Docker container:
/bundle/bundle/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoTimeoutError: Server selection timed out after 10000 ms
at Timeout._onTimeout (/bundle/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/sdam/topology.js:773:16)
at listOnTimeout (internal/timers.js:531:17)
at processTimers (internal/timers.js:475:7) {
name: 'MongoTimeoutError',
[Symbol(mongoErrorContextSymbol)]: {}
}
This happens with the following MONGO_URL:
❌ mongodb://root:OurPw@mongo-docker-alias:27017/meteor?authSource=admin
Funnily enough, when we expose port 27017 in the MongoDB container, the following MONGO_URL just works:
✔️ mongodb://root:OurPw@docker-host:27017/meteor?authSource=admin
Now I thought we were having a Docker problem, but if I attach to a bash inside the Node.js 12 meteord container, apt install the MongoDB shell and try to connect with:
✔️ mongo "mongodb://root:OurPw@mongo-docker-alias:27017/meteor?authSource=admin"
that also just works.
And now I'm left without a clue. I tried multiple MongoDB Docker images between v4.0 and 4.2.3, as well as Node.js 12.14 and 12.10. I also tried without MongoDB auth once, just to rule it out as the problem, but the outcome is always the same.
Any idea would be very much appreciated since I'd like to avoid having to connect via an exposed port and the docker host's name because that is prone to errors obviously...
Check the /etc/mongod.conf file for the network binding. You may need to allow it to respond on all network interfaces, as the network might be a different IP/subnet when exposed (or not), which might explain why it works in some scenarios.
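For example, the relevant section of mongod.conf could look like this (a sketch; the exact layout depends on the MongoDB image and version):

# /etc/mongod.conf - network settings (sketch)
net:
  port: 27017
  bindIp: 0.0.0.0   # listen on all interfaces instead of only 127.0.0.1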
tl;dr:
setting dns_search: '' for the backend in the docker-compose.yml (or via the docker CLI or other means) fixes the problem.
More information:
Docker seems to add any search directive from the host's /etc/resolv.conf to the container's /etc/resolv.conf by default. However, resolving something like mongo-docker-alias.actual-intranet-domain.tld is likely to be problematic, since the outside network and DNS have no knowledge of this subdomain. Actually, we found out that it still got resolved inside the container in our case, it just took a few seconds (vs. <1 ms normally). And since the backend tries to establish multiple DB connections, it always runs into the timeout.
Luckily, Docker's DNS search option allows you to deviate from the default behavior, including setting a blank value. Knowing the problem, another workaround should be to use Docker aliases with a dot in them, since then the search domains shouldn't be applied, but we haven't tried that.
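In a docker-compose file this could look roughly as follows (a sketch; the service names and image tags are placeholders, not the original configuration):

# docker-compose.yml - sketch
version: "3"
services:
  backend:
    image: abernix/meteord:node-12-base
    dns_search: ''   # blank value: don't inherit the host's DNS search domains
  mongo-docker-alias:
    image: mongo:4.2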
A few questions remain, but they are not so important. For example, why did this happen in our case only with the Meteor update? Maybe the actual reason was that the Docker version on the host also changed, since we wouldn't be aware of an infrastructure change. And in general, why is Docker adding these entries to /etc/resolv.conf? It doesn't seem very useful, but if it is, maybe there is a better approach for this in general?
A very helpful blog post on this matter was also published by davd.io.

Not able to create Hyperledger Fabric network using Cello

I started Cello and I am able to open the user dashboard on 8081 but not the operator dashboard (I checked the port and found a service named docker-proxy is running). Now on the user dashboard I am trying to create a new network, but it's throwing the error "Apply Chain myorg fail". I have attached the screenshot. I checked the POST request response and it shows {"success":false,"message":"System maintenance, please try again later!"}
The Cello system is recommended to be deployed on multiple servers, at least 1 Master Node + 1 Worker Node.
Make sure that you have completed the worker installation:
http://hyperledger-cello.readthedocs.io/en/latest/installation_worker_docker/
Then make sure you have added the worker hosts in the operator dashboard and that the chains are active:
http://localhost:8080/view/hosts (Add hosts)
http://localhost:8080/view/clusters?type=active (Check chains active or not)
For more information on Cello installation, refer to: Hyperledger Cello Installation

Confluence in Docker can't see PostgreSQL in Docker

I'm trying to set up both Confluence and PostgreSQL in Docker. I've got them both up and running on my fully up to date CentOS 6 machine, with volume-mapping to the host file system so I can back them up easily. I can connect to PostgreSQL using pgAdmin from another machine just fine, and I can get into Confluence from a browser from that same machine. So, basically, both apps seem to be running as expected inside their respective containers and are accessible to the outside world, which of course eliminates a whole bunch of possibilities for my issue.
And that issue is that Confluence can't talk to PostgreSQL during initial setup, which is necessary for it to function. I'm getting connection failed errors (to be specific: "Can't reach database server or port : SQLState - 08001 org.postgresql.util.PSQLException: The connection attempt failed").
PostgreSQL is using the default port 5432, which of course is exposed, otherwise I wouldn't be able to connect to it via pgAdmin, and for the same reason I know the ID/password I'm trying is correct (besides, if it were an auth problem I wouldn't expect to see this error message). When I try to configure the database connection during Confluence's initial setup, I specify the IP address of the host machine, just like from pgAdmin on the other machine, but that doesn't work. I also tried some things that I basically knew wouldn't work (0.0.0.0, 127.0.0.1 and localhost).
I'm not sure what I need to do to make this work. Is there maybe some special method to specify the IP to a container from the same host machine, some nomenclature I'm not aware of?
At this point, I'm "okay" with Docker in terms of basic operations, but I'm far from an expert, so I'm a bit lost. I'm also not a big-time *nix user generally, though I can usually fumble my way through most things... but any hints would be greatly appreciated because I'm at a loss right now otherwise.
Thanks,
Frank
EDIT 1: As requested by someone below, here's my pg_hba.conf file, minus comments:
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
local replication all trust
host replication all 127.0.0.1/32 trust
host replication all ::1/128 trust
host all all all md5
Try changing the second line of the pg_hba.conf file to the following:
host all all 0.0.0.0/0 trust
This will cause PostgreSQL to start accepting calls from any source address. Since a Docker container is technically not operating on localhost but on its own IP, the current configuration causes PostgreSQL to block any connections to it.
Also check whether Confluence is searching for the database on localhost. If that is the case, change it to the IP of the host machine within the Docker network.
Success! The solution was to create a custom network and then use the container name in the connection string from the Confluence container to the PostgreSQL container. In other words, I ran this:
docker network create -d bridge docker-net
Then, on both of the docker run commands for the PostgreSQL and Confluence containers, I added:
--network=docker-net
That way, when the Confluence configuration wizard asked for the hostname of the PostgreSQL server, I used postgres (the name I gave the container) rather than an IP address or actual hostname. Docker makes that work thanks to the custom network. This also leaves the containers available via the IP of the host machine, so for example I can still connect to PostgreSQL via 192.168.123.12:5432, and of course I can launch Confluence in the browser via 192.168.123.12:8080.
FYI, I didn't even have to alter the pg_hba.conf file; I just used the official PostgreSQL image (latest) as it was, which is ideal.
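Put together, the commands look roughly like this (a sketch; the image names, password, and port mappings are illustrative, not the exact ones used):

# user-defined bridge network so containers can resolve each other by name
docker network create -d bridge docker-net

# PostgreSQL container, named "postgres" so Confluence can use that name as the DB host
docker run -d --name postgres --network=docker-net \
  -e POSTGRES_PASSWORD=changeme -p 5432:5432 postgres:latest

# Confluence container on the same network (image name and ports assumed)
docker run -d --name confluence --network=docker-net \
  -p 8080:8090 atlassian/confluence-server:latest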
Thanks very much to RSloeserwij for the suggestions... while none of them proved to be the solution I needed, they did put me on the right track in the Docker docs, which, after some reading, led me to understand a few things I didn't before and figure out the config magic I needed.

Why am I getting SSL_read errors and Rpc_client_frag_read errors when trying to Remote Desktop

I'm trying to set up a remote desktop session for monitoring specific systems at my place of work. I only have access to a Linux machine, and I need to connect via a terminal server gateway. I am using FreeRDP to do this, and I am using the following command to create the connection:
xfreerdp /d:** /u:***** /p:******* /g:******.************.***
/v:****.*********.***** /port:3389 /size:1920x1080
I have hidden all connection details per my supervisor's request; however, both he and I verified that the correct information is entered in those fields.
When I send the connection through I get the following error:
Connected to ******.************.***:443
Connected to ******.************.***:443
TS Gateway Connection Success
Got stub length 4 with flags 3 and called 7
Got stub length 4 with flags 3 and called 6
SSL_read: I/O error: connection reset by peer (104)
Rpc_client_frag_read: error reading header
Would anyone have any idea of what I might be missing? I have even tried adding
/sec:rdp
to the command, and even that produced the same error.
Try RDP from a Windows system (or have someone else try from their system, since you don't have direct access to Windows). I know it won't solve your problem, but it may give you better information. I'm in a similar situation and got the same error message. I tried Remmina instead of xfreerdp and got even less information than xfreerdp spits out.
From a Windows VM, at least I could tell when I got my domain\username & password right -- it told me my account was not allowed RDP access to that server. I'm figuring that means there are accounts that can RDP in, but mine is not among them. Along the way, though, I found that the remote was using a certificate from an untrusted authority, which was useful information in my case.
If your Linux is old or hasn't been updated, update it; your certificate store may be out of date. But it may also be that your company's Windows domain has certificates that Linux doesn't know about. It could be a simple matter of lacking the company-supplied cert (because they push it to all Windows machines on the domain, but your Linux machine doesn't get that "benefit").

Node.js - Couchbase - Warning: We are having troubles communicating to the indexer process. The information might be stale

I'm using Couchbase Version 4.1.0-5005 Enterprise Edition (build-5005) and trying to run the bikeShop sample, but it's not showing the index.
It gives me this error:
We are having troubles communicating to the indexer process. The information might be stale
I opened all ports (for testing) as told in this tutorial, but I get the same error.
My OS is Windows 7.
What should I do? Thanks.
The reason could be that the ports are not opened on the server or on any of the cluster servers.
Try accessing
http://SERVER_ADDRESS:9102/getIndexStatus with basic auth and the server should return an error message.
Try opening ports 9100, 9101, 9102 and 9999, then retry the same URL and you will get a success message with the list of indexes on the Couchbase server.
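For example (the credentials here are hypothetical placeholders):

# query the indexer service directly with basic auth
curl -u Administrator:password http://SERVER_ADDRESS:9102/getIndexStatus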
Do you have a single-node cluster?
Did you check all of the services (Data, Index, Query) on your node?
Thanks,
Roi
For those who are working on CB v5.1, try accessing "curl -v -u [user]:[pwd] http://hostname:8091/indexStatus"
