Hyperledger Composer - connection issue when using TLS - hyperledger-fabric

I'm having issues deploying Composer on top of a multi-org, multi-peer network. My network has two CAs, one orderer and six peers (two per org).
The network uses TLS, which is giving me some issues. When running
composer network ping -n network2 -p org1 -i user -s pass
I am receiving SSL errors:
E0913 16:54:49.855499904 120141 ssl_transport_security.c:921] Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed.
E0913 16:54:49.864638248 120141 ssl_transport_security.c:921] Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed.
E0913 16:54:49.865108661 120141 ssl_transport_security.c:921] Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed.
E0913 16:54:49.865506771 120141 ssl_transport_security.c:921] Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed.
Error: Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
Command failed
Here is my connection profile:
{
  "type": "hlfv1",
  "name": "org1",
  "orderers": [
    {
      "url": "grpcs://localhost:7050",
      "cert": "-----BEGIN CERTIFICATE-----removed-----END CERTIFICATE-----\n"
    }
  ],
  "ca": {
    "url": "http://localhost:7054",
    "name": "ca_peerOrg1",
    "trustedRoots": [""],
    "verify": true
  },
  "peers": [
    {
      "requestURL": "grpcs://localhost:7051",
      "eventURL": "grpcs://localhost:7053",
      "cert": "-----BEGIN CERTIFICATE-----removed-----END CERTIFICATE-----\n"
    },
    {
      "requestURL": "grpcs://localhost:8051",
      "eventURL": "grpcs://localhost:8053",
      "cert": "-----BEGIN CERTIFICATE-----removed-----END CERTIFICATE-----\n"
    }
  ],
  "keyValStore": "/home/paul/.composer-credentials",
  "channel": "mychannel",
  "mspID": "Org1MSP",
  "timeout": "300",
  "globalcert": "",
  "maxSendSize": -1,
  "maxRecvSize": -1
}
The value of cert matches the contents of the .pem file used to start the CA (sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem').
Any ideas how I get this working with TLS? None of the Composer commands are working; they all give me the same errors.

If you used cryptogen to generate your certificates, then there will be tls folders for your organisations which contain the public certificates you need to put into the connection profile. The certificate you use for the CA configuration is not the correct certificate to use here.
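For reference, a cryptogen-generated crypto-config tree typically contains a tls folder per node, along the lines of the sketch below (paths assume the default org1.example.com naming from the Fabric samples, so adjust them to your own organisation and domain names). The ca.crt in each tls folder is the certificate whose contents belong in the cert fields of the connection profile:

crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/
├── ca.crt        <- TLS CA certificate: contents go into that peer's "cert" field
├── server.crt
└── server.key
crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/
├── ca.crt        <- TLS CA certificate: contents go into the orderer's "cert" field
├── server.crt
└── server.key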

Related

Issue with Keycloak and nestjs

I have been trying to include Keycloak authentication with my NestJS app and this is driving me crazy. I keep getting an error
"WARN [Keycloak] Cannot validate access token: Error: Grant validation failed. Reason: failed to load public key to verify token. Reason: connect ECONNREFUSED ::1:8080"
My Keycloak.json file is:
{
"realm": "my-realm",
"auth-server-url": "http://localhost:8080/",
"ssl-required": "external",
"resource": "test",
"verify-token-audience": false,
"credentials": {
"secret": "my-secret"
},
"policy-enforcer": {}
}
This is being imported in Apps.module.ts as:
KeycloakConnectModule.register('./dist/keycloak.json', {
  policyEnforcement: PolicyEnforcementMode.PERMISSIVE,
  tokenValidation: TokenValidation.ONLINE,
}),
I am using Keycloak version 19.0.1 and nest-keycloak-connect v1.9.0.
When I tried debugging, grant-manager.js's public key is undefined. I checked the well-known config, and jwks-uri was defined as:
http://localhost:8080/realms/my-realm/protocol/openid-connect/certs
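(That check amounts to fetching the realm's OpenID Connect discovery document and picking out jwks_uri, roughly as in the sketch below; the grep is just there to isolate the field:)

curl -s http://localhost:8080/realms/my-realm/.well-known/openid-configuration | grep jwks_uri
# expected to contain the same value as above:
# "jwks_uri":"http://localhost:8080/realms/my-realm/protocol/openid-connect/certs"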
Any ideas on what might be wrong?

Log all failed attempts in testcafe quarantine mode?

I have quarantine mode enabled in my testcafe configuration.
"ci-e2e": {
"browsers": [
"chrome:headless"
],
"debugOnFail": false,
"src": "./tests/e2e/*.test.ts",
"concurrency": 1,
"quarantineMode": true,
"reporters": [
{
"name": "nunit3",
"output": "results/e2e/testResults.xml"
},
{
"name": "spec"
}
],
"screenshots": {
"takeOnFails": true,
"path": "results/ui/screenshots",
"pathPattern": "${DATE}_${TIME}/${FIXTURE}/${TEST}/Screenshot-${QUARANTINE_ATTEMPT}.png"
},
"video": {
"path": "results/ui/video",
"failedOnly": true,
"pathPattern": "${DATE}_${TIME}/${FIXTURE}/${TEST}/Video-${QUARANTINE_ATTEMPT}"
}
},
Now when an attempt fails, I get an entry in the log (NUnit XML logfile) with information about the failed runs but only one stack trace. I do have a screenshot for each failed run.
<failure>
  <message>
    <![CDATA[ ❌ AssertionError: ... Run 1: Failed Run 2: Failed Run 3: Failed ]]>
  </message>
  <stack-trace>
    here we have stack-trace for only one failed run
  </stack-trace>
</failure>
I want a log entry with a stack trace for each failed run of each failed test. Is it possible to configure TestCafe this way? If not, what do I need to do?
There is a mistake in the config file. The name of the option for reporters should be reporter, but you have reporterS. This means TestCafe doesn't use these reporters at all, and you may just be looking at an outdated results file.
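A corrected fragment might look like the sketch below; only the option name changes, the reporter entries themselves are kept from the question:

"reporter": [
  {
    "name": "nunit3",
    "output": "results/e2e/testResults.xml"
  },
  {
    "name": "spec"
  }
],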

Failed executing transaction to invoke Hyperledger Fabric function using Kaleido FabConnect API

I've been using Kaleido's FabConnect API to invoke some transactions from a sample fabric smart contract using this request:
curl -X 'POST' \
  'https://u0jzrmv8ok-u0nh6n12o1-connect.us0-aws-ws.kaleido.io/transactions?fly-sync=true' \
  -H 'accept: */*' \
  -H 'Content-Type: application/json' \
  -d '{
    "headers": {
      "type": "SendTransaction",
      "signer": "user2",
      "channel": "ustrades",
      "chaincode": "asset_transfer"
    },
    "func": "GetAllAssets",
    "args": [
      "string"
    ],
    "init": false
  }'
but I get the following error:
{
  "error": "Failed to submit: error getting channel response for channel [ustrades]: Discovery status Code: (11) UNKNOWN. Description: error received from Discovery Server: failed constructing descriptor for chaincodes:<name:"asset_transfer" > "
}
I've seen a similar problem where the solution offered was to add anchor peer nodes, but how exactly do you do that on Kaleido? Their customer support is slow getting back to me, so I thought I'd ask here.

Wolkenkit fails to start with "Error: Failed to get lowest processed position."

I am currently looking into Wolkenkit by following the tutorial to create a chat application.
After finishing the code, I ran sudo yarn wolkenkit start, which gave me the following error message:
Waiting for https://localhost:3000/ to reply...
(node:11226) Warning: Setting the NODE_TLS_REJECT_UNAUTHORIZED environment variable to '0' makes TLS connections and HTTPS requests insecure by disabling certificate verification.
Error: Failed to get lowest processed position.
at EventSequencer.getLowestProcessedPosition (/wolkenkit/eventSequencer/EventSequencer.js:71:13)
at /wolkenkit/app.js:63:41
at process._tickCallback (internal/process/next_tick.js:68:7)
Application code caused runtime error.
✗ Failed to start the application.
A bit above the error, the command warns about:
▻ Application certificate is self-signed.
I would appreciate any help on how to solve this and get the demo application to run on my local machine.
My development machine is running Debian GNU/Linux 10 with
Node 13.8.0
Yarn 1.21.1
Docker 18.09.1
Wolkenkit 3.1.2
Because of the warnings, I suspect this could be related to the X.509 certificate used for TLS. I created it using openssl as follows:
$ openssl req -new -sha256 -nodes -out localhost.csr -newkey rsa:2048 -keyout localhost.key -config <(
cat <<-EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
C=US
ST=New York
L=Rochester
O=Something
OU=Something Else
emailAddress=test@example.com
CN = localhost
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
EOF
)
$ openssl x509 -req -days 365 -in localhost.csr -signkey localhost.key -sha256 -out localhost.crt
Then I moved the localhost.crt and localhost.key into the following structure:
server/keys/localhost
├── certificate.pem
└── privateKey.pem
And set up a package.json like this:
{
"name": "chat",
"version": "0.0.0",
"wolkenkit": {
"application": "chat",
"runtime": {
"version": "3.1.0"
},
"environments": {
"default": {
"api": {
"address": {
"host": "localhost",
"port": 3000
},
"certificate": "/server/keys/localhost",
"allowAccessFrom": "*"
},
"fileStorage": {
"allowAccessFrom": "*"
},
"node": {
"environment": "development"
}
}
}
},
"dependencies": {
"wolkenkit": "^3.1.2"
}
}
It seems like this could be the same problem described in this GitHub issue:
The problem is that due to a change in the start command, we now assume that there must be a read model (which has not yet been defined, if you follow the guide).
If you simply ignore this error and follow on, the next thing is to define the read model. Once you have done that, you can successfully run wolkenkit start.

Error while sending query request from client: No peers available to query

I am getting the following error while sending a query request from my client.
FabricError: No peers available to query. Errors: ["Failed to connect before the deadline
URL:grpcs://localhost:12051","Failed to connect before the deadline
URL:grpcs://localhost:11051"].
The following is the relevant part of my connection-org3.json connection profile:
"organizations": {
"Org3": {
"mspid": "Org3MSP",
"peers": [
"peer0.org3.bc4scm.de",
"peer1.org3.bc4scm.de"
],
"certificateAuthorities": [
"ca.org3.bc4scm.de"
]
}
},
"peers": {
"peer0.org3.bc4scm.de": {
"url": "grpcs://localhost:11051",
"tlsCACerts": {
"path": "crypto-config/peerOrganizations/org3.bc4scm.de/tlsca/tlsca.org3.bc4scm.de-cert.pem"
},
"grpcOptions": {
"ssl-target-name-override": "peer0.org3.bc4scm.de"
}
},
"peer1.org3.bc4scm.de": {
"url": "grpcs://localhost:12051",
"tlsCACerts": {
"path": "crypto-config/peerOrganizations/supplier.bc4scm.de/tlsca/tlsca.org3.bc4scm.de-cert.pem"
},
"grpcOptions": {
"ssl-target-name-override": "peer1.org3.bc4scm.de"
}
}
},
"certificateAuthorities": {
"ca.org3.bc4scm.de": {
"url": "https://localhost:9054",
"caName": "ca-supplier",
"tlsCACerts": {
"path": "crypto-config/peerOrganizations/org3.bc4scm.de/tlsca/tlsca.org3.bc4scm.de-cert.pem"
},
"httpOptions": {
"verify": false
}
}
}
And the following is part of my docker-compose file:
peer0.org3.bc4scm.de:
  container_name: peer0.org3.bc4scm.de
  extends:
    file: peer-base.yaml
    service: peer-base
  environment:
    - CORE_PEER_ID=peer0.org3.bc4scm.de
    - CORE_PEER_ADDRESS=peer0.org3.bc4scm.de:11051
    - CORE_PEER_LISTENADDRESS=0.0.0.0:11051
    - CORE_PEER_CHAINCODEADDRESS=peer0.org3.bc4scm.de:11052
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:11052
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org3.bc4scm.de:12051
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org3.bc4scm.de:11051
    - CORE_PEER_LOCALMSPID=Org3MSP
  volumes:
    - /var/run/:/host/var/run/
    - ../crypto-config/peerOrganizations/org3.bc4scm.de/peers/peer0.org3.bc4scm.de/msp:/etc/hyperledger/fabric/msp
    - ../crypto-config/peerOrganizations/org3.bc4scm.de/peers/peer0.org3.bc4scm.de/tls:/etc/hyperledger/fabric/tls
    - peer0.org3.bc4scm.de:/var/hyperledger/production
  ports:
    - 11051:11051

peer1.org3.bc4scm.de:
  container_name: peer1.org3.bc4scm.de
  extends:
    file: peer-base.yaml
    service: peer-base
  environment:
    - CORE_PEER_ID=peer1.org3.bc4scm.de
    - CORE_PEER_ADDRESS=peer1.org3.bc4scm.de:12051
    - CORE_PEER_LISTENADDRESS=0.0.0.0:12051
    - CORE_PEER_CHAINCODEADDRESS=peer1.org3.bc4scm.de:12052
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:12052
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org3.bc4scm.de:11051
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org3.bc4scm.de:12051
    - CORE_PEER_LOCALMSPID=Org3MSP
  volumes:
    - /var/run/:/host/var/run/
    - ../crypto-config/peerOrganizations/org3.bc4scm.de/peers/peer1.org3.bc4scm.de/msp:/etc/hyperledger/fabric/msp
    - ../crypto-config/peerOrganizations/supplier.bc4scm.de/peers/peer1.org3.bc4scm.de/tls:/etc/hyperledger/fabric/tls
    - peer1.org3.bc4scm.de:/var/hyperledger/production
  ports:
    - 12051:12051
I got this code from the Fabcar sample and tried to query from a client in Org3 instead of Org1. I created an admin user and then a user in this organization successfully. According to my observations, the error comes from the execution of the following line:
const result = await contract.evaluateTransaction('queryAllProducts','123');
What is the possible reason for this issue? I'd appreciate your insights on this.
Updates:
I checked the open ports in peer0.org3.bc4scm.de:
root@e52992a76c3d:/opt/gopath/src/github.com/hyperledger/fabric/peer# netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.1:9443 0.0.0.0:* LISTEN 1/peer
tcp 0 0 127.0.0.11:46353 0.0.0.0:* LISTEN -
tcp6 0 0 :::11051 :::* LISTEN 1/peer
tcp6 0 0 :::6060 :::* LISTEN 1/peer
tcp6 0 0 :::11052 :::* LISTEN 1/peer
Here I can see ports 11051 and 11052 are open and listening.
Also, there is a container for the installed chaincode.
cd0b165e5186 dev-peer0.org3.bc4scm.de-scmlogic-1.0-9c7e776aa8a752e530f79d0b456f1bda28aac3f5db0af734be2f315d8d1a4f53 "/bin/sh -c 'cd /usr…" 48 seconds ago Up 47 seconds dev-peer0.org3.bc4scm.de-scmlogic-1.0
When I look at the logs of that peer (peer0.org3), I can see the following error logged continuously. It is complaining about the connection with org1:
2019-07-06 10:26:52.278 UTC [gossip.discovery] expireDeadMembers -> WARN 164 Exiting
2019-07-06 10:26:56.381 UTC [gossip.comm] func1 -> WARN 165 peer1.org1.bc4scm.de:8051, PKIid:42214b7584f3fabcdb84e5770c62e4cf0f7c00b2a9d0441d772925882d4457a7 isn't responsive: EOF
2019-07-06 10:26:56.381 UTC [gossip.discovery] expireDeadMembers -> WARN 166 Entering [42214b7584f3fabcdb84e5770c62e4cf0f7c00b2a9d0441d772925882d4457a7]
2019-07-06 10:26:56.381 UTC [gossip.discovery] expireDeadMembers -> WARN 167 Closing connection to Endpoint: peer1.org1.bc4scm.de:8051, InternalEndpoint: , PKI-ID: 42214b7584f3fabcdb84e5770c62e4cf0f7c00b2a
You could check whether the peer is even accessible from a browser (e.g. Firefox): request localhost:11051, and if you see a response your peer is reachable. If not, the port is not open, so go to the docker-compose file, expose the port, and bring the peer up again with docker compose. Do the same for every peer you want to access.
You can also check the peer logs using:
docker logs --follow peer0.org3.bc4scm.de
Update:
Check CORE_PEER_GOSSIP_BOOTSTRAP and CORE_PEER_GOSSIP_EXTERNALENDPOINT for both peers:
CORE_PEER_GOSSIP_BOOTSTRAP=<a list of peer endpoints within the peer's org>
CORE_PEER_GOSSIP_EXTERNALENDPOINT=<the peer endpoint, as known outside the org>
For peer0.org3.bc4scm.de:
CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org3.bc4scm.de:12051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org3.bc4scm.de:11051
For peer1.org3.bc4scm.de:
CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org3.bc4scm.de:11051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org3.bc4scm.de:12051
Adjust the ports for your peers accordingly and bring up your docker-compose file again.
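In docker-compose terms that means adjusting the environment entries of the two peer services roughly as in the sketch below (values taken from this answer; everything else in the service definitions stays as in the question):

peer0.org3.bc4scm.de:
  environment:
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org3.bc4scm.de:12051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org3.bc4scm.de:11051
peer1.org3.bc4scm.de:
  environment:
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org3.bc4scm.de:11051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org3.bc4scm.de:12051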
This could be due to multiple reasons:
Your peers are not accessible, so first check whether these ports are open or not.
You should confirm whether the chaincode is installed on these peers or not.
If neither of those is the case, then you must check the logs inside the docker containers of the chaincode and of these peers, and for that you can use:
docker exec -it [container-name] bash
Do tell me if you find something there and you can't resolve it.
I had this same problem and realized the issue was that I had set the asLocalhost property to false and was trying to access the peers at http://localhost/. Below is the working line with the property set correctly. (I pulled it from an example using fabcar, which was great otherwise.)
await gateway.connect(ccpPath, { wallet, identity: 'user1', discovery: { enabled: true, asLocalhost: true } });
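For context, a minimal fabcar-style query sketch around that call could look like the following; the wallet path, connection profile path, channel name and chaincode name are placeholders borrowed from the Fabric 1.4 samples, not from the question's network:

const { FileSystemWallet, Gateway } = require('fabric-network'); // Fabric Node SDK 1.4 style, as in the fabcar sample

async function query() {
  const ccpPath = './connection-org1.json';         // placeholder path to the connection profile
  const wallet = new FileSystemWallet('./wallet');  // wallet holding the enrolled 'user1' identity
  const gateway = new Gateway();
  // asLocalhost: true maps discovered peer addresses back to the ports published on localhost
  await gateway.connect(ccpPath, { wallet, identity: 'user1', discovery: { enabled: true, asLocalhost: true } });
  const network = await gateway.getNetwork('mychannel');   // placeholder channel name
  const contract = network.getContract('fabcar');          // placeholder chaincode name
  const result = await contract.evaluateTransaction('queryAllCars');
  console.log(result.toString());
  gateway.disconnect();
}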
