I want to run the fabcar-samples on Windows 10, following http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html, but I get this error:
$ ./startFabric.sh
orderer.example.com is up-to-date
couchdb is up-to-date
peer0.org1.example.com is up-to-date
cli is up-to-date
2017-07-05 08:17:06.550 UTC [main] main -> ERRO 001 Cannot run peer because
cannot init crypto, missing /etc/hyperledger/fabric/C:/Program
Files/Git/etc/hyperledger/msp/users/Admin@org1.example.com/msp folder
Some of the tools I have installed:
$ npm -v
5.0.4
$ node -v
v6.11.0
$ curl -V
curl 7.54.0 (x86_64-w64-mingw32) libcurl/7.54.0 OpenSSL/1.0.2l zlib/1.2.11
libssh2/1.8.0 nghttp2/1.23.1 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3
pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
Features: IPv6 Largefile SSPI Kerberos SPNEGO NTLM SSL libz TLS-SRP HTTP2
HTTPS-proxy Metalink
$ docker --version
Docker version 17.06.0-ce, build 02c1d87
$ docker-compose --version
docker-compose version 1.14.0, build c7bdf9e3
$ git --version
git version 2.13.1.windows.2
Please help, thanks.
This is a problem with mingw64, which is mangling the file paths.
The solution is to set the following environment variable before running startFabric.sh:
export MSYS_NO_PATHCONV=1
A fix was just submitted to fabric to do that for you, so if you pull the latest version from the master branch it should work. Otherwise, just set that variable and that should solve your problem.
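For example, from Git Bash:
export MSYS_NO_PATHCONV=1   # stop MinGW from rewriting /etc/... into a Windows path
./startFabric.sh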
Arnaud
The problem is an incorrect path to the certificates (note the Windows path wedged between /fabric and /etc):
/etc/hyperledger/fabric/C:/ProgramFiles/Git/etc/hyperledger/msp/users/Admin@org1.example.com/msp
You can try the following:
add the certificate path as an environment variable in the peer section of the docker-compose file
start the network using docker-compose -f "path_to_file"
manually run the instructions from startFabric.sh against the peer and cli containers
Then you can run node query.js to test that the network works.
A double // at the start of the path will fix this. Update it in all the places where a docker exec command is used,
e.g. MSPCONFIGPATH=//etc/hyperledger....
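For illustration, here is how the channel-create step from the fabcar startFabric.sh would look with the workaround applied (a sketch based on the v1.0 sample script; adapt container names and paths to your version):
docker exec -e "CORE_PEER_LOCALMSPID=Org1MSP" -e "CORE_PEER_MSPCONFIGPATH=//etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c mychannel -f //etc/hyperledger/configtx/channel.tx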
Also, certificates have to be generated before you can get your network running.
Do ./byfn.sh -m down first, then run ./byfn.sh -m generate, then ./byfn.sh -m up.
This may happen when the network has been brought down via network.sh. Try bringing it back up with the -ca flag and check; that worked for me.
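For example, from the test-network directory (a sketch; createChannel is the usual companion mode in the Fabric test-network docs):
./network.sh down
./network.sh up createChannel -ca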
Before you run the fabcar-samples, I think that you have to execute some steps from the "Building Your First Network" chapter. It seems that you haven't got the required certificates to start the network. Also, you should generate the genesis block, the channel configuration transaction and the anchor peers.
You can do it by executing the ./byfn.sh -m generate command. For more information: http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html#generate-network-artifacts
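Under the hood, ./byfn.sh -m generate runs roughly the following (a sketch of the first-network sample's generate steps; profile names and paths may differ across releases):
cryptogen generate --config=./crypto-config.yaml
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
# ...and the same -outputAnchorPeersUpdate step again for Org2MSP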
I faced the same issue using Fabric 2.2's test-network. To resolve it:
Start Docker again
Set FABRIC_CFG_PATH, CORE_PEER_TLS_ENABLED, CORE_PEER_LOCALMSPID, CORE_PEER_TLS_ROOTCERT_FILE, CORE_PEER_ADDRESS and CORE_PEER_MSPCONFIGPATH again
Run your queries from the test-network sub-directory.
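For reference, the Org1 values from the Fabric 2.2 test-network tutorial look like this (run from the test-network directory):
export FABRIC_CFG_PATH=$PWD/../config/
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=localhost:7051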
I'm new to Linux; I just installed Lubuntu and hit a problem when trying to clone my remote work repo from my company's git:
$ sudo git clone https://path/to/repo.git
I keep receiving this error:
Cloning into 'repo'...
fatal: unable to access 'https://path/to/repo.git/': server certificate verification failed. CAfile: none CRLfile: none
I know it mentions certificates, but I do not have any. Before, I worked on Windows and was able to simply git clone this repo without any certs.
This error means that the git client cannot verify the integrity of the certificate chain or root. The proper way to resolve this issue is to make sure the certificate from the remote repository is valid, and then added to the client system.
Update list of public CA
The first thing I would recommend is to simply update the list of root CAs known to the system, as shown below.
# update CA certificates
sudo apt-get install apt-transport-https ca-certificates -y
sudo update-ca-certificates
This may help if you are dealing with a system that has not been updated for a long time, but of course won’t resolve an issue with private certs.
Fetch certificates, direct connection
The error from the git client will be resolved if you add the certs from the remote git server to the list of locally checked certificates. This can be done by using openssl to pull the certificates from the remote host:
openssl s_client -showcerts -servername git.mycompany.com -connect git.mycompany.com:443 </dev/null 2>/dev/null | sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/ p' > git-mycompany-com.pem
This will fetch the certificate used by “https://git.mycompany.com”, and copy the contents into a local file named “git-mycompany-com.pem”.
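You can sanity-check the fetched certificate before trusting it:
openssl x509 -in git-mycompany-com.pem -noout -subject -issuer -dates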
Fetch certificates, web proxy
If this host only has access to the git server via a web proxy like Squid, note that openssl can only use a Squid proxy on OpenSSL 1.1.0 and higher. If you are using an older version of OpenSSL, you will need to work around this limitation with something like socat, binding locally to port 4443 and proxying the traffic through Squid to the final destination.
# install socat
sudo apt-get install socat -y
# listen locally on 4443, send traffic through squid "squidhost"
socat TCP4-LISTEN:4443,reuseaddr,fork PROXY:squidhost:git.mycompany.com:443,proxyport=3128
Then in another console, tell OpenSSL to pull the certificate from the localhost at port 4443.
openssl s_client -showcerts -servername git.mycompany.com -connect 127.0.0.1:4443 </dev/null 2>/dev/null | sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/ p' > git-mycompany-com.pem
Add certificate to local certificate list
Whether by proxy or direct connection, you now have a list of the remote certificates in a file named “git-mycompany-com.pem”. This file will contain the certificate, its intermediate chain, and root CA certificate.
The next step is to have the git client consider these certificates when connecting to the git server. This can be done either by adding the certificates to the file mentioned in the original error, in which case the change applies globally to all users, or by adding them to this single user's git configuration.
** Adding globally **
cat git-mycompany-com.pem | sudo tee -a /etc/ssl/certs/ca-certificates.crt
** Adding for single user **
git config --global http."https://git.mycompany.com/".sslCAInfo ~/git-mycompany-com.pem
Which silently adds the following lines to ~/.gitconfig
[http "https://git.mycompany.com/"]
sslCAInfo = /home/user/git-mycompany-com.pem
Avoid workarounds
Avoid workarounds that skip SSL certification validation. Only use them to quickly test that certificates are the root issue, then use the sections above to resolve the issue.
git config --global http.sslverify false
export GIT_SSL_NO_VERIFY=true
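If you do need a quick test, prefer a one-off invocation over a persistent setting, e.g.:
git -c http.sslVerify=false ls-remote https://git.mycompany.com/path/to/repo.git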
I know there is an answer already. Just for those who use a private network, like Zscaler: this error can occur if your root cert needs to be updated. Here is how the update can be achieved when using WSL on a Windows machine:
#!/usr/bin/bash
# I exported the Zscaler certificate out of Microsoft Cert Manager. It was located under 'Trusted Root Certification > Certificates' as zscaler_cert.cer.
# Though the extension is '.cer' it really is a DER formatted file.
# I then copied that file into Ubuntu running in WSL.
# Convert DER encoded file to CRT.
openssl x509 -inform DER -in zscaler_cert.cer -out zscaler_cert.crt
# Move the CRT file to /usr/local/share/ca-certificates
sudo mv zscaler_cert.crt /usr/local/share/ca-certificates
# Inform Ubuntu of new cert.
sudo update-ca-certificates
Everything was working fine but suddenly I am getting the error:
fatal: unable to access
'https://username@bitbucket.org/name/repo_name.git/':
gnutls_handshake() failed: Handshake failed
I am getting this on my computer as well as an EC2 instance. When I tried on another computer then it is working fine there.
I have tried many solutions from Stack Overflow and from other forums, but nothing worked!
On the computer, the OS is Linux Mint 17; on the EC2 instance, Ubuntu 14.04.6 LTS.
What can be the issue and what should I do to fix this issue?
Ran into the same issue on a server with Ubuntu 14.04, and found that on Aug 24, 2020 bitbucket.org changed to no longer allow old ciphers, see https://bitbucket.org/blog/update-to-supported-cipher-suites-in-bitbucket-cloud
This affects https:// connections to bitbucket, but does not affect ssh connections, so the quickest solution for me was to add an ssh key to bitbucket, and then change the remote from https to ssh.
The steps to change the remote are essentially:
# Find the current remote
git remote -v
origin https://user@bitbucket.org/reponame.git (fetch)
origin https://user@bitbucket.org/reponame.git (push)
# Change the remote to ssh
git remote set-url origin git@bitbucket.org:reponame.git
# Check the remote again to make sure it changed
git remote -v
There is more discussion about the issue on the Atlassian forums at https://community.atlassian.com/t5/Bitbucket-questions/fatal-unable-to-access-https-bitbucket-org-gnutls-handshake/qaq-p/1468075
The quickest solution is to use SSH instead of HTTPS. I tried other ways to fix the issue, but they did not work.
The following are the steps to replace HTTPS with SSH:
Generate an ssh key using ssh-keygen on the server.
Copy the public key from the id_rsa.pub file generated in step 1 and add it at the following link, depending on the repository host -
Bitbucket - https://bitbucket.org/account/settings/ssh-keys/
Github - https://github.com/settings/ssh/new
Gitlab - https://gitlab.com/profile/keys
Now run the following command to test authentication from the server's command line terminal:
Bitbucket
ssh -T git@bitbucket.org
Github
ssh -T git@github.com
Gitlab
ssh -T git@gitlab.com
Go to the repo directory and open the .git/config file using emacs, vi, or nano.
Replace remote "origin" URL (which starts with https) with the following -
For Bitbucket - git@bitbucket.org:<username>/<repo>.git
For Github - git@github.com:<username>/<repo>.git
For Gitlab - git@gitlab.com:<username>/<repo>.git
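Equivalently, instead of editing .git/config by hand, you can run (Bitbucket shown; substitute your host, username, and repo):
git remote set-url origin git@bitbucket.org:<username>/<repo>.git
git remote -v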
Alternatively, you can rebuild OpenSSL, curl, and git from source so that git links against a newer TLS stack:
sudo bash
mkdir upgrade
cd upgrade
wget https://www.openssl.org/source/openssl-1.1.1g.tar.gz
tar xpvfz openssl-1.1.1g.tar.gz
cd openssl-1.1.1g
./Configure
make ; make install
cd ..
wget https://curl.haxx.se/download/curl-7.72.0.tar.gz
tar xpvfz curl-7.72.0.tar.gz
cd curl-7.72.0
./configure --with-ssl=/usr/local/ssl
make ; make install
cd ..
git clone https://github.com/git/git
cd git
vi Makefile   # change the prefix= line to /usr instead of $(HOME)
make ; make install
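Afterwards, confirm the rebuilt tools are the ones on your PATH:
openssl version
curl --version | head -1
git --version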
I checked out the fabric-samples project and ran startFabric.sh to start the Fabric blockchain network. After that, I ran node enrollAdmin.js to enroll the new admin.
Now I want to use the fabric-ca-client command line to add a new user to org1. I executed the commands below:
Access the ca_peerOrg1 docker container:
docker exec -it ca_peerOrg1 bash
I check the environment variables:
$FABRIC_CA_CLIENT_HOME is unset
$FABRIC_CA_HOME is /etc/hyperledger/fabric-ca-server
Go to the /etc/hyperledger/fabric-ca-server directory and check that the command
fabric-ca-client
is available. Then run this command:
fabric-ca-client enroll -u http://admin:adminpw@localhost:7054
But it fails with the error below:
Can anyone help? Thanks for reading.
I just encountered the same problem. For anyone who is interested, this error indicates fabric-ca-server is running with TLS enabled.
To get rid of this error, you need to make the following changes to the fabric-ca-client command:
use https instead of http in the url
use ca host name instead of localhost in the url
provide the TLS cert file for the server's listening port via --tls.certfiles
e.g. fabric-ca-client enroll -u https://admin:adminpw@ca.org0.example.com:7054 --tls.certfiles /certs/ca/ca.org0.example.com-cert.pem
The TLS cert file was generated by fabric-ca-server at startup. The default file location is $FABRIC_CA_SERVER_HOME/tls-cert.pem. Otherwise, the location is specified by $FABRIC_CA_SERVER_TLS_CERTFILE or fabric-ca-server-config.yaml
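Putting it together inside the CA container from this question: since $FABRIC_CA_HOME there is /etc/hyperledger/fabric-ca-server, the default TLS cert should be at /etc/hyperledger/fabric-ca-server/tls-cert.pem. The CA hostname below is an assumption; check your docker-compose file for the CA's actual name:
fabric-ca-client enroll -u https://admin:adminpw@ca.org1.example.com:7054 --tls.certfiles /etc/hyperledger/fabric-ca-server/tls-cert.pem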
Booting up my first Hyperledger network on OSX.
I installed the sample files using the script
curl -sSL http://bitlyURLThatStackoverflow won't let me use | bash -s 1.2.1
Then ran
./byfn.sh up -c mychannel -s couchdb
To boot up a sample network and got the error
Error: failed to create deliver client: orderer client failed to connect to orderer.example.com:7050: failed to create new connection: context deadline exceeded
Pulled the logs from orderer.example.com:7050 and got
config requires unsupported channel capabilities: Channel capability V1_3 is required but not supported: Channel capability V1_3 is required but not supported
Any suggestions on where to start debugging?
If you are just getting started, you might want to move to 1.3.0 anyway as it is now generally available.
EDIT: You should now be able to just use 1.2.1 and things should work. I pushed a new v1.2.1 tag for fabric-samples. (read below for explanation).
To answer your question, the way the script works is that it assumes that both the fabric and fabric-samples repositories have tags matching the version specified for download. It turns out that there was no v1.2.1 tag for fabric-samples, so if you cloned it yourself you'll end up with the default (which is 1.3.0). If you want to use the 1.2.1 images, you can simply download them and then run git checkout v1.2.0 in your clone of fabric-samples.
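In practice, that means something like this (assuming the 1.2.x docker images are already downloaded):
cd fabric-samples
git checkout v1.2.0
./byfn.sh up -c mychannel -s couchdb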
I created a machine via docker-machine create -d azure --azure-static-public-ip. But then I intentionally changed the public IP address of that VM. After this, I cannot run docker-machine ssh or any other docker-machine command; it seems it is still sending requests to the previous public IP. How can I point docker-machine at the new IP? I tried docker-machine regenerate-certs and even changing the config.json, but nothing happened...
The only fix I have found is to revert the VM to its previous public IP.
You should be fine with a change of the IP in "config.json". For example, if I had to change the IP of my default docker-machine, I would go here:
/Users/arne/.docker/machine/machines/default/config.json
Adjust the IP and run
docker-machine regenerate-certs myVM
This should work.
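A minimal sketch, assuming the machine is named default and the driver stores the address in an IPAddress field (check your config.json first; OLD_IP and NEW_IP are placeholders):
grep -n '"IPAddress"' ~/.docker/machine/machines/default/config.json
sed -i 's/"IPAddress": "OLD_IP"/"IPAddress": "NEW_IP"/' ~/.docker/machine/machines/default/config.json
docker-machine regenerate-certs default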
Do you mean that when you run docker-machine ssh you get this error:
Error checking TLS connection: Error checking and/or regenerating the
certs: There was an error validating certificates for host
"13.91.60.237:2376": x509: certificate is valid for 40.112.218.127,
not 13.91.60.237 You can attempt to regenerate them using
'docker-machine regenerate-certs [name]'. Be advised that this will
trigger a Docker daemon restart which might stop running containers.
In my test lab, my first IP address was 40.112.218.127; then I changed it to 13.91.60.237 and got this error.
Then I used this command to regenerate the certs: docker-machine regenerate-certs jasonvmm, like this:
[root@jasoncli jasonvmm]# docker-machine regenerate-certs jasonvmm
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
[root@jasoncli jasonvmm]# docker-machine ssh jasonvmm
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-47-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
208 packages can be updated.
109 updates are security updates.
Last login: Fri Dec 8 06:22:09 2017 from 167.220.255.48
Also, we can use this command to check the new settings: docker-machine env jasonvmm
[root@jasoncli jasonvmm]# docker-machine env jasonvmm
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://13.91.60.237:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/jasonvmm"
export DOCKER_MACHINE_NAME="jasonvmm"
# Run this command to configure your shell:
# eval $(docker-machine env jasonvmm)
Please use this command to regenerate the certs: docker-machine regenerate-certs VMname.
Hope this helps.