Unable to enroll Fabric client as admin - Amazon Managed Blockchain - hyperledger-fabric

I'm following the AWS supply chain workshop. I created an EC2 instance and set up a VPC just as the workshop describes. Now I'm connected to the EC2 instance over SSH and I've already downloaded the required packages, set up Docker, and downloaded fabric-ca-client. My problem is configuring the fabric-ca-client.
When I run the fabric-ca-client enroll command with the required params/flags, it returns the following error: Error: Failed to create default configuration file: Failed to parse URL 'https://$USER:=9_phK63?@$CA_ENDPOINT': parse https://user:password@ca_endpoint: invalid port ":=9_phK63?" after host
Here's the complete command I'm trying to run: fabric-ca-client enroll -u https://$USER:$PASSWORD@$CA_ENDPOINT --tls.certfiles ~/managedblockchain-tls-chain.pem -M admin-msp -H $HOME
I'm wondering if the ? in the password is causing the problem. If so, where can I change it?
Workshop link for reference: https://catalog.us-east-1.prod.workshops.aws/workshops/ce1e960e-a811-475f-a221-2afcf57e386a/en-US/02-set-up-a-fabric-client/05-configure-client/06-create-fabric-admin

My name is Forrest, and I am a Blockchain Specialist Solutions Architect at AWS. I'd be happy to help you with this.
Passwords containing special characters need to be URL-encoded before being used in the enrollment URL. For example, $ encodes as %24. As the OP mentioned in the comments below, the JavaScript method encodeURIComponent() can serve this purpose: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent
Please make sure your environment variables are all still correctly set as well:
echo $USER
echo $PASSWORD
echo $CA_ENDPOINT
Your CA endpoint should resolve to something like:
ca.m-XXXXXXXXXXXXX.n-XXXXXXXXXXXXXX.managedblockchain.<AWS_REGION>.amazonaws.com:30002
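As a sketch, the encoding can be done on the instance itself, assuming python3 is available (the sample password is the one from the error message; substitute your own):

```shell
# URL-encode the password so characters like '=' and '?' survive URL parsing.
PASSWORD='=9_phK63?'
ENC_PASSWORD=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' "$PASSWORD")
echo "$ENC_PASSWORD"   # %3D9_phK63%3F

# Then enroll with the encoded password (same flags as in the question):
# fabric-ca-client enroll -u "https://$USER:$ENC_PASSWORD@$CA_ENDPOINT" \
#   --tls.certfiles ~/managedblockchain-tls-chain.pem -M admin-msp -H $HOME
```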

Related

fabric-ca-client enroll with error Failed to read response of request: POST http://localhost:7054

I checked out the fabric-samples project and ran startFabric.sh to start the Fabric blockchain network. After that, I ran node enrollAdmin.js to enroll the new admin.
Now I want to use the fabric-ca-client command line to add a new user to org1. I execute the commands below:
Access the ca_peerOrg1 container:
docker exec -it ca_peerOrg1 bash
I check the values of the environment variables:
$FABRIC_CA_CLIENT_HOME is unset
$FABRIC_CA_HOME is /etc/hyperledger/fabric-ca-server
Go to the /etc/hyperledger/fabric-ca-server directory and confirm the command is available:
fabric-ca-client
And run this command:
fabric-ca-client enroll -u http://admin:adminpw@localhost:7054
But it fails with the error below:
Can anyone help? Thanks for reading.
I just encountered the same problem. For anyone who is interested, this error indicates fabric-ca-server is running with TLS enabled.
To get rid of this error, you need to make the following changes to the fabric-ca-client command:
use https instead of http in the url
use ca host name instead of localhost in the url
provide the TLS cert file for the server's listening port via --tls.certfiles
e.g. fabric-ca-client enroll -u https://admin:adminpw@ca.org0.example.com:7054 --tls.certfiles /certs/ca/ca.org0.example.com-cert.pem
The TLS cert file is generated by fabric-ca-server at startup. The default location is $FABRIC_CA_SERVER_HOME/tls-cert.pem; otherwise, the location is specified by $FABRIC_CA_SERVER_TLS_CERTFILE or in fabric-ca-server-config.yaml.
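Putting the three changes together (the host name and cert path below are the example values from this answer, not values from your network):

```shell
# Build the TLS-enabled enrollment URL: https scheme and the CA host name,
# not http://...localhost.
CA_HOST=ca.org0.example.com
CA_PORT=7054
ENROLL_URL="https://admin:adminpw@${CA_HOST}:${CA_PORT}"
echo "$ENROLL_URL"

# Enroll against the TLS-enabled server, trusting its tls-cert.pem:
# fabric-ca-client enroll -u "$ENROLL_URL" \
#   --tls.certfiles /certs/ca/ca.org0.example.com-cert.pem
```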

Hyperledger fabric: fabric-ca request register failed with errors [[{"code":20,"message":"Authorization failure"}]]

I am trying to create a new identity with this command: composer identity issue -c admin@siemens-network -f administrator1.card -u Administrator1 -a "resource:org.siemens.Administrator#001"
But I get the following output:
Issue identity and create Network Card for: Administrator1
✖ Issuing identity. This may take a few seconds...
Error: fabric-ca request register failed with errors [[{"code":20,"message":"Authorization failure"}]]
Command failed
I already restarted Fabric, but it still doesn't work.
Please check that the admin@siemens-network card exists:
composer card list
If you do not have this card, access the folder containing the createPeerAdminCard.sh file and run
./createPeerAdminCard.sh
Hope it helps you.
I deleted all cards, restarted the network, and reimported all cards. Now it's working.

How to create database and user in influxdb programmatically?

In my use case I am using a single EC2 instance, not a cluster. I want to create a database and a user with all privileges programmatically. Is there a config file I can edit and copy to the right location after InfluxDB is installed?
Could someone help me with this?
There isn't a config option you can use to do that with InfluxDB itself. After starting up an instance, you can use the InfluxDB HTTP API to create the users. The curl command to do so would be the following:
curl "http://localhost:8086/query" --data-urlencode "q=CREATE USER myuser WITH PASSWORD 'mypass' WITH ALL PRIVILEGES"
Just run this command for each of the users you'd like to create. After that, you'll need to enable the auth value in the [http] section of the config.
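Since the question asks for both a database and a user, here is a minimal sketch combining the two statements (the database/user names and the localhost URL are placeholders; the curl line is commented out so the sketch runs without a live server):

```shell
# InfluxDB 1.x HTTP API: send one query per statement to /query.
INFLUX_URL="http://localhost:8086/query"
for q in "CREATE DATABASE mydb" \
         "CREATE USER myuser WITH PASSWORD 'mypass' WITH ALL PRIVILEGES"; do
  echo "POST $INFLUX_URL q=$q"
  # curl "$INFLUX_URL" --data-urlencode "q=$q"
done
```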
You can use Ansible to set up InfluxDB with your own recipe.
Here's the Ansible module documentation:
http://docs.ansible.com/ansible/influxdb_database_module.html
Or use any config/deploy manager you prefer; I'd do this any day instead of some SSH script or who knows what. For Puppet:
https://forge.puppet.com/tags/influxdb
For Chef:
https://github.com/bdangit/chef-influxdb
You can also use any of the above config managers to provision/manipulate your EC2 instance(s).
Use the admin token and this command (InfluxDB 2.3 CLI):
.\influx.exe user create -n yourusername -p yourpassword -o "your org name" --token admintokengoeshere

Assign Role to AWS EC2 Cluster via spark script

I'm not able to assign a role to an EC2 cluster via the Spark script spark/ec2/spark-ec2. I use the following command to start the cluster:
./spark-ec2 -k <key name> -i <aws .pem file> -s 2 -r eu-west-1 launch mycluster --instance-type=m4.large --instance-profile-name=myprofile
where myprofile is a testing profile with sufficient permissions.
I can see the instances in the ec2 console where they also have the correct role assigned.
I then proceed to ssh into the master instance with:
./spark-ec2 -k <key name> -i <aws .pem file> login mycluster
and with
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/myprofile
I can view my temporal security key, access key and a security token. However, running
aws s3api list-buckets
returns
"Message": "The AWS Access Key Id you provided does not exist in our records."
Retrieving the keys via the curl command and passing them to boto does not work either, giving a '403 permission denied'.
Am I missing something?
Please see the very similar question below. As I am not allowed to comment there, and I don't have the answer to it either, I made a new question. Maybe someone could comment to that person with a link to my question. Thanks.
Running Spark EC2 scripts with IAM role
OK, I had this problem for 3 days, and I solved it directly after posting the question. Do:
sudo yum update
This will update the AWS CLI, and after that roles seem to work.
I can even do in python:
from boto.s3.connection import S3Connection
conn = S3Connection()
bucket = conn.get_bucket('my_bucket')
keys = bucket.list()
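If the CLI is still stale, the temporary credentials from the metadata service can also be exported by hand. A sketch parsing a credentials document of the shape the metadata service returns (the values below are fake; in practice the JSON would come from the curl command shown in the question):

```shell
# Fake credentials document with the same fields the instance metadata
# service returns for an instance profile.
creds='{"AccessKeyId":"AKIAEXAMPLE","SecretAccessKey":"wJalrEXAMPLEKEY","Token":"FQoGEXAMPLETOKEN"}'
# In practice:
# creds=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/myprofile)

# Export the three values so the AWS CLI and boto pick them up.
export AWS_ACCESS_KEY_ID=$(echo "$creds"     | python3 -c 'import json,sys; print(json.load(sys.stdin)["AccessKeyId"])')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | python3 -c 'import json,sys; print(json.load(sys.stdin)["SecretAccessKey"])')
export AWS_SESSION_TOKEN=$(echo "$creds"     | python3 -c 'import json,sys; print(json.load(sys.stdin)["Token"])')
echo "$AWS_ACCESS_KEY_ID"   # AKIAEXAMPLE
```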

WARNING keystoneclient.middleware.auth_token [-] Configuring auth_uri to point to the public identity endpoint is required;

I have installed OpenStack following this.
I am trying to install Savanna following the tutorial from here
When I run this command
savanna-venv/bin/python savanna-venv/bin/savanna-api --config-file savanna-venv/etc/savanna.conf
I get this error:
WARNING keystoneclient.middleware.auth_token [-] Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint (7944) wsgi starting up on <IP>
Try connecting to the database:
mysql -u username -p
then run use mysql;
and then select user, host from user; and check the hosts and users assigned in the output. Reply with a screenshot to make this clearer.
Also share the entries of /etc/hosts.
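The warning itself concerns the [keystone_authtoken] settings read by the auth_token middleware: auth_uri should point at Keystone's public endpoint (typically port 5000), not the admin endpoint (35357). A hedged sketch of the relevant fragment for savanna.conf (host and credentials are placeholders; key names follow the keystoneclient middleware of that era):

```ini
[keystone_authtoken]
# Public identity endpoint, as the warning requests.
auth_uri = http://<KEYSTONE_HOST>:5000/v2.0/
# Admin endpoint the middleware uses to validate tokens.
auth_host = <KEYSTONE_HOST>
auth_port = 35357
auth_protocol = http
admin_user = savanna
admin_password = <PASSWORD>
admin_tenant_name = service
```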
