Fatal error when initializing core config - hyperledger-fabric

I am a Hyperledger rookie from China.
I followed the steps in the official documents.
My Fabric version is up to date (2020.10.10) and I get an error when I run:
./network.sh deployCC
I found an answer in the official documentation, but I still have an error in test-network.
I ran fabric-samples/fabcar/startFabric.sh and then ran peer chaincode query -C mychannel -n fabcar -c '{"Args":["queryAllCars"]}' in the test-network directory.
However, there is a new problem:
fatal error when initializing core config : Could not find config file. Please make sure that FABRIC_CFG_PATH is set to a path which contains core.yaml
I have already set this path, so why does this happen?
My English is not very good, I'm sorry :(

The error comes from this file:
https://github.com/hyperledger/fabric-samples/blob/v2.1.1/test-network/scripts/deployCC.sh
Check line 16, which is "FABRIC_CFG_PATH=$PWD/../config/", and make sure the config folder exists at that location.
Inside that config folder you should find core.yaml, configtx.yaml, and orderer.yaml.
After installing the samples, binaries, and Docker images, you can find this config folder in the "fabric-samples" directory.
To install samples, binaries and docker images, go to this link:
https://hyperledger-fabric.readthedocs.io/en/release-2.2/install.html#install-samples-binaries-and-docker-images

You might have forgotten to set the environment variable. In the command line, set
export FABRIC_CFG_PATH=$PWD/config/
or wherever your config directory is relative to your current working directory and then try again.
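If you want to sanity-check the variable before re-running the peer command, a minimal sketch is below. It uses a throwaway directory as a stand-in for the real fabric-samples/config folder, just to show the check itself:

```shell
# Sketch: confirm that FABRIC_CFG_PATH actually contains core.yaml before
# running peer commands. A temporary directory stands in for the real
# fabric-samples/config folder here.
cfg_dir="$(mktemp -d)"            # replace with your fabric-samples/config path
touch "$cfg_dir/core.yaml"        # the real core.yaml ships with the samples

export FABRIC_CFG_PATH="$cfg_dir"
if [ -f "$FABRIC_CFG_PATH/core.yaml" ]; then
  echo "config OK"
else
  echo "core.yaml missing from $FABRIC_CFG_PATH"
fi
```

If the check prints that core.yaml is missing, the peer binary will fail with exactly the "fatal error when initializing core config" message from the question.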

Related

Hyperledger Fabric change go path

My GOPATH currently points to /usr/local/go, but this IS NOT where I prefer to install projects.
What is the preferred way to point the go path at the Go installation while building projects from a completely different directory? Thanks
In Fabric, you install the chaincode from inside the cli container, so it does not matter where your GOPATH is mapped on your host machine.
Map your chaincode path to the container's GOPATH:
volumes:
  - ./chaincode/:/opt/gopath/src/github.com/
And then install and instantiate it.
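In context, that mapping sits under the cli service in your docker-compose file; a minimal sketch (the service name and host path are illustrative and should match your own network definition):

```yaml
services:
  cli:
    volumes:
      # the host's ./chaincode/ appears inside the container at the Go import path
      - ./chaincode/:/opt/gopath/src/github.com/
```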

How do I create a new project in Neos CMS?

I recently turned a computer into an Ubuntu server. I installed all dependencies following this article: http://neos.readthedocs.io/en/stable/GettingStarted/Installation.html
My version of Ubuntu (or Apache) did not have an "htdocs" folder as that article suggested, so I created a folder called newsletter in /var/www/html per this article: https://askubuntu.com/questions/683953/where-is-apache-web-root-directory-on-ubuntu
Then I try to complete step 2 of "Fundamental Instruction" using the following commands:
cd /your/htdocs/
php /path/to/composer.phar create-project neos/neos-base-distribution Neos
but it does not work.
Instead of typing "cd /your/htdocs", I navigate to /var/www/html.
I am getting "Could not open input file: /path/to/composer.phar".
I believe I already have Composer installed, so I don't really want to go through https://www.digitalocean.com/community/tutorials/how-to-install-and-use-composer-on-ubuntu-16-04
Should I change "/path/to/composer.phar"? Has the location changed?
It looks like Composer is not at the given path /path/to/composer.phar.
Composer is not included in a bare-bones Ubuntu install, so you will have to follow the setup instructions. If you just want to try Neos, check https://bitnami.com/stack/neos
Should I change "/path/to/composer.phar"? Has the location changed?
This path is just an example.
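A quick way to find out what to substitute for the example path is to check whether Composer is on your PATH (a globally installed Composer needs no .phar path at all). This is a sketch; the $HOME/composer.phar location is just one common place people keep the phar:

```shell
# Sketch: find where Composer actually lives before substituting it into the
# create-project command (a globally installed composer needs no .phar path).
if command -v composer >/dev/null 2>&1; then
  echo "composer is on PATH at: $(command -v composer)"
  # composer create-project neos/neos-base-distribution Neos
elif [ -f "$HOME/composer.phar" ]; then
  echo "found composer.phar in \$HOME"
  # php "$HOME/composer.phar" create-project neos/neos-base-distribution Neos
else
  echo "composer not found -- install it or locate composer.phar"
fi
```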

unable to generate Hyperledger network artifacts

I'm following this link to setup multi organizations for business network
I have cloned the git code and tried to setup my first network by following the instructions here
I have followed each and every step until I reached the command ./byfn.sh -m generate, which exits saying "cryptogen tool not found", as can be seen below.
text#blockchain-2:~/fabric-tools/fabric-samples/first-network$ ./byfn.sh -m generate
Generating certs and genesis block for channel 'mychannel' with CLI timeout of '10000' seconds and CLI delay of '3' seconds
Continue (y/n)? y
proceeding ...
cryptogen tool not found. exiting
How to fix this issue and proceed further?
Assuming you already have Docker and Docker Compose installed, you will also need to install some platform specific binaries (including cryptogen). If you have already installed these binaries then adding the folder containing the binaries to PATH will help.
These 2 documents below in the Hyperledger Fabric will help with the pre-requisites, but don't run the byfn.sh script - run that script from the clone of the sstone repo, and run it with the parameters specified in the Multi-Org Composer Tutorial
Here are the 2 Fabric pre-req docs:
Cryptogen and other binaries and
Fabric Pre-reqs
Make sure your bin and first-network folders are in the same directory, because cryptogen lives in bin:
/learn/hyperledger/fabric-samples$ ls
balance-transfer chaincode-docker-devmode first-network README.md
basic-network config high-throughput scripts
bin fabcar LICENSE
chaincode fabric-ca MAINTAINERS.md
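Scripts like byfn.sh look for cryptogen on your PATH, so another fix is to add the fabric-samples bin directory to PATH. The sketch below demonstrates the mechanism with a temporary stand-in binary; in practice the binaries come from the Fabric install script and the real export would be something like PATH="$PWD/../bin:$PATH":

```shell
# Sketch: add the directory containing cryptogen to PATH so byfn.sh can find it.
# A temporary stand-in binary is created here purely for demonstration.
bin_dir="$(mktemp -d)"                       # stand-in for fabric-samples/bin
printf '#!/bin/sh\necho cryptogen-found\n' > "$bin_dir/cryptogen"
chmod +x "$bin_dir/cryptogen"

export PATH="$bin_dir:$PATH"                 # real usage: export PATH="$PWD/../bin:$PATH"
cryptogen                                    # resolves now that the bin dir is on PATH
```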

Spark Installation Problems

I am following instructions from here:
https://www.datacamp.com/community/tutorials/apache-spark-python#gs.WEktovg
I downloaded a prebuilt version of Spark, untarred it, and moved it to /usr/local/spark.
According to the tutorial, this is all I should have to do.
Unfortunately, I cannot run the interactive shell, as it can't find the file.
When I run:
./bin/pyspark
I get
-bash: ./bin/pyspark: No such file or directory.
I also notice that installing it this way does not add anything to a system bin directory.
Is this tutorial wrong, or am I missing a trick?
You need to change your working directory to /usr/local/spark; then the command will work.
Also, untarring the archive will not put anything on your PATH. You need to do that manually by adding Spark's bin directory to your environment variables.
Change your working directory to /usr/local/spark and execute the command again. Hopefully this fixes the issue.
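The underlying issue is that ./bin/pyspark is a relative path, so it only resolves when the working directory is the Spark install root. The sketch below demonstrates this with a throwaway directory standing in for /usr/local/spark:

```shell
# Sketch: a relative path like ./bin/pyspark only works from the install root.
spark_home="$(mktemp -d)"                    # stand-in for /usr/local/spark
mkdir -p "$spark_home/bin"
printf '#!/bin/sh\necho pyspark-shell\n' > "$spark_home/bin/pyspark"
chmod +x "$spark_home/bin/pyspark"

cd "$(mktemp -d)"                            # some unrelated directory
./bin/pyspark 2>/dev/null || echo "No such file or directory"
cd "$spark_home"                             # the install root
./bin/pyspark                                # now the relative path resolves
```

Adding $spark_home/bin to PATH (e.g. in ~/.bashrc) makes pyspark callable from any directory.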

Process for building a package to be managed by an offline conda/puppet environment

I'm trying to build a package to be managed by an offline conda environment in Linux. I'm doing a dry run with py4j.
On my online build server:
I download the py4j recipe
and download the source distribution (py4j-0.8.2.1.tar.gz),
then copy the recipe and the source distribution to the offline puppet server.
On my offline puppet server:
tweak the recipe to point to my copy of the source distribution
$ conda build py4j
$ conda install --use-local py4j
$ conda index linux-64
conda index linux-64 writes the py4j configuration to repodata.json. I can see py4j is in repodata.json. And there's also a py4j-0.8.2.1-py27_0.json created under /opt/anaconda/conda-meta/
We have a custom channel mapped to /srv/www/yum/anaconda_pkgs/
$ cat .condarc
channels:
- http://10.1.20.10/yum/anaconda_pkgs/
I can see that py4j configuration is added to the following files:
./envs/_test/conda-meta/py4j-0.8.2.1-py27_0.json
./pkgs/cache/ef2e2e6cbda49e8aeeea0ae0164dfc71.json
./pkgs/py4j-0.8.2.1-py27_0/info/recipe.json
./pkgs/py4j-0.8.2.1-py27_0/info/index.json
./conda-bld/linux-64/repodata.json ./conda-bld/linux-64/.index.json
./conda-meta/py4j-0.8.2.1-py27_0.json
Can someone explain what each of these json files is supposed to do?
I can also see that there is a repodata.json and .index.json in /srv/www/yum/anaconda_pkgs/linux-64 that were updated but don't have a configuration for py4j.
I manually copied my py4j-0.8.2.1.tar.gz into my custom repo (channel) in /srv/www/yum/anaconda_pkgs/linux-64.
I still can't do conda install --use-local py4j from host machines or puppet agent -t. I get the following:
err: /Stage[main]/Anaconda::Packages/Anaconda::Install_pkg[py4j]/Package[py4j]/ensure: change from absent to present failed: Execution of '/opt/anaconda/bin/conda install --yes --quiet py4j' returned 1: Fetching package metadata: ..
Error: No packages found in current linux-64 channels matching: py4j
You can search for this package on Binstar with
binstar search -t conda py4j
--use-local only searches the conda-bld/linux-64 channel. If you move the package to another local channel, you will need to add it to your ~/.condarc channels as a file:// url.
Whenever you add a package to a local repo, you need to run conda index on that directory. This will regenerate the repodata.json file.
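For example, the file:// channel can sit alongside the HTTP one in .condarc (paths taken from the question; adjust to your layout):

```yaml
channels:
  - file:///srv/www/yum/anaconda_pkgs/
  - http://10.1.20.10/yum/anaconda_pkgs/
```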
I'll answer your question about the various json files, but note that you really don't need to care about any of these.
./envs/_test/conda-meta/py4j-0.8.2.1-py27_0.json
This is a remnant from the build process. Once the package is built, it is installed into a _test environment so that the actions in the test section of your meta.yaml can be run. Each environment has a conda-meta directory that contains the metadata for each package installed in that environment.
./pkgs/cache/ef2e2e6cbda49e8aeeea0ae0164dfc71.json
Everything in the pkgs directory is a cache. This is a local cache of the channel repodata, so that conda doesn't have to redownload it when it is "fetching package metadata" if it hasn't changed.
./pkgs/py4j-0.8.2.1-py27_0/info/recipe.json
Again, this is a cache. When the py4j package is installed anywhere, it is extracted into the pkgs directory. Inside the package, in the info directory, is all the metadata for the package. This file is the metadata from the recipe that was used to create the package. Conda doesn't use this metadata anywhere; it is just included for convenience.
./pkgs/py4j-0.8.2.1-py27_0/info/index.json
This is the metadata of the package included in the package itself. It's what conda index will use to create the repodata.json.
./conda-bld/linux-64/repodata.json
This is the repo metadata for the special channel of packages you have built (the channel used with --use-local, and used by conda build automatically).
./conda-bld/linux-64/.index.json
This is a special cache file used internally by conda index.
./conda-meta/py4j-0.8.2.1-py27_0.json
This is similar to the first one. It's the environment metadata for the package that you installed into your root environment.