Vendored Chaincode has false dependencies - hyperledger-fabric

I have chaincode with the following directory structure
$GOPATH/myproject/chaincode/mycc/go
├── mycc.go
├── chaincode
│   └── chaincode.go
└── vendor
    ├── github.com
    └── ...
Because of my usage of Hyperledger's cid package, I use vendoring and have the vendor directory next to the chaincode. Now, for testability, mycc.go only contains the main function:
package main

import (
    "myproject/chaincode/mycc/go/chaincode"

    "github.com/hyperledger/fabric/core/chaincode/shim"
)

// logger is declared here so the snippet compiles on its own.
var logger = shim.NewLogger("mycc")

func main() {
    err := shim.Start(new(chaincode.MyChaincode))
    if err != nil {
        logger.Error(err.Error())
    }
}
The chaincode.go file implements the rest of the chaincode, including the MyChaincode struct with Init, Invoke, etc. The relevant imports are identical to those in mycc.go:
"github.com/hyperledger/fabric/core/chaincode/shim"
During the instantiation of the chaincode, something with the dependencies seems to be mixed up, because I receive the error message:
*chaincode.MyChaincode does not implement "chaincode/mycc/go/vendor/github.com/hyperledger/fabric/core/chaincode/shim".Chaincode (wrong type for Init method)
have Init("chaincode/mycc/go/vendor/myproject/chaincode/mycc/go/vendor/github.com/hyperledger/fabric/core/chaincode/shim".ChaincodeStubInterface) "chaincode/approvalcc/go/vendor/ma/chaincode/approvalcc/go/vendor/github.com/hyperledger/fabric/protos/peer".Response
want Init("chaincode/mycc/go/vendor/github.com/hyperledger/fabric/core/chaincode/shim".ChaincodeStubInterface) "chaincode/mycc/go/vendor/github.com/hyperledger/fabric/protos/peer".Response
So clearly the import in the inner chaincode package is being resolved incorrectly, with the vendor directory appearing twice in the path.

The fabric-ccenv container which builds chaincode attempts to be "helpful" by including shim in the GOPATH inside the container. It also ends up including the shim/ext/... folders, but unfortunately does not properly include their transitive dependencies.
When you combine this with how the chaincode install/package commands also attempt to be helpful, and with your attempt to vendor, things get ugly.
I actually just pushed a fix targeted for 1.4.2 to address the fabric-ccenv issue.

It seems like your Init method is not being resolved properly, so please check whether the chaincode was installed and instantiated correctly. You can verify that by looking for the instantiated chaincode's Docker container.
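For example, chaincode containers are normally named dev-<peer>-<chaincode name>-<version>, so a quick way to verify (a rough sketch, assuming the default naming) is:
docker ps --filter "name=dev-" --format "{{.Names}}: {{.Status}}"
If no matching container shows up, instantiation never completed.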

Related

After packaging chaincode, install command giving error reading as gzip stream: gzip: invalid header

Hyperledger-Fabric V2.3.x
Peer V2.3.3
Go V1.16
Full Error:
Error: chaincode install failed with status: 500 - failed to invoke backing implementation of 'InstallChaincode': could not parse as a chaincode install package: error reading as gzip stream: gzip: invalid header
My Setup:
CORE_PEER_TLS_ROOTCERT_FILE=./crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt
CORE_PEER_MSPCONFIGPATH=./crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
ORDERER_CA=./crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
CORE_PEER_ADDRESS=peer1.org1.example.com:7051
Previously invoked command (successful):
peer chaincode package mycc.tar.gz -p . -n mycc --lang golang -v 1.0
Command (for which I got error):
peer lifecycle chaincode install mycc.tar.gz
Please comment if any other information is required.
go.mod (used to install dependencies)
module github.com/chaincode

go 1.16

require (
    github.com/golang/protobuf v1.3.2 // indirect
    github.com/hyperledger/fabric-chaincode-go v0.0.0-20200424173110-d7076418f212 // indirect
    github.com/hyperledger/fabric-contract-api-go v1.1.0
    github.com/hyperledger/fabric-protos-go v0.0.0-20200424173316-dd554ba3746e // indirect
    github.com/stretchr/testify v1.5.1 // indirect
    golang.org/x/tools v0.1.7 // indirect
)
chaincode.go (few lines)
package chaincode

import (
    "encoding/json"
    "fmt"

    "github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// SmartContract provides functions for managing an Asset
type SmartContract struct {
    contractapi.Contract
}

// Asset describes basic details of what makes up a simple asset.
// Insert struct fields in alphabetic order => to achieve determinism across languages.
// Golang keeps the order when marshaling to JSON but doesn't order automatically.
type Asset struct {
    AppraisedValue int    `json:"AppraisedValue"`
    Color          string `json:"Color"`
    ID             string `json:"ID"`
    Owner          string `json:"Owner"`
    Size           int    `json:"Size"`
}
The problem was with the packages I installed using the command:
GO111MODULE=on go mod vendor
I had defined the wrong Go version (1.14) in go.mod instead of 1.16, which was the version of Go actually installed.
(Incompatible packages were installed, so the generated package file was not a correct gzip file.)
Solution:
1. Delete the vendor folder.
2. Write go.mod with the correct Go version, or use go mod init (https://golang.org/doc/tutorial/create-module).
3. Run GO111MODULE=on go mod vendor.
4. Run the peer lifecycle chaincode package command.
5. Run the peer lifecycle chaincode install command.
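Putting the steps together, a minimal sketch of the corrected sequence (the --label value and paths are illustrative, not taken from the original setup):
# run inside the chaincode module directory
rm -rf vendor
go mod edit -go=1.16        # make the go directive match the installed toolchain
GO111MODULE=on go mod vendor
peer lifecycle chaincode package mycc.tar.gz --path . --lang golang --label mycc_1.0
peer lifecycle chaincode install mycc.tar.gz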

terraform init not working when specifying modules

I am new to Terraform and trying to fix a small issue which I am facing when testing modules.
On my local computer I have two sibling folders, main_code and storage.
I have the below code at the storage folder level:
#-------storage/main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "my-first-terraform-bucket" {
  bucket        = "first-terraform-bucket"
  acl           = "private"
  force_destroy = true
}
And the below snippet at the main_code level references the storage module:
#-------main_code/main.tf
module "storage" {
  source = "../storage"
}
When I issue terraform init / plan / apply from the storage folder it works absolutely fine and Terraform creates the S3 bucket.
But when I try the same from the main_code folder I get the below error:
main_code@DFW11-8041WL3: terraform init
Initializing modules...
- module.storage
Error downloading modules: Error loading modules: module storage: No Terraform configuration files found in directory: .terraform/modules/0d1a7f4efdea90caaf99886fa2f65e95
I have read many issue boards on Stack Overflow and other GitHub issue forums, but nothing helped resolve this. Not sure what I am missing!
Just update the existing modules by running terraform get --update. If this does not work, delete the .terraform folder.
I agree with the comments from @rclement.
There are several ways to troubleshoot Terraform module issues.
Clean the .terraform folder and rerun terraform init.
This is always the first choice, but it takes time: the next terraform init installs all providers and modules again.
If you don't want to clean .terraform, to save deployment time you can run terraform get --update=true.
In most cases you have changed something in the modules and they need to be refreshed.
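A rough sketch of both options, run from the directory holding the root configuration (main_code in this case):
# option 1: start clean; providers and modules are downloaded again
rm -rf .terraform
terraform init

# option 2: refresh only the modules, keeping the already-installed providers
terraform get --update=true
terraform plan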
I had a similar issue, but the problem for me was that the module I had created was looking for a providers.tf, so I had to add one to the module as well and it worked.
├── main.tf
├── modules
│   └── droplets
│       ├── main.tf
│       ├── providers.tf
│       └── variables.tf
└── variables.tf
My provider configuration was previously only present at the root level, which the module could not use, and that was the issue for me.
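For illustration only (reusing the aws provider from the question's storage example, since the actual droplets configuration is not relevant here), a module-level providers.tf simply declares the provider the module needs:
# modules/<module-name>/providers.tf (placeholder content)
provider "aws" {
  region = "us-east-1"
}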

Hyperledger Fabric can't find go files when building chaincode

Problem
I'm getting the following error on a fabric-peer: Failed to generate platform-specific docker build: Error returned from build: 1 "can't load package: package chaincodes/simple: no buildable Go source files in /chaincode/input/src/chaincodes/simple.
Context
I'm trying to instantiate a chaincode package after having successfully installed it.
Both the install and instantiate proposals are created by the Fabric NodeJS SDK (fabric-client).
Steps leading up to the problem
Package a go file using the fabric-client (succeeds)
Create an install proposal and send it to the peer (succeeds)
Create an instantiate proposal and send it to the peer (fails with the above error message)
Steps taken to solve the problem
I tried to assert how the chaincode container create process works by reading the code.
What I got from it was the following:
- The chaincode is built using the fabric-ccenv image
- It loads a .tar as an InputStream (the package?)
I tried adding the files to the go-path but I still couldn't get it to work.
What I want to know
- Where does the chaincode building process expect these files to be?
- Why do I need to provide the files when I've previously sent a package of chaincode inside an InstallRequest?
Further information
I'm also getting an error about an MSP being unknown. Something along the lines of: Error: MSP Org1MSP is unknown. This happens during deserialization of the proposal.
Which is weird because I'm 100% certain that MSP exists. What I'm not certain about is whether I need to add anchor peers to the channel I'm installing and instantiating the chaincode on in order for the MSP to be found.
I thought that happens during channel creation.
Versions
This happens in the following versions:
- 1.0.0
- 1.0.6
Please do not suggest I try version 1.1 because I cannot upgrade easily.
Please advise.
Above was caused by an actual unknown MSP.
Double check the profile/profiles in configtx.yaml used for creating the channel and the genesis block for orderers. I had a mismatch between those.
I read a related issue [FAB-7952] in Fabric's issue manager and it made me think something else was going on, instead of an actual unknown MSP.
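To double-check that, it can help to inspect what actually ended up in the generated artifacts; a rough sketch, with placeholder file names:
configtxgen -inspectBlock ./channel-artifacts/genesis.block
configtxgen -inspectChannelCreateTx ./channel-artifacts/channel.tx
Both commands print the decoded configuration, so a missing or misnamed MSP definition shows up there.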

How to get JSF working with spring-boot app in docker container?

I have an app built with spring-boot 1.4.0 and JSF 2.2 (Mojarra 2.2.13) running on embedded tomcat (8.5).
When I run the app locally, it works fine - regardless of whether I start it from my IDE, with mvn spring-boot:run or java -jar target/myapp.jar
However, when I start the app inside a docker container (using docker 1.12.1) everything works except for the JSF pages. For example the spring-boot-actuator pages like /health work just fine but when trying to access /home.xhtml (a JSF page) I get:
Whitelabel Error Page
This application has no explicit mapping for /error, so you are seeing this as a fallback.
Wed Oct 12 09:33:33 GMT 2016
There was an unexpected error (type=Not Found, status=404).
/home.xhtml Not Found in ExternalContext as a Resource
I use io.fabric8:docker-maven-plugin to build the docker image (based on alpine with openjdk-8-jre), which includes the build artifact under /maven inside the docker container and java -jar /maven/${project.artifactId}-${project.version}.jar to start the app.
I am using port 9000 instead of the standard tomcat port (8080) and this is also exposed by the docker image.
I already compared both the local jar and the one included in the docker container and they both have the same content.
Any help would be appreciated.
Thanks in advance
The XHTML files and the rest of the JSF files were placed in the wrong directory.
At least when using <packaging>jar</packaging>, it appears that all JSF resources must be placed under src/main/resources/META-INF instead of src/main/webapp, where most IDEs will automatically place them.
When the JSF files are in src/main/webapp they do not get included (or not in the right place) in the Spring Boot repackaged jar file.
My src/main/resources looks like this now:
src
└── main
    └── resources
        ├── application.yml
        └── META-INF
            ├── faces-config.xml
            └── resources
                ├── administration
                │   └── userlist.xhtml
                ├── css
                │   └── default.css
                ├── images
                │   └── logo.png
                └── home.xhtml
I've had exactly the same issue. Running the Spring Boot application directly from Maven or from a packaged WAR file worked fine, but running it from a Docker container led to the same problem.
Actually, the problem for me was that accessing dynamic resources (JSPs or the like) always led to an attempt to show the error page, which could not be found and therefore returned a 404, while static content was returned correctly.
It sounds silly, but it turned out for me that it matters whether you call the packaged WAR file ".war" or ".jar" when you start it. I used a Docker example for Spring Boot which (for some reason) names the WAR file "app.jar". If you do that, you run into this issue in the Docker container. If you call it "app.war" instead, it works fine.
This has nothing to do with Docker, by the way. The same happens if you try to start the application directly on the host when it has a .jar extension (unless you start it in a location where the needed resources are lying around in an extracted manner).
Starting my application with
java -Duser.timezone=UTC -Djava.security.egd=file:/dev/./urandom -jar application.jar
leads to the error. Renaming from .jar to .war and using
java -Duser.timezone=UTC -Djava.security.egd=file:/dev/./urandom -jar application.war
made it work.
What worked for me was having Maven copy the webapp folder to META-INF/resources by adding a resource entry to the build:
<resource>
    <directory>src/main/webapp</directory>
    <targetPath>META-INF/resources</targetPath>
</resource>
and then changing the documentRoot to the path where it is available in the Docker container:
@Bean
public WebServerFactoryCustomizer<ConfigurableServletWebServerFactory> webServerFactoryCustomizer() {
    return factory -> factory.setDocumentRoot(new File("/workspace/META-INF/resources"));
}
Now I can use JSF pages while using spring-boot:build-image and jar packaging.
PS: I still had a bunch of exceptions, which went away after upgrading Spring Boot and Liquibase to the newest versions, but I think that does not belong to this topic.

Puppet classes with environment directories

I am new to Puppet and would like to avoid some of the common issues that I see, and to get away from using import statements since they are being deprecated. I am starting with the very simple task of creating a class that copies a file to a single Puppet agent.
So I have this on the master:
/etc/puppet/environments/production
/etc/puppet/environments/production/modules
/etc/puppet/environments/production/manifests
/etc/puppet/environments/production/files
I am trying to create node definitions in a file called nodes.pp in the manifests directory and use a class that I have defined (class is test_monitor) in a module called test:
node /^web\d+.*.net/ {
  include test_monitor
}
However when I run puppet agent -t on the agent I get :
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class test_monitor for server on node server
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
What is the proper way to configure this to work. I would like to have node definitions in a file or files which can have access to classes I build in custom modules.
Here is my puppet.conf:
[main]
environmentpath = $confdir/environments
default_manifest = $confdir/environments/production/manifests
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
factpath=$vardir/lib/facter
[master]
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
I know this is probably something stupid that I am not doing correctly or have misconfigured, but I can't seem to get it to work. Any help is appreciated! To be clear, I am just trying to keep things clean and have classes in separate files, with specific node types also in their own files. I have a small-to-medium-size environment (approx. 150 servers in a data center).
Let me guess: maybe the test module has the wrong structure. You need some subfolders and files under the modules folder:
└── test
    ├── files
    ├── manifests
    │   ├── init.pp
    │   └── monitor.pp
    └── tests
        └── init.pp
I recommend changing from test_monitor to test::monitor; that makes more sense to me. If you do need to use test_monitor, you need a test_monitor module or a test_monitor.pp file.
node /^web\d+.*.net/ {
  include test::monitor
}
Then put the monitor tasks in the monitor.pp file.
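For the original goal of copying a file to an agent, monitor.pp could look roughly like this (a minimal sketch; the file name and target path are made up for illustration):
# modules/test/manifests/monitor.pp
class test::monitor {
  # ships files/monitor.conf from the test module to the agent
  file { '/etc/monitor.conf':
    ensure => file,
    source => 'puppet:///modules/test/monitor.conf',
  }
}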
In the end it was as simple as adding the proper module path to puppet.conf:
basemodulepath = $confdir/environments/production/modules
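With that line added, the [main] section of the puppet.conf shown in the question looks like this:
[main]
environmentpath = $confdir/environments
default_manifest = $confdir/environments/production/manifests
basemodulepath = $confdir/environments/production/modules
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
factpath=$vardir/lib/facter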
